Stephen M. Kuebler Archives | UCF News – Central Florida Research, Arts, Technology, Student Life and College News, Stories and More – Fri, 08 Apr 2022 21:18:04 +0000

'Well, It's Not Illegal!' /news/well-not-illegal/ Wed, 22 May 2019 13:00:07 +0000 /news/?p=96917 Some things are immoral yet perfectly legal, while other things may be illegal but not necessarily immoral.

How often have you heard someone say: "Well, it's not illegal!"

The statement is frequently used to justify an action that is morally questionable but not formally prevented by any law or rule. We hear it a lot these days, particularly in connection with politicians, their business dealings, campaign finance, election processes and so on.

But it's not just the best defense in Washington, D.C. We also hear it in our workplaces, neighborhoods and social groups when someone wants to wriggle free from the discomfort of a bad choice that has come to light.

Rules and laws exist to protect and promote the function of communities. Yet, here lies one of many perennial chicken-or-egg problems: Which came first, compliance or ethics? We might tend to think that laws originate from moral convictions about what is right and wrong. But there are many interesting examples that challenge the perception that laws extend from morals.

For example, some things are immoral, yet perfectly legal. You can probably come up with many powerful examples of your own, but we'll just offer a few. First, if you don't tip at a restaurant, that's not illegal; but it seems like a crime, especially when the service is good. Another example: Wealthy people and corporations are often hotly criticized for using loopholes, off-shore accounts and other schemes to avoid taxes. Yet businesses rely more heavily than individuals on publicly funded resources to generate wealth, including roads to ship goods and services, energy and communication infrastructure, law enforcement, national defense, and bureaucracies that support state, national and international trade.

So trying to avoid paying taxes hardly seems moral, but there are many legal ways to get away with it – legal, but immoral. Our own history offers the best and saddest example. Before the Civil War, slavery was legal in the U.S., but certainly not moral.


And there are many examples of the reverse, where an action might be illegal but not necessarily immoral. For example, in the 1970s the federal highway speed limit was dropped to 55 miles per hour, not to save lives but to decrease national consumption of petroleum. So speeding then was illegal, but could we really regard it as immoral?

Some examples depend on cultural framing. Consider Singapore, where it's illegal to sell gum – not because it's immoral, but to help promote public cleanliness. And up until very recently, it was illegal for women to drive in Saudi Arabia, in part because it was regarded as religiously immoral. This stands in stark contrast to Western mores, where driving is commonplace and, in the U.S., a rite of passage for 16-year-olds, women included.

So what is the relationship between legality and morality, between compliance and ethics? And what are the implications of giving someone a pass when they do something that is legal but that makes us flinch morally?

We certainly have an expectation that people will act morally and ethically, even when there is no law or legal enforcement to bring consequences. We particularly hope politicians will exceed legal standards and make ethical choices, because they are elected leaders who are meant to promote the best interests of all citizens.

Fundamentally, we are all supposed to do what is right, not just follow the rules – and we even learn that as children. Think about it. Young children often claim: "But you didn't say I couldn't!" We tell our kids that does not make their actions right. So why would we expect anything less of adults, particularly elected leaders?

But more alarming than a politician skirting the rules is the ease with which their supporters often invoke: "Well, it's not illegal." Let's go back to the schoolyard for some useful reminders of what our social standards are. We are alarmed by bullying, and not only do we tell kids not to bully, we also rebuke children who turn a blind eye to it. We tell our kids to speak out, defend the weak, and so on. Similarly, whistle-blowing is being promoted by many national organizations, universities and even the federal government.

We want to catch the bad guys and promote justice. But how can that happen if we don't speak up and call out immoral behavior, even when it is legal? Perhaps our willingness to give people a pass when they do bad things that are nonetheless legal is undermining the likelihood that people will follow the rules, much less the spirit of those rules.

Stephen M. Kuebler is an associate professor of chemistry and optics in UCF's Department of Chemistry and the College of Optics and Photonics. He can be reached at Stephen.Kuebler@ucf.edu.

Jonathan Beever is an assistant professor of ethics and digital culture in UCF's Department of Philosophy and the Texts & Technology doctoral program. He can be reached at Jonathan.Beever@ucf.edu.

Whom Should Self-Driving Cars Be Programmed to Protect? /news/self-driving-cars-programmed-protect/ Wed, 16 Jan 2019 16:24:36 +0000 /news/?p=93710 We need to be thinking more about the ethics of new technologies before they hit showroom floors.

Technology advances at breakneck speed. That’s exciting to early adopters, who can’t wait to get their hands on the latest piece of tech. For some, the rapid onslaught of technology is frustrating. But there are bigger issues that need our attention.

Economic pressures often move new technologies into the consumer space before people get a chance – or make the effort – to weigh the pros and cons. Time and again, society addresses the ethics of a new technology and makes new rules only after it's in place and problems have emerged. There are examples in the news every day, like facial-recognition systems, gene editing, biobanking and data harvesting via social media. But we want to focus here on the problem of self-driving vehicles.

Artificial intelligence and advanced sensors are making self-driving vehicles a reality. There could be benefits. Self-driving vehicles would free up time for work, texting and talking on the phone. They could be safer if the technology is robust. But there may be downsides as well. For example, according to the American Trucking Associations, there are over 3.5 million truck drivers, and they stand to lose their jobs when self-driving trucks appear.

Self-driving vehicles are on the road now being field-tested, doing work and – sometimes – having rocks thrown at them! In December, police in Chandler, Arizona, reported 21 cases of adults throwing rocks, slashing tires and even pointing guns at self-driving cars. Citizens were angered that the company Waymo was testing cars in their neighborhoods, potentially putting them at risk, and developing machines that could replace them.


But beyond economics – and emotions – there's a centrally important moral question with self-driving vehicles. Whom will they be programmed to protect? Two people have already been killed by self-driving cars during road-testing, and there will certainly be more fatalities. Even if we assume self-driving vehicles will be more predictable and reliable than humans, that predictability makes them seem, well, insensitively cold. In circumstances where an accident is unavoidable, the computer has to "choose" between putting its passengers at risk or risking other drivers and even pedestrians. And by "choose" we mean calculate. So how do programmers decide who becomes a casualty?
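To see what "calculate" means here, consider a deliberately simplified sketch. It is purely illustrative – no real autonomous-driving stack is structured this way, and the maneuver names, casualty estimates and weights below are all invented for the example:

```python
# Toy model of an unavoidable-crash "choice" reduced to a calculation.
# All scenarios and numbers here are invented for illustration only.

def expected_harm(option, passenger_weight=1.0, bystander_weight=1.0):
    """Weighted expected casualties for one maneuver."""
    return (passenger_weight * option["passenger_casualties"]
            + bystander_weight * option["bystander_casualties"])

def choose_maneuver(options, passenger_weight=1.0, bystander_weight=1.0):
    """The car's 'decision': pick the maneuver with the lowest weighted harm."""
    return min(options, key=lambda o: expected_harm(o, passenger_weight, bystander_weight))

options = [
    {"name": "brake straight", "passenger_casualties": 0.1, "bystander_casualties": 2.0},
    {"name": "swerve left",    "passenger_casualties": 1.0, "bystander_casualties": 0.0},
]

# With equal weights, the car "chooses" to swerve, sacrificing its passenger.
print(choose_maneuver(options)["name"])  # swerve left
```

Every moral judgment hides in the weights: raise `passenger_weight` enough and the same code protects the passenger instead. The programmer does not escape the moral question; the code simply encodes an answer to it.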

The ethical dilemma of self-driving cars represents what philosophers know as a Trolley Problem. These problems have endless variations, but the gist is something like this: Imagine a trolley carrying five people on a track heading toward a gorge where the bridge is out. A switch can redirect the trolley safely onto a second track. Unfortunately, a person is tied to the second track. Pulling the lever to switch the tracks will save five people from certain death, but kill the person tied to the second track.

What would you do? These trolley cases are problems because they set up conditions that force an agent to select between what seem like two bad choices. Either the agent allows several people to die (which seems immoral), or they intentionally cause someone to die (which seems equally, if differently, immoral).

This thought experiment is powerful because of its flexibility. If you tweak the problem a little, the answers change. For example, people are less likely to switch the trolley if you say the person tied to the track is young and vibrant, whereas the five on the trolley are very old and terminally ill. Or if you say the person on the track is a close relative, people are much more reluctant to pull the lever.

The technology of self-driving vehicles shifts the Trolley Problem from the abstract to the eerily real. How should a self-driving vehicle respond in a situation where rapidly swerving to avoid a crowd would save many lives, but kill the passenger?


Writing for Science in 2016, psychologist Joshua Greene discusses what he calls "our driverless dilemma."

So how will self-driving vehicles be programmed to handle accidents? Who decides how they are programmed? Is it ethical for a company to offer two versions of the software – say, a gold package that saves the most lives, or a platinum package that saves the passenger? That is a moral dilemma for both the manufacturer and the purchaser.
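Hypothetically, such a two-tier offering could amount to little more than a configuration switch. The package names and risk numbers below are ours, invented to make the point, not anything a manufacturer actually ships:

```python
# Hypothetical "ethics package" switch: identical vehicle code, two moral policies.
# All names and numbers are invented for illustration only.

POLICIES = {
    "gold":     {"passenger_weight": 1.0},   # weighs every life equally
    "platinum": {"passenger_weight": 10.0},  # heavily favors the paying passenger
}

def weighted_risk(option, passenger_weight):
    """Risk score for one maneuver under a given passenger weighting."""
    return passenger_weight * option["passenger_risk"] + option["bystander_risk"]

def choose(options, package):
    """Same decision procedure; only the purchased policy differs."""
    weight = POLICIES[package]["passenger_weight"]
    return min(options, key=lambda o: weighted_risk(o, weight))

options = [
    {"name": "swerve (risk passenger)", "passenger_risk": 1.0, "bystander_risk": 0.0},
    {"name": "continue (risk crowd)",   "passenger_risk": 0.0, "bystander_risk": 3.0},
]

print(choose(options, "gold")["name"])      # swerve (risk passenger)
print(choose(options, "platinum")["name"])  # continue (risk crowd)
```

The one-line difference between the tiers is exactly what makes the scenario morally uncomfortable: who bears the risk becomes a product feature.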

Some will argue the Trolley Problem is moot because self-driving vehicles could communicate with one another and avoid no-win situations. But for that to work, we have to share personal data about where we are, when we travel, and where we are going. Advancing technologies like self-driving vehicles and DNA testing set up unexpected trade-offs between public safety and privacy rights. These issues are complex, but also rich in their potential to force us to reflect on and define our social values.

Ethics and moral philosophy provide ways to navigate the murky waters churned by advancing technology. And many companies and organizations do look to ethicists for answers to these questions. But our firm belief is that the public needs to participate in the discussion.

We have our say when we elect politicians who legislate public policy, and when we purchase or do not purchase products with new technologies. But we need to be more proactive, thinking about and weighing in on the ethics of new technologies before they hit showroom floors. We need to be engaged stakeholders in a technology-driven society, and not just consumers awaiting the next version of a phone.

As a society, we need to cultivate ethical literacy and be proactive in deciding how technologies are implemented – before they run us over.


Following Rules and Doing the Right Thing Aren't Necessarily the Same /news/following-rules-right-thing-arent-necessarily/ Wed, 14 Nov 2018 14:10:12 +0000 /news/?p=92113 A wise person – or at least some person – once said: "If you don't like following rules, just break some … You are sure to end up with more."

Nobody likes having to follow rules. In any major organization – be it in the private or public sector – there are lots of rules and regulations to follow. The bureaucracy associated with rules can sometimes feel crushing, and compliance has become an industry unto itself. Countless hours in the professional world are spent on regulations training, enforcement and documenting compliance.

Of course, rules and regulations exist because someone, at some time, did something unethical – or at the very least, someone at some time imagined a world in which someone would do that unethical thing. Organizations certainly need rules, or something that keeps stakeholders operating on the straight and narrow. There are many past and recent examples of unethical choices. Funds are misspent. Personal information becomes publicly available. Environments are harmed. And when people within an organization make poor choices, there invariably follows a cry for more rules and regulations.

Compliance-based governance is one way to maintain proper functioning of a complex organization. Yet, compliance is regularly met with blank stares, nodding heads, and frustration over red tape. So is there a better way to ensure that organizations function ethically?

We explore why people should care about ethics, whether and how ethics can be explicitly taught, and how one cultivates an ethical culture within an organization. We think it is important to recognize that rules and ethics are distinct. Adding more rules and regulations does not always prevent unethical choices and bad outcomes. Adding rules certainly increases bureaucracy. And rules are most often reactive rather than proactive; they originate as a response to unethical behavior, usually with punitive restrictions on action.

Importantly, what is missing from compliance and regulatory structures are the "whys": the justifications that help community members understand why rules are ethically important (provided, of course, that they are ethically important). Without such justification, individuals are apt to follow the notorious Capt. Barbossa of the Pirates of the Caribbean film series in believing that rules are "more what you'd call 'guidelines' than actual rules." And nobody wants a community of pirates … except other pirates.

Ethics education and training provides that important justification. And we know it works. There is a growing body of evidence that organizations and their members make better choices when they have explicit training in ethics. Note that this is not the same as learning about rules and compliance. Ethics training is about actively engaging stakeholders in thinking about how their actions impact others, both within the organization and in the broader communities in which they operate. Ethics education is about creating the space to think about the underlying justification for rules and regulations, and taking up that opportunity.


Here is a concrete example: At UCF, we want our faculty and students to engage in research ethically. We can and do talk about important rules – such as the strict prohibitions against plagiarism, falsifying information and fabricating data. But we also lead workshops, discussion groups and other modes of formative training that focus on the underlying ethics behind these rules. We are working to give each other space to think.

Reviewing and discussing case studies is a particularly effective means of teaching ethics, because participants can discover not only the sequence of events that led to poor outcomes (and more rules), but also the subtleties of how limited information, insufficient consultation or incomplete consideration of downstream impacts enables people to make poor choices. It is equally useful to consider cases in which people did the right thing and made tough choices that may not have led to short-term professional gains, yet upheld high ethical standards and generated greater societal benefit.

By shifting the emphasis in training from rules and compliance toward a focus on core values, students, faculty and all participants can develop an innate recognition of the need to always operate ethically. This furthers the goals of UCF by cultivating a culture of ethical behavior that enables collaboration, sustains research and ensures that those outside UCF hold our products in high regard – particularly our main product, which is well-trained students.

By thinking actively about the ethical underpinning of our work, we are cultivating a culture of ethical behavior that enables members of our communities to choose well, even when there are no clear governing rules. Engaging ethics encourages each of us to think about the culture of our organizations and how individual actions shape its integrity. It helps ensure that we are following the rules, because we know that they are encouraging us to do the right thing in the first place.


The UCF Forum is a weekly series of opinion columns presented by UCF Communications & Marketing. A new column is posted each Wednesday at /news/ and then broadcast between 7:50 and 8 a.m. Sunday on WUCF-FM (89.9). The columns are the opinions of the writers, who serve on the UCF Forum panel of faculty members, staffers and students for a year.

Can't We Make Better Decisions to Ensure Ethical Outcomes? /news/cant-make-better-decisions-ensure-ethical-outcomes/ Wed, 12 Sep 2018 16:38:39 +0000 /news/?p=90481 Ethics is not just for deep philosophical discussion. Check out the news on any given day and you are apt to find a report that makes you wish people acted more ethically.

Our contributions to the UCF Forum are a series of conversations about ethics. We are exploring why people should care about ethics, whether and how ethics can be explicitly taught, and how one cultivates an ethical culture within an organization.

If we think about unethical behavior, our first instinct might be to point fingers at politicians and governments. But these are easy targets. There are many other examples in which one or more people made an unethical choice by breaking laws or explicit policies. Think about the scandals surrounding diesel vehicles with rigged emissions systems; the water supply of Flint, Michigan; or discredited reports that erroneously link autism and vaccinations.

But there are also important examples in which no explicit law or policy was broken, and yet a poor choice by one or more individuals led to harmful outcomes. Think about the management practices at NASA that led to the Challenger disaster; the creation and propagation of fake news; and how data-sharing by some firms doing DNA testing has weakened public trust.

Frequently, poor outcomes result not from malicious intent or a bad actor, but from a choice that seemed right at the time and later turned out to have unethical implications. This can happen when decisions are made with limited information, insufficient consultation, or inadequate consideration of downstream effects.

Social media provides one of the best and most timely examples. The creators of social media platforms may not have broken any laws, but clearly they did not think through the broader ethical implications of their services, and how these could become platforms for digital misinformation.

We and many others working in academic and professional ethics are asking, “What training, structures, and decision-making skills could lead to better choices and avoid unethical outcomes? And can we structure training and education, either in the workplace or in academia, to help cultivate ethical awareness that leads to better choices?”

We come to this challenge from different but connected disciplines. (Jonathan's expertise is in the ethics of science and engineering and how that is informed and shaped by emerging digital media. And Steve researches in the field of optical materials – think fiber optics and lasers.)

So although we practice different disciplines, we are both actively engaged in trying to promote the best practices of ethical science through our research, teaching, and service, and trying to pass those best practices on to our students. In doing so, we have thought about and discussed the ethics of research, ethical training, and how standards and perceptions of ethics can vary between students, faculty, disciplines, and national cultures.

Our discussions evolved into a project to help foster a culture of ethics at UCF. We are raising awareness of ethics through workshops, discussions, research, community-building, and other activities. Our goal is to shift thinking across our institution, so that ethics moves from being a second thought to becoming second nature.

The exercise is not limited to students. We are engaging faculty, staff, administrators, and stakeholders across Central Florida, because thinking and training at a university such as ours has a major impact on the entire community. Projects like these can also serve as national models for other organizations.

Ethical challenges are always complicated, so we cannot expect simple solutions. Yet, our work and that of others keeps drawing us back to a simple but powerful finding. There are many commonalities across major moral codes, ethical theories, and value commitments that distill down to something akin to the Golden Rule – and maybe this is the strongest foundation upon which to cultivate ethical cultures.

Faced with an increasingly complex world, and constant challenges to the things we value, organizations that want ethical outcomes may need to develop policies and procedures that focus on “thinking about the other person.”

Then maybe we can all become better, together.


