Jonathan Beever Archives | UCF News: Central Florida Research, Arts, Technology, Student Life and College News, Stories and More

UCF Researchers Receive Meta Support to Study Motor Learning in EMG-Based Interfaces /news/ucf-researchers-receive-meta-support-to-study-motor-learning-in-emg-based-interfaces/ Thu, 19 Mar 2026 13:00:54 +0000 Meta funding will support research on gamified muscle-based human-computer interaction while embedding ethics directly into engineering design.

UCF researchers are partnering with Meta Platforms Inc. to study how people learn to control digital systems using muscle signals, work that could improve human-computer interaction in virtual and augmented environments.

Supported by a gift from Meta, the two-year project uses electromyographic (EMG)-based human-machine interface technology as a platform for investigating motor learning through gamified training systems. While EMG systems are often studied in the context of prosthetic limb control, the broader goal of the project is to understand how adaptive interfaces can become more intuitive and embodied over time.

"This Meta support will enable my lab to work on real-world problems that can have an immediate impact on neurotechnologies." – Mohsen Rakhshan, assistant professor

UCF was selected through Meta's competitive funding initiative, in part because of its interdisciplinary approach pairing engineering with philosophy and ethics.

Mohsen Rakhshan, an assistant professor in UCF's Department of Electrical and Computer Engineering and the Disability, Aging and Technology (DAT) faculty cluster initiative, and Jonathan Beever, a professor of philosophy and director of the UCF Center for Ethics, will lead the project.

"This Meta support will enable my lab to work on real-world problems that can have an immediate impact on neurotechnologies," Rakhshan says. "The impact ranges from individuals using augmented and virtual reality for entertainment to individuals with amputation or paralysis seeking to improve their quality of life. It also gives my engineering students the opportunity to integrate ethics research into their technical work."

Advancing Motor Learning Through EMG

EMG-based interfaces translate electrical signals generated by muscle activity into digital commands, allowing users to control devices through subtle physical gestures. In immersive environments, these systems can enable more natural interaction with virtual objects. In rehabilitation settings, they can assist in training neural prostheses.

The UCF team is using this technology to examine how people learn new motor skills in digital environments, particularly through gamified interaction tasks designed to strengthen human-computer coordination. By training both the participant and the signal-processing algorithm (often called a "decoder") simultaneously, through a process known as co-adaptation, researchers aim to create systems that improve alongside the user.
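The co-adaptation idea, training the user and the decoder at the same time so each adjusts to the other, can be sketched in a few lines. Everything below is a toy illustration under invented assumptions (a linear decoder, a simulated "user" who drifts toward patterns that decode well); it is not the lab's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def co_adaptive_session(n_trials=200, lr_decoder=0.05, lr_user=0.02):
    """Toy co-adaptation loop: a linear decoder maps a 4-channel EMG
    feature vector to a 1-D cursor command, while a simulated user
    slowly shifts the muscle pattern they produce toward whatever the
    decoder interprets correctly. Illustrative only."""
    w = np.zeros(4)                  # decoder weights, learned online
    user_map = rng.normal(size=4)    # pattern the user emits per unit of intent
    errors = []
    for _ in range(n_trials):
        intent = rng.choice([-1.0, 1.0])                      # desired direction
        emg = intent * user_map + 0.05 * rng.normal(size=4)   # noisy muscle signal
        output = w @ emg                                      # decoded command
        err = intent - output
        w += lr_decoder * err * emg              # decoder adapts (LMS update)
        user_map += lr_user * err * intent * w   # user adapts toward the decoder
        errors.append(err ** 2)
    return float(np.mean(errors[:20])), float(np.mean(errors[-20:]))

early, late = co_adaptive_session()
print(early, late)  # squared error shrinks as human and decoder co-adapt
```

The point of the sketch is the pairing of the two update lines: if only the decoder adapted (or only the user), convergence would be slower and resets would discard one side's learning, which is the retraining problem Rakhshan describes below.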

Professor Jonathan Beever (left) and Assistant Professor Mohsen Rakhshan (right) discuss an EMG-based interface prototype.

"A significant challenge for most of these systems is that they require constant retraining or calibration of the decoder," Rakhshan says. "Retraining after each use can discourage individuals from using these devices long term. The human nervous system is plastic – it can adapt and improve performance over time. But if the decoder is constantly reset or kept static, it may prevent the nervous system from leveraging that plasticity. We aim to develop a co-adaptive loop between the human and the device."

Rather than focusing solely on stable decoding, the project investigates how adaptive systems can enhance motor learning, improve user confidence and promote a stronger sense of embodiment in human-machine interaction.

If successful, the research could inform next-generation EMG systems used in immersive computing, rehabilitation technologies and assistive devices.

A prototype EMG-based interface device that will be used to explore how people interact with systems that translate muscle signals into digital commands.

Embedding Ethics Into Engineering

A defining feature of the project is the integration of ethics alongside engineering from the outset.

"Interdisciplinary collaboration between ethics and technical experts is the best path forward for responsible innovation." – Jonathan Beever, professor

Longitudinal EMG studies can reveal subtle motor signatures that uniquely identify individuals, raising questions about privacy and data protection. Adaptive systems may also influence a user's sense of agency: whether individuals feel genuinely in control of the interface. For example, if an EMG system begins automatically adjusting its interpretation of muscle signals, users may feel the device is responding to them intuitively or, in some cases, acting unpredictably. Researchers want to better understand how these dynamics affect trust, confidence and long-term use.

To address these questions, Beever will be embedded within the UCF Laboratory for Interaction of Machine and Brain (LIMB), contributing directly to experimental design and evaluation. The team will conduct structured assessments of agency and embodiment while examining potential privacy leakage from EMG signal data.

"Interdisciplinary collaboration between ethics and technical experts is the best path forward for responsible innovation," Beever says. "Technological advancement must be guided toward good ends. Our work emphasizes not only ethical research practices but also deeper questions about autonomy and agency in human-machine interfaces."

A Three-Phase Study

The longitudinal study will involve 30 participants completing 10 sessions over two months, allowing researchers to measure both short-term and long-term motor learning outcomes.

The project will occur in three phases:

Phase 1: Standardizing muscle signal data so artificial intelligence systems can more accurately interpret user intent.

Phase 2: Training both participants and machine learning models simultaneously – a co-adaptive process designed to improve human-computer interaction through gamified tasks.

Phase 3: Conducting structured evaluation of agency, embodiment and privacy risks while developing a publishable ethics framework for adaptive EMG-based systems.
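Phase 1's standardization step can be illustrated with a common preprocessing pattern: z-scoring each EMG channel so that session-to-session differences in electrode gain or offset don't dominate what a model learns. This is a generic sketch under assumed data shapes, not the project's actual pipeline:

```python
import numpy as np

def standardize_emg(raw, eps=1e-8):
    """Z-score each channel of a (samples, channels) EMG recording.
    Removes per-channel offset and scale so recordings taken with
    different electrode gains become comparable. Illustrative only."""
    raw = np.asarray(raw, dtype=float)
    return (raw - raw.mean(axis=0)) / (raw.std(axis=0) + eps)

# Two simulated "sessions": identical muscle activity, but session B is
# recorded with a different per-channel gain and offset.
session_a = np.random.default_rng(1).normal(size=(500, 8))
session_b = 50.0 * session_a + 3.0

za, zb = standardize_emg(session_a), standardize_emg(session_b)
print(np.allclose(za, zb))  # True: standardization removes gain/offset
```

After this step, the two recordings present the same activity to a downstream model even though the raw voltages differed by a factor of 50.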

"There has been a significant increase in industry interest in using biological signals such as EMG, from muscles, and EEG, from the brain, to interact with virtual and augmented reality, consumer electronics, prostheses for individuals with amputation and robotic systems for individuals with paralysis," Rakhshan says.


This research is supported by a gift from Meta. The project is conducted by faculty, staff and students in UCF's Department of Electrical and Computer Engineering, the Disability, Aging and Technology research cluster and the UCF Center for Ethics.

UCF Faculty, Staff Join National Institute for AI Teaching, Learning /news/ucf-faculty-staff-join-national-institute-for-ai-teaching-learning/ Thu, 11 Sep 2025 15:43:40 +0000 The university joins more than 170 institutions for a yearlong program focused on implementing AI action plans for classrooms, curricula and campuses.

Ten faculty members from UCF's College of Arts and Humanities have been selected to participate in the Association of American Colleges and Universities' Institute on AI, Pedagogy and the Curriculum. The national institute brings together faculty from more than 170 institutions to examine how artificial intelligence (AI) is shaping teaching, learning and scholarship. The yearlong program kicks off today and is focused on helping faculty develop and implement AI action plans for their classrooms, curricula and campuses.

UCF's two teams include faculty from every department in the College of Arts and Humanities. The first, led by Associate Dean Peter Larson (professor, history), includes Meghan Velez (assistant professor, writing and rhetoric), Jonathan Beever (professor of philosophy), Matt Dombrowski '05 '08 MFA (professor, visual arts and design), Melissa Scott (lecturer, performing arts), Lisa Logan (associate professor, English) and Taoues Hadour (assistant professor, modern languages and literatures). Their project will focus on building AI literacy across the arts and humanities through course design, policy recommendations and sharable resources.

The second team, led by Anastasia Salter (professor, English; director, texts and technologies), includes Rudy McDaniel (professor, English; director, visual arts and design) and Sherry Rankins-Robertson (professor, writing and rhetoric). Their work will explore the viability of creating a college-level center on AI scholarship, teaching and learning, as well as opportunities for funding, partnerships and community engagement.

In addition, four UCF employees are serving as AI fellows and mentors for the institute: Rankins-Robertson, Thomas Cavanaugh (vice president, digital learning) and Rohan Jowallah (senior instructional designer, Center for Distributed Learning), all returning for a second year, and newcomer Kevin Yee '90 (special assistant to the provost, UCF Faculty Center for Teaching and Learning). Rankins-Robertson and Jowallah also serve on the AAC&U institute faculty.

"The fact that we have enthusiastic participation from faculty in every department in the College of Arts and Humanities demonstrates how seriously we're taking this moment," says Jeff Moore, dean of the college. "AI is changing how we teach, how students learn and what employers expect. This is our chance to rethink how we prepare students for today's classrooms and tomorrow's careers."

This year鈥檚 institute includes more than 1,220 participants across 192 teams.

Philosophy Faculty Lead Ethical Conversations Surrounding AI /news/philosophy-faculty-lead-ethical-conversations-surrounding-ai/ Mon, 08 Sep 2025 13:00:33 +0000 As artificial intelligence reshapes society, UCF's Department of Philosophy is examining its ethical implications and exploring how technology intersects with human values, creativity and identity.

As artificial intelligence (AI) becomes increasingly integrated into everyday life, UCF's Department of Philosophy has intentionally been strengthening faculty research in this area, as well as growing opportunities for students to learn more about the impact of technology on humans and the natural and social environments. A primary focus has been examining the ethical implications of AI and other emerging technologies.

Department Chair and Professor of Philosophy Nancy Stanlick emphasizes that understanding AI requires more than technical knowledge; it demands a deep exploration of ethics.

"As science and technology begin to shape more aspects of our lives, fundamental philosophical questions lie at the center of the ethical issues we face, especially with the rise of AI," Stanlick says. "Perhaps the central [concern] is that it pulls us away from the essence of our humanity."

Steve Fiore, a philosophy professor whose work is in the cognitive sciences program, investigates how humans interact socially with technology. In 2023, he co-authored an International Journal of Human-Computer Interaction study, titled "Six Human-Centered Artificial Intelligence Grand Challenges," that serves as a call to the scientific community to design AI systems that prioritize human values and ethical considerations. Fiore also collaborates with the U.S. Department of Defense to explore how emerging technologies may shape national security.

Professor Jonathan Beever played a key role in developing 麻豆原创鈥檚 artificial intelligence, big data and human impacts undergraduate certificate. The interdisciplinary program equips students with the tools to critically assess and advocate for the ethical development of data-driven technologies, particularly AI and big data.

Associate Lecturer Stacey DiLiberto brings a unique perspective through her work in digital humanities, a field that merges traditional humanities with digital tools. Her research and teaching encourage students to view AI as a tool while critically examining its impact on identity and creativity. In her classes, she challenges students with questions like "What does it mean to be human when machines can mimic our creativity?" DiLiberto argues that while AI can generate art, it lacks the lived experiences, memories and emotional depth that define human expression, and its output often lacks pathos.

While artificial intelligence has made remarkable progress, it does not replicate the depth of human connection or the ethical and moral reasoning inherent to human judgment. Department of Philosophy faculty like Stanlick, Fiore, Beever and DiLiberto provide frameworks for developing technology in ways that uphold ethical standards and preserve human values.

Visit the Department of Philosophy website for more information about undergraduate and graduate programs, courses and opportunities to collaborate with the department's faculty and students.

UCF Leads Development of First Large-scale System for Extended Reality Research /news/ucf-leads-development-of-first-large-scale-system-for-extended-reality-research/ Tue, 25 Apr 2023 15:09:23 +0000 The nearly $5 million project will facilitate human subjects research to improve extended reality technologies for the general population and make them more available to groups such as older adults or people with disabilities.

A UCF researcher is leading a nearly $5 million U.S. National Science Foundation-funded project to develop the first large-scale system for extended reality human subjects research.

Called the Virtual Experience Research Accelerator, or VERA, the system will enable researchers to carry out large studies in extended reality (XR) environments, including virtual reality (VR), augmented reality and mixed reality, with large and wide-ranging populations. The four-year project will be led by Professor Greg Welch, a computer scientist and engineer, and the AdventHealth Endowed Chair in Healthcare Simulation in UCF's College of Nursing. Welch also holds secondary appointments in the College of Engineering and Computer Science, and the School of Modeling, Simulation and Training (SMST).

The NSF announced the funding today as part of a $16.1 million investment the agency is making in artificial intelligence (AI) infrastructure through its Computer and Information Science and Engineering (CISE) Community Research Infrastructure – or CCRI – program.

"VERA could transform the way XR researchers carry out human subjects research," Welch says. "It will allow researchers to run studies relatively quickly, using a large number of study participants with wide-ranging demographics, to realize faster generation of better-quality results that are more generalizable to the larger population."

One goal of the VERA project is to provide researchers with a new and powerful tool that could lead to improved XR technologies that are more effective for the user and make XR research more available to underrepresented groups, such as older adults or people with disabilities, who could potentially benefit from the technology, Welch says.

Other institutions also receiving NSF CCRI awards this year are the University of Pennsylvania; the University of Minnesota, Twin Cities; UCLA; and Penn State.

The 2023 CCRI projects will provide researchers and students across the nation with access to transformative resources through platforms for carrying out AI research on social robotics and research in immersive virtual environments that could also benefit AI research.

"A critical element to the success of the AI research revolution is ensuring that researchers have access to the data and platforms required to continue to drive innovation and scalability in AI technologies and systems," says NSF Director Sethuraman Panchanathan. "This infrastructure must be accessible to a full breadth and variety of talent interested in AI [research and development], as that is the driving force behind modern discoveries."

While VERA is primarily aimed at human subjects research in XR, it will also contribute to the success of AI research by providing researchers with a tool for collecting large data sets of realistic human behavior that is representative of the general population, Welch says.

About VERA

The VERA project will address a critical problem in human subjects research in XR: the vast majority of studies rely on relatively small convenience samples of college-age participants that are not demographically mixed, and they take a relatively long time to carry out, Welch says.

"Because laboratory-based studies are relatively slow, they are typically limited to relatively small population samples, and because those samples are not typically representative of the general population, the findings typically are not either," he says.

VERA will combine the ideas of distributed lab-based studies, online studies, research panels, crowdsourcing and virtual environments into a unified system for carrying out XR-based human subjects research. To create a large, wide-ranging pool of research participants, the team will recruit participants from around the country to serve in a standing participant pool.

The system will comprise a study management program, the participant pool and a virtual metaworld where participants can join studies and researchers can attend meetings and events as well as interact with 3D visualizations of final study data.

Individuals recruited for the VERA participant pool will include those who already own VR equipment as well as those who will have it provided to them. The system will allow for participants to take part in studies remotely, without having to come to a lab.

The VERA Team

In addition to Welch, the VERA team includes principal investigators Shiri Azenkot, an associate professor with Cornell Tech and a co-founder and director of XR Access; Jeremy Bailenson, a Thomas More Storke Professor at Stanford University; Gerd Bruder, a research associate professor with UCF's Institute for Simulation and Training, SMST; Tabitha Peck, an associate professor with Davidson College; and Valerie Jones Taylor, an associate professor with Lehigh University.

Co-investigators are Jonathan Beever, an associate professor in UCF's College of Arts and Humanities; Nicholas Alvaro Coles, a research scientist with Stanford University and the director of the Psychological Science Accelerator; Carolina Cruz-Neira, an Agere Chair Professor in UCF's Department of Computer Science; John Murray, an assistant professor in UCF's Nicholson School of Communication and Media; and Rui Xie, an assistant professor in UCF's Department of Statistics and Data Science.

Several industry and nonprofit organizations are involved, as is the XR Association.

Next Steps

The VERA team will begin developing the system and curating a participant pool during the first year of the work, as well as building a community around the project.

"It's really a joy to be working on this," Welch says. "With VERA, both established and advancing researchers will have a new power tool to do more great research, and researchers who do not have a laboratory where they can run XR human subjects research, due to perhaps money or space limitations, will now have a practical and powerful way to run studies. VERA offers a chance to do something for the amazing XR research community, by making high-quality human subjects research accessible to more researchers."

Researcher Credentials

Welch received his doctorate in computer science from the University of North Carolina at Chapel Hill and joined UCF in 2011.

Bruder received his doctorate in computer science from the University of Hamburg in Germany and joined UCF in 2016.

Beever received his doctorate in philosophy from Purdue University and joined UCF in 2015.

Cruz-Neira received her doctorate in computer science/virtual reality from the University of Illinois Chicago and joined UCF in 2020.

Murray received his doctorate in computer science from the University of California, Santa Cruz, and joined UCF in 2018.

Xie received his doctorate in statistics from the University of Georgia and joined UCF in 2019.

'Well, It's Not Illegal!' /news/well-not-illegal/ Wed, 22 May 2019 13:00:07 +0000 Some things are immoral yet perfectly legal, while other things may be illegal but not necessarily immoral.

How often have you heard someone say: "Well, it's not illegal!"

The statement is frequently used to justify an action that is morally questionable, but not formally prevented by any kind of law or rule. We’re hearing it a lot in modern times, particularly in connection with politicians, their dealings in business, campaign finance, election processes and so on.

But it's not just the best defense in Washington, D.C. We also hear it in our workplaces, neighborhoods and social groups when someone wants to wriggle free from the discomfort of a bad choice that has come to light.

Rules and laws exist to protect and promote the function of communities. Yet, here lies one of many perennial chicken-or-egg problems: Which came first, compliance or ethics? We might tend to think that laws originate from moral convictions about what is right and wrong. But there are many interesting examples that challenge the perception that laws extend from morals.

For example, some things are immoral, yet perfectly legal. You can probably come up with many of your own powerful examples, but we'll just offer a few. First, if you don't tip at a restaurant, that's not illegal, but it seems like a crime, especially when the service is good. Another example: Wealthy people and corporations are often hotly criticized for using loopholes, offshore accounts and other schemes to avoid taxes. Yet businesses rely more heavily than individuals on publicly funded resources to generate wealth, including roads to ship goods and services, energy and communication infrastructure, law enforcement, national defense, and bureaucracies that support state, national and international trade.

So, trying to avoid paying taxes can't be moral, but there are many legal ways to get away with it – so it's legal, but immoral. Our own history offers the best and saddest example. Before the Civil War, slavery was legal in the U.S., but certainly not moral.

In the 1970s the federal highway speed limit was dropped to 55 miles per hour, not to save lives, but to decrease national consumption of petroleum. So, speeding then was illegal, but could we regard it now as immoral?

And there are many examples of the reverse, where an action might be illegal, but it’s not necessarily immoral. For example, in the 1970s the federal highway speed limit was dropped to 55 miles per hour, not to save lives but to decrease national consumption of petroleum. So, speeding then was illegal, but could we regard it now as immoral?

Some examples depend on cultural framing. Consider Singapore, where it’s illegal to sell gum, not because it’s immoral but to help promote public cleanliness. And up until very recently, it was illegal for women to drive in Saudi Arabia, in part because it was regarded as religiously immoral. This stands in stark contrast to Western mores, where driving is commonplace, and in the U.S. it’s a rite of passage for all 16-year-olds, including women.

So what is the relationship between legality and morality, between compliance and ethics? And what are the implications of giving someone a pass when they do something that is legal but that makes us flinch morally?

We certainly have an expectation that people will act morally and ethically, even when there is no law or legal enforcement to bring consequences. We particularly hope politicians would exceed legal standards and make ethical choices, because they are elected leaders who are meant to promote the best interests of all citizens.

Fundamentally, we are all supposed to do what is right, not just follow the rules, and we learn that even as children. Think about it. Young children often claim: "But you didn't say I couldn't!" We tell our kids that does not make their actions right. So why would we expect anything less of adults, particularly elected leaders?

But more alarming than a politician skirting the rules is the ease with which their supporters often invoke: "Well, it's not illegal." Let's go back to the schoolyard for some useful reminders of what our social standards are. We are alarmed by bullying, and not only do we tell kids not to bully, we also rebuke children who turn a blind eye to it. We tell our kids to speak out, defend the weak and so on. Similarly, whistle-blowing is being promoted by many national organizations, universities and even the federal government.

We want to catch the bad guys and promote justice. But how can that happen if we don’t speak up and call out immoral behavior, even when it is legal? Perhaps our willingness to give people a pass when they do bad things, even when they are legal, is undermining the likelihood that people will follow the rules, much less the spirit of the rule.

Stephen M. Kuebler is an associate professor of chemistry and optics in UCF's Department of Chemistry and the College of Optics and Photonics. He can be reached at Stephen.Kuebler@ucf.edu.

Jonathan Beever is an assistant professor of ethics and digital culture in UCF's Department of Philosophy and the Texts & Technology doctoral program. He can be reached at Jonathan.Beever@ucf.edu.

Whom Should Self-Driving Cars Be Programmed to Protect? /news/self-driving-cars-programmed-protect/ Wed, 16 Jan 2019 16:24:36 +0000 We need to be thinking more about the ethics of new technologies before they hit showroom floors.

Technology advances at breakneck speed. That’s exciting to early adopters, who can’t wait to get their hands on the latest piece of tech. For some, the rapid onslaught of technology is frustrating. But there are bigger issues that need our attention.

Economic pressures often move new technologies into the consumer space before people get a chance, or make the effort, to weigh the pros and cons. Time and again, society addresses the ethics of a new technology and makes new rules only after it's in place and problems have emerged. There are examples in the news every day, like facial-recognition systems, gene editing, biobanking and data harvesting via social media. But we want to focus here on the problem of self-driving vehicles.

Artificial intelligence and advanced sensors are making self-driving vehicles a reality. There could be benefits. Self-driving vehicles would free up time for work, texting and talking on the phone. They could be safer if the technology is robust. But there may be downsides as well. For example, according to the American Trucking Associations, there are over 3.5 million truck drivers, and they stand to lose their jobs when self-driving trucks appear.

Self-driving vehicles are on the road now being field-tested, doing work and, sometimes, having rocks thrown at them! In December, police in Chandler, Arizona, reported 21 cases of adults throwing rocks, slashing tires and even pointing guns at self-driving cars. Citizens were angered that the company Waymo was testing cars in their neighborhoods, potentially putting them at risk, and developing machines that could replace them.

But beyond economics – and emotions – there's a centrally important moral question with self-driving vehicles. Whom will they be programmed to protect?

But beyond economics – and emotions – there's a centrally important moral question with self-driving vehicles. Whom will they be programmed to protect? Two people have already been killed by self-driving cars during road-testing, and there will certainly be more fatalities. Even if we assume self-driving vehicles will be more predictable and reliable than humans, that predictability makes them seem, well, insensitively cold. In circumstances where an accident is unavoidable, the computer has to "choose" between putting its passengers at risk, or risking other drivers, and even pedestrians. And by "choose" we mean calculate. So how do programmers decide who becomes a casualty?

The ethical dilemma of self-driving cars represents what philosophers know as a Trolley Problem. These problems have endless variation, but the gist is something like this: Imagine a trolley carrying five people on a track heading toward a gorge, but the bridge is out. There is a switch that can redirect the trolley safely onto a second track. Unfortunately, a person is tied to the second track. Pulling the lever to switch the tracks will save five people from certain death, but kill the person tied to the second track.

What would you do? These trolley cases are problems because they set up conditions where an agent is forced to select between what seems like two bad choices. Either the agent allows several people to die (which seems immoral), or they intentionally cause someone to die (which seems differently but equally immoral).

This thought experiment is powerful because of its flexibility. If you tweak the problem a little, the answers change. For example, people are less likely to switch the trolley if you say the person tied to the track is young and vibrant, whereas the five on the trolley are very old and terminally ill. Or if you say the person on the track is a close relative, people are much more apprehensive to pull the lever.

The technology of self-driving vehicles shifts the Trolley Problem from the abstract to the eerily real. How should a self-driving vehicle respond in a situation where rapidly swerving to avoid a crowd would save many lives, but kill the passenger?

How should a self-driving vehicle respond in a situation where rapidly swerving to avoid a crowd would save many lives, but kill the passenger?

Writing for Science in 2016, psychologist Joshua Greene discusses what he calls "our driverless dilemma."

So how will self-driving vehicles be programmed to handle accidents? Who decides how they are programmed? Is it ethical for a company to offer two versions of the software – say, a gold package that saves the most lives, or a platinum package that saves the passenger? That is a moral dilemma for both the manufacturer and the purchaser.
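The gold-versus-platinum worry can be made concrete with a toy calculation. The maneuvers, probabilities and weights below are entirely invented; the point is only that changing one weight, the value placed on the passenger, silently changes which "choice" a minimum-cost rule makes:

```python
# Hypothetical numbers for illustration: each maneuver carries an
# estimated probability of harming the passenger and of harming others.
candidate_maneuvers = {
    "brake_straight": {"passenger": 0.30, "others": 0.40},
    "swerve_left":    {"passenger": 0.70, "others": 0.05},
    "swerve_right":   {"passenger": 0.50, "others": 0.10},
}

def expected_harm(risks, passenger_weight=1.0, other_weight=1.0):
    """Weighted sum of harm probabilities. The weights are where an
    ethical stance gets baked into the code."""
    return passenger_weight * risks["passenger"] + other_weight * risks["others"]

def choose(passenger_weight=1.0):
    """Pick the maneuver with the lowest weighted expected harm."""
    return min(candidate_maneuvers,
               key=lambda m: expected_harm(candidate_maneuvers[m], passenger_weight))

print(choose())                      # equal weights -> "swerve_right"
print(choose(passenger_weight=3.0))  # passenger-favoring weights -> "brake_straight"
```

With equal weights the car swerves right (lowest total risk); tripling the passenger's weight makes braking straight the "safest" option instead, even though nothing about the physical situation changed.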

Some will argue the Trolley Problem is moot because self-driving vehicles could communicate with one another and avoid no-win situations. But for that to work, we have to share personal data about where we are, when we travel, and where we are going. Advancing technologies like self-driving vehicles and DNA testing set up unexpected trade-offs between public safety and privacy rights. These issues are complex, but also rich in their potential to force us to reflect on and define our social values.

Ethics and moral philosophy provide ways to navigate the murky waters churned by advancing technology. And many companies and organizations do look to ethicists for answers to these questions. But our firm belief is that the public needs to participate in the discussion.

We have our say when we elect politicians who legislate public policy, and when we purchase or do not purchase products with new technologies. But we need to be more proactive, thinking about and weighing in on the ethics of new technologies before they hit showroom floors. We need to be engaged stakeholders in a technology-driven society, and not just consumers awaiting the next version of a phone.

As a society, we need to cultivate ethical literacy and be proactive in deciding how technologies are implemented, before they run us over.

Stephen M. Kuebler is an associate professor of chemistry and optics in UCF’s Department of Chemistry and the College of Optics and Photonics. He can be reached at Stephen.Kuebler@ucf.edu.

Jonathan Beever is an assistant professor of ethics and digital culture in UCF’s Department of Philosophy and the Texts & Technology doctoral program. He can be reached at Jonathan.Beever@ucf.edu.

]]>
Following Rules and Doing the Right Thing Aren’t Necessarily the Same /news/following-rules-right-thing-arent-necessarily/ Wed, 14 Nov 2018 14:10:12 +0000 /news/?p=92113 A wise person, or at least some person, once said: "If you don't like following rules, just break some…You are sure to end up with more."

Nobody likes having to follow rules. In any major organization, be it in the private or public sector, there are lots of rules and regulations to follow. The bureaucracy associated with rules can sometimes feel crushing, and compliance has become an industry unto itself. Countless hours in the professional world are spent on regulatory training, enforcement and documenting compliance.

Of course, rules and regulations exist because someone, at some time, did something unethical, or at the very least, someone at some time imagined a world in which someone would do that unethical thing. Organizations certainly need rules, or something that keeps stakeholders operating on the straight and narrow. There are many past and recent examples of unethical choices. Funds are misspent. Personal information becomes publicly available. Environments are harmed. And when people within an organization make poor choices, there invariably follows a cry for more rules and regulations.

Compliance-based governance is one way to maintain proper functioning of a complex organization. Yet, compliance is regularly met with blank stares, nodding heads, and frustration over red tape. So is there a better way to ensure that organizations function ethically?

We explore why people should care about ethics, if and how ethics can be explicitly taught, and how one cultivates an ethical culture within an organization. We think that it is important to recognize that rules and ethics are distinct. Adding more rules and regulations does not always prevent unethical choices and bad outcomes. Adding rules certainly increases bureaucracy. And rules are most often reactive rather than proactive; they originate as a response to unethical behavior, with normally punitive restrictions on action.

Importantly, missing from compliance and regulatory structures are the “whys”: the justification for rules that helps community members understand why they are ethically important (provided, of course, that they are ethically important). Without such justification, individuals are apt to follow the notorious Capt. Barbossa of the Pirates of the Caribbean film series in believing that rules are “more what you’d call ‘guidelines’ than actual rules.” And nobody wants a community of pirates…except other pirates.

Ethics education and training provides that important justification. And we know it works. There is a growing body of evidence that organizations and their members make better choices when they have explicit training in ethics. Note that this is not the same as learning about rules and compliance. Ethics training is about actively engaging stakeholders in thinking about how their actions impact others, both within the organization and in the broader communities in which they operate. Ethics education is about creating the space to think about the underlying justification for rules and regulations, and taking up that opportunity.

Here is a concrete example: At UCF, we want our faculty and students to engage in research ethically. We can and do talk about important rules, such as the strict prohibitions against plagiarism, falsifying information and fabricating data. But we also lead workshops, discussion groups and other modes of formative training that focus on the underlying ethics behind these rules. We are working to give each other space to think.

Reviewing and discussing case studies is a particularly effective means for teaching ethics because participants can discover not only the sequence of events that led to poor outcomes (and more rules), but also the subtleties of how limited information, insufficient consultation or incomplete consideration of downstream impacts enables people to make poor choices. It is equally useful to consider cases in which people did the right thing and made tough choices, ones that may not have led to short-term professional gains, yet which upheld high ethical standards and generated greater societal benefit.

By shifting the emphasis in training from rules and compliance toward a focus on core values, students, faculty and all participants can develop an innate recognition of the need to always operate ethically. This furthers the goals of UCF by cultivating a culture of ethical behavior that enables collaboration, sustains research and ensures that those outside UCF hold our products in high regard, particularly our main product: well-trained students.

By thinking actively about the ethical underpinnings of our work, we are cultivating a culture of ethical behavior that enables members of our communities to choose well, even when there are no clear governing rules. Engaging ethics encourages each of us to think about the culture of our organizations and how individual actions shape their integrity. It helps ensure that we are following the rules because we know they are encouraging us to do the right thing in the first place.

Stephen M. Kuebler is an associate professor of chemistry and optics in UCF’s Department of Chemistry and the College of Optics and Photonics. He can be reached at Stephen.Kuebler@ucf.edu.

Jonathan Beever is an assistant professor of ethics and digital culture in UCF’s Department of Philosophy and the Texts & Technology doctoral program. He can be reached at Jonathan.Beever@ucf.edu.

The UCF Forum is a weekly series of opinion columns presented by UCF Communications & Marketing. A new column is posted each Wednesday at /news/ and then broadcast between 7:50 and 8 a.m. Sunday on WUCF-FM (89.9). The columns are the opinions of the writers, who serve on the UCF Forum panel of faculty members, staffers and students for a year.

]]>
Can’t We Make Better Decisions to Ensure Ethical Outcomes? /news/cant-make-better-decisions-ensure-ethical-outcomes/ Wed, 12 Sep 2018 16:38:39 +0000 /news/?p=90481 Ethics is not just for deep philosophical discussion. Check out the news on any given day and you are apt to find a report that makes you wish people acted more ethically.

Our contributions to the UCF Forum are a series of conversations about ethics. We are exploring why people should care about ethics, if and how ethics can be explicitly taught, and how one cultivates an ethical culture within an organization.

If we think about unethical behavior, our first instinct might be to point fingers at politicians and governments. But these are easy targets. There are many other examples in which one or more people made an unethical choice by breaking laws or explicit policies. Think about the scandals surrounding diesel vehicles with rigged emissions systems; the water supply of Flint, Michigan; or discredited reports that erroneously link autism and vaccinations.

But there are also important examples in which no explicit law or policy was broken, and yet a poor choice by one or more individuals led to harmful outcomes. Think about the management practices in NASA that led to the Challenger disaster; the creation and propagation of fake news; and how data-sharing by some firms doing DNA testing has weakened public trust.

Frequently, poor outcomes result not from malicious intent or a bad actor, but from a choice that seemed right at the time and later turned out to have unethical implications. This can happen when decisions are made with limited information, insufficient consultation, or inadequate consideration of downstream effects.

Social media provides one of the best and most timely examples. The creators of social media platforms may not have broken any laws, but clearly they did not think through the broader ethical implications of their services, and how these could become platforms for digital misinformation.

We and many others working in academic and professional ethics are asking, “What training, structures, and decision-making skills could lead to better choices and avoid unethical outcomes? And can we structure training and education, either in the workplace or in academia, to help cultivate ethical awareness that leads to better choices?”

We come to this challenge from different but connected disciplines. (Jonathan's expertise is in the ethics of science and engineering and how that is informed and shaped by emerging digital media. And Steve researches in the field of optical materials; think fiber optics and lasers.)

So although we practice different disciplines, we are both actively engaged in trying to promote the best practices of ethical science through our research, teaching, and service, and trying to pass those best practices on to our students. In doing so, we have thought about and discussed the ethics of research, ethical training, and how standards and perceptions of ethics can vary between students, faculty, disciplines, and national cultures.

Our discussions evolved into a project to help foster a culture of ethics at UCF. We are raising awareness of ethics through workshops, discussions, research, community-building, and other activities. Our goal is to shift thinking across our institution, so that ethics moves from being a second thought to becoming second nature.

The exercise is not limited to students. We are engaging faculty, staff, administrators, and stakeholders across Central Florida, because thinking and training at a university such as ours has a major impact on the entire community. Projects like these can also serve as national models for other organizations.

Ethical challenges are always complicated, so we cannot expect simple solutions. Yet, our work and that of others keeps drawing us back to a simple but powerful finding. There are many commonalities across major moral codes, ethical theories, and value commitments that distill down to something akin to the Golden Rule, and maybe this is the strongest foundation upon which to cultivate ethical cultures.

Faced with an increasingly complex world, and constant challenges to the things we value, organizations that want ethical outcomes may need to develop policies and procedures that focus on “thinking about the other person.”

Then maybe we can all become better, together.

Stephen M. Kuebler is an associate professor of chemistry and optics in UCF’s Department of Chemistry and the College of Optics and Photonics. He can be reached at Stephen.Kuebler@ucf.edu.

Jonathan Beever is an assistant professor of ethics and digital culture in UCF’s Department of Philosophy and the Texts & Technology doctoral program. He can be reached at Jonathan.Beever@ucf.edu.


]]>
Assistant Professor Recognized for Work in Promoting Engineering Ethics /news/undefined-8/ Wed, 02 Mar 2016 14:14:56 +0000 /news/?p=71007 UCF assistant professor Jonathan Beever was recently recognized by the National Academy of Engineering for his role in collaborating with a multidisciplinary team of engineering, communication and ethics educators from Purdue University in developing and testing a program for enhancing engineering students’ ethical reasoning skills.

Beever, who began the work with the Purdue team before he came to UCF last summer, developed a series of case-based online modules that help engineers develop ethical reasoning skills about contemporary professional and social issues. The project was recognized this month by the academy’s Center for Engineering Ethics and Society as one of 25 exemplary models of ethics in engineering.

Beever, an assistant professor of ethics and digital culture with the UCF Department of Philosophy and a member of the faculty of the Texts & Technology program, recently represented the development team on a panel at a national ethics conference.

The team is supported by grants from the National Science Foundation and Purdue through the end of this year, and Beever has submitted two other proposals to build on this work, one with him as the principal investigator from UCF.

Beever said his involvement with the project will help boost UCF’s image as a university engaged with engineering ethics at the national level.

Beever also held postdoctoral positions with Penn State's Rock Ethics Institute and with Purdue University's Weldon School of Biomedical Engineering before joining UCF. He has held fellowships with the Kaufmann Foundation, the Aldo Leopold Foundation, and the Global Sustainable Soundscape Network. He works and publishes on environmental ethics and bioethics, focusing on questions of ethics, science, and representation.

]]>