Converging Technologies and Pervasive Computing 





Cybertechnology is converging with noncybertechnologies, including biotechnology and nanotechnology. Cybertechnology is also becoming pervasive as computing devices now pervade our public and private spaces. Pervasive computing and technological convergence are both facilitated by the miniaturization of computing devices.

Technological Convergence 

Howard Rheingold (1992) notes that technological convergence occurs when unrelated technologies or technological paths intersect or "converge unexpectedly" to create an entirely new field.



Convergence involving cybertechnology is not new.



E.g., Rheingold notes that virtual-reality (VR) technology resulted from the convergence of video technology and computer hardware in the 1980s. But cybertechnology is now converging with noncybertechnologies at an unprecedented pace.

Three Areas of Technological Convergence Affecting Ethics 

Three converging technologies that raise ethical concerns are: 





- Ambient Intelligence (AmI): the convergence of (a) pervasive computing, (b) ubiquitous communication, and (c) intelligent user interfaces;
- Bioinformatics: the convergence of biotechnology and information technology;
- Nanocomputing: the convergence of nanotechnology and computing.

Ambient Intelligence (AmI) 

Ambient intelligence (AmI) is defined as a technology that enables people to live and work in environments that respond to them in "intelligent ways" (Aarts and Marzano, 2003; Brey, 2005; Weber et al., 2005).

Review the hypothetical example of the "intelligent home" (Raisinghani et al., 2004) described in the textbook.

AmI (Continued) 



AmI is made possible by developments in artificial intelligence (AI), described in Chapter 11. Three key technological components also make AmI possible:

- pervasive computing;
- ubiquitous communication;
- intelligent user interfaces (IUIs).

Pervasive Computing 

According to the Centre for Pervasive Computing, pervasive computing is defined as a computing environment where information and communication technology are "everywhere, for everyone, at all times."



Computer technology is integrated into our environments – i.e., from "toys, milk cartons and desktops, to cars, factories, and whole city areas."

Pervasive Computing (Continued) 



Pervasive computing is made possible by the increasing ease with which circuits can be embedded into objects, including wearable and even disposable items. Bütschi, Courant, and Hilty (2005) note that computing has already begun to pervade many dimensions of our lives.

E.g., it pervades the work sphere, cars, public transportation systems, the health sector, the market, and our homes.

Pervasive Computing (Continued) 



Pervasive computing is sometimes also referred to as ubiquitous computing (or ubicomp). The term "ubiquitous computing" was coined by Mark Weiser (1991), who envisioned "omnipresent computers" that serve people in their everyday lives, both at home and at work.

Pervasive Computing (Continued) 



Adam Greenfield (2005) believes that ubiquitous or pervasive computing will insinuate itself much more thoroughly into our day-to-day activities than current Internet- and Web-based technologies. But for pervasive computing to operate at its full potential, continuous and ubiquitous communication between devices is also needed.

Ubiquitous Communication 

Ubiquitous communication aims at ensuring flexible and omnipresent communication between interlinked computer devices (Raisinghani et al., 2004) via:    

- wireless local area networks (W-LANs),
- wireless personal area networks (W-PANs),
- wireless body area networks (W-BANs),
- Radio Frequency Identification (RFID).

Intelligent User Interfaces (IUIs) 



Intelligent user interfaces (IUIs) have been made possible by developments in AI. Brey (2005) notes that IUIs go beyond traditional interfaces such as a keyboard, mouse, and monitor.

IUIs (Continued) 



IUIs improve human interaction with technology by making it more intuitive and more efficient than was previously possible with traditional interfaces. With IUIs, computers can "know" and sense far more about a person – including information about that person's situation, context, or environment – than was possible with traditional interfaces.

IUIs (Continued) 



With IUIs, AmI remains in the background and is virtually invisible to the user. Brey notes that with IUIs, people can be surrounded with hundreds of intelligent networked computers that are "aware of their presence, personality, and needs." But users may not be aware of the existence of IUIs in their environments.
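To make the interplay of the three components more concrete, here is a minimal, purely hypothetical sketch (not from the textbook) of an ambient-intelligence loop: pervasive sensors supply readings, ubiquitous communication delivers them, and an intelligent user interface infers context and adapts the environment. All class names, sensor names, and rules below are illustrative assumptions.

```python
# Hypothetical sketch of an ambient-intelligence loop: sensors (pervasive
# computing) produce readings, a communication layer delivers them, and an
# intelligent user interface infers context and adapts the environment.
# All names, sensors, and rules here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Reading:
    sensor: str    # e.g. "front_door_motion", "living_room_temp"
    value: float

def infer_context(readings: list[Reading]) -> dict:
    """Toy 'intelligent user interface': derive user context from raw readings."""
    context = {"occupied": False, "too_cold": False}
    for r in readings:
        if r.sensor.endswith("_motion") and r.value > 0:
            context["occupied"] = True
        if r.sensor.endswith("_temp") and r.value < 18.0:
            context["too_cold"] = True
    return context

def adapt(context: dict) -> list[str]:
    """Decide how the 'intelligent home' responds, without any explicit user request."""
    actions = []
    if context["occupied"] and context["too_cold"]:
        actions.append("turn_on_heating")
    if not context["occupied"]:
        actions.append("switch_to_standby")
    return actions

if __name__ == "__main__":
    readings = [Reading("front_door_motion", 1.0), Reading("living_room_temp", 16.5)]
    print(adapt(infer_context(readings)))   # ['turn_on_heating']
```

The point of the sketch is that the adaptation happens without any explicit user action, which is exactly what raises the autonomy and surveillance questions discussed next.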

Ethical and Social Issues Affecting AmI 

Three ethical and social issues affecting AmI include:   

- freedom and autonomy;
- technological dependency;
- privacy, surveillance, and the "Panopticon."

Autonomy and Freedom Involving AmI 





Will human autonomy and freedom be enhanced or diminished as a result of AmI technology? AmI's supporters suggest humans will gain more control over the environments with which they interact because technology will be more responsive to their needs. Brey notes a paradoxical aspect of this claim, pointing out that "greater control" is presumed to be gained through a "delegation of control to machines."

Autonomy and Freedom (Continued) 

Brey describes three ways in which AmI may make the human environment more controllable, noting that it can: 





(1) become more responsive to the voluntary actions, intentions, and needs of users;
(2) supply humans with detailed and personal information about their environment;
(3) do what people want without having to engage in any cognitive or physical effort.

Autonomy and Freedom (Continued) 

Brey also describes three ways that AmI can diminish the amount of control that humans have over their environments, where users may lose control because a smart object can: 





(1) make incorrect inferences about the user, the user's actions, or the situation;
(2) require corrective actions on the part of the user;
(3) represent the needs and interests of parties other than the user.

Technological Dependency 



We have come to depend a great deal on cybertechnology in conducting many activities in our day-to-day lives. In the future, will humans depend on the kind of smart objects and smart environments made possible by AmI technology in ways that exceed our current dependency on cybertechnology?

Technological Dependency (Continued) 

IUIs could relieve us of: 



(a) having to worry about performing many of our routine day-to-day tasks, which can be considered tedious and boring, and
(b) much of the cognitive effort that has, in the past, enabled us to be fulfilled and to flourish as humans.

Technological Dependency (Continued) 



What would happen to us if we were to lose some of our cognitive capacities because of an increased dependency on cybertechnology? Review the futuristic scenario (in the textbook) described by E. M. Forster about what happens to a society when it becomes too dependent on machines.

Privacy, Surveillance, and the Panopticon 

Marc Langheinrich (2001) argues that with respect to privacy and surveillance, four features differentiate AmI from other kinds of computing applications:   



- ubiquity,
- invisibility,
- sensing,
- memory amplification.

Privacy, Surveillance, and the Panopticon (Continued) 

Langheinrich notes that because: 



(1) computing devices are ubiquitous or omnipresent in AmI environments, privacy threats are more pervasive in scope;
(2) computers are virtually invisible in AmI environments, it is likely that users will not always realize that computing devices are present and are being used to collect and disseminate personal data.

Privacy, Surveillance, and the Panopticon (Continued) 

Langheinrich also believes that AmI poses a more significant threat to privacy than earlier computing technologies because: 



Sensing devices associated with IUIs may become so sophisticated that they will be able to sense (private) human emotions like fear, stress, and excitement. AmI has the unique potential to create a memory or "life-log" – i.e., a complete record of someone's past.

Surveillance and the Panopticon 



Johann Čas (2004) notes that in AmI environments, no one can be sure that he or she is not being observed. An individual cannot be sure whether information about his or her presence at any location is being recorded.

Surveillance and the Panopticon (Continued) 





Čas believes that the only realistic attitude is to assume that any activity (or inactivity) is being monitored and that this information may be used in any context in the future. So, people in AmI environments are subject to a virtual "panopticon." Review the scenario on Bentham's Panopticon (described in the textbook).

Does it anticipate any threats posed by AmI?

Table 12-1: Ambient Intelligence

Technological Components               Ethical and Social Issues Generated
Pervasive Computing                    Freedom and Autonomy
Ubiquitous Communication               Privacy and Surveillance
Intelligent User Interfaces (IUIs)     Technological Dependence

Bioinformatics  



Bioinformatics is a branch of informatics. Informatics involves the "acquisition, storage, manipulation, analyses, transmission, sharing, visualization, and simulation of information on a computer" (Goodman, 1998). Bioinformatics is the application of the informatics model to the management of biological information.

Ethical Aspects of Bioinformatics 

Three kinds of social and ethical concerns arise in bioinformatics research and development:   

- Privacy and Confidentiality;
- Autonomy and Informed Consent;
- Information Ownership and Property Rights.

Privacy, Confidentiality, and the Role of Data Mining 



Review the scenario involving deCODE Genetics (described in the textbook). Many individuals who donated DNA samples to deCODE had the expectation that their personal genetic data was:  

confidential information, protected by the company's privacy policies and by privacy laws.

Privacy, Confidentiality, and the Role of Data Mining (Continued) 

Anton Vedder (2004) notes that privacy protection that applies to personal information about individuals does not necessarily apply to that information once it is:  

aggregated, and cross-referenced with other information (via data mining).

Privacy, Confidentiality, and Data Mining (Continued) 



Research subjects could be denied employment, health insurance, or life insurance based on the results of data-mining technology used in genetic research. For example, a person could end up in a "risky" category based on arbitrary associations and correlations that link trivial non-genetic information with sensitive information about one's genetic data.
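As a purely hypothetical illustration (not a description of deCODE's systems), the following sketch shows how cross-referencing trivial non-genetic records with mined group-level associations could assign a person to a "risky" category. Every field, rule, and threshold below is invented for the example.

```python
# Hypothetical illustration (not a description of deCODE's systems) of how
# data mining could cross-reference trivial non-genetic records with mined
# group-level associations and place a person in a "risky" category.
# Every field, rule, and threshold below is invented for the example.

# Trivial, non-genetic data about an individual (e.g., from purchase records).
profiles = {
    "person_42": {"zip": "101", "buys_cigarettes": True},
}

# Group-level "finding" produced by mining pooled research data: a statistical
# association between a zip-code cohort and a genetic risk marker.
mined_associations = {
    "101": {"marker": "X-variant", "observed_rate": 0.3},
}

def risk_category(person_id: str) -> str:
    """Label a person using correlations alone, not their own genetic data."""
    profile = profiles[person_id]
    assoc = mined_associations.get(profile["zip"])
    if assoc and assoc["observed_rate"] > 0.2 and profile["buys_cigarettes"]:
        return "elevated-risk group"  # the person may never learn this group exists
    return "baseline group"

print(risk_category("person_42"))     # elevated-risk group
```

Note that the labeled individual supplied no genetic data of their own and may never learn that the group exists, which is the concern raised on the next slide.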

Privacy, Confidentiality, and Data Mining (Continued) 



Individuals who eventually become identified or associated with newly-created groups may have no knowledge that the groups to which they have been assigned actually exist. These people may also have no chance to correct any inaccuracies or errors that could result from their association with that group.

Autonomy and Informed Consent 



The process of informed consent used to obtain permission from human research subjects who participate in genetic studies that use data mining is questionable. In the deCODE case, did the genetic information acquired from "informed" volunteers meet the required conditions for valid informed consent?

Autonomy and Informed Consent (Continued) 

According to the Office of Technology Assessment (OTA) Report, entitled Protecting Privacy in Computerized Medical Information, individuals must: 



(i) have adequate disclosure of information about the data dissemination process; (ii) be able to fully comprehend what they are being told about the procedure or treatment.

Autonomy and Informed Consent (Continued) 

Because of the way data-mining technology can manipulate personal information that is authorized for use in one context only, the process of informed consent has become less transparent. 



O'Neill (2002) describes this process as opaque.

The conditions required for "valid" informed consent are difficult, if not impossible, to achieve in cases that involve secondary uses of personal genetic information.

Intellectual Property Rights and Ownership Issues 



DeCODE Genetics was given exclusive rights (for 12 years) to the information included in the Icelandic health-records database. This factor raises property-rights issues that also affect who should (and should not) have access to the information in that database.

Intellectual Property Rights and Ownership Issues (Continued) 

Who should own the personal genetic information in deCODE's database?





Should deCODE have exclusive ownership rights to all of the personal genetic information that resides in its databases? Should individuals retain (at least) some rights over their personal genetic data when it is stored in a privately owned database? Should the Icelandic Government play a custodial role?

Intellectual Property Rights and Ownership Issues (Continued) 



Have individuals who donated their DNA samples to deCODE necessarily lost all rights to their personal genetic data once it was stored in that company's databases? Should deCODE hold rights to this data in perpetuity, and should deCODE be permitted to do whatever it wishes with that data?

Intellectual Property Rights and Ownership Issues (Continued) 

To see why questions involving the ownership of personal genetic information stored in commercial databases are so controversial from an ethical point of view, recall our discussion in Chapter 5 of a commercial database containing personal information about customers that was owned by Toysmart.com, a now defunct online business. deCODE Genetics, Inc. is now also in the process of filing for bankruptcy.

Table 12-2: Ethical Issues Associated with Bioinformatics

Personal Privacy and Confidentiality: The aggregation of personal genetic data, via data mining, can generate privacy issues affecting "new groups" and "new facts" about individuals.

Informed Consent and Autonomy: The nontransparent (or "opaque") consent process can preclude "valid" or "fully informed" consent, thereby threatening individual autonomy.

Intellectual Property Rights/Ownership: The storage of personal genetic data in electronic databases raises questions about who should have ownership rights and access to the data.

Ethical Guidelines and Legislation for Genetic Data/Bioinformatics 



ELSI (Ethical, Legal, and Social Implications) guidelines have been established for federally funded genomics research. But ELSI requirements do not apply to genomics research in the commercial sector.

Ethical Guidelines and Legislation (Continued) 



Some genetic-specific privacy laws and policies have been passed in response to concerns about potential misuses of personal genetic data. In the U.S., laws addressing genetic discrimination were (prior to 2008) enacted primarily at the state level.

Ethical Guidelines and Legislation (Continued) 

The Health Insurance Portability and Accountability Act (HIPAA), whose privacy provisions took effect in 2003, provides broad protection at the federal level in the U.S. for personal medical information.

E.g., HIPAA protects the privacy of "individually identifiable health information" from "inappropriate use and disclosure."

Ethical Guidelines and Legislation (Continued) 

Critics worry that HIPAA does not provide any special privacy protection for personal genetic information. 

E.g., it is not clear that HIPAA adequately addresses concerns affecting nonconsensual secondary uses of personal medical and genetic information (Baumer, Earp, and Payton, 2006).

U.S. Genetics Laws at the Federal Level 



In 2008, the Genetic Information Nondiscrimination Act (GINA) was signed into law in the U.S. GINA's aim is to protect Americans against forms of discrimination based on their genetic information.

GINA (Continued) 



GINA was designed to protect individuals with respect to genetic discrimination affecting employment and health insurance. But GINA's critics believe that this law is "overly broad" and that it will not succeed in rectifying the many existing state laws in the U.S. that tend to be inconsistent regarding genetic discrimination.

Nanotechnology 

Rosalyn Berne (2005) defines nanotechnology as the study, design, and manipulation of natural phenomena, artificial phenomena, and technological phenomena at the nanometer level.

K. Eric Drexler, who coined the term nanotechnology in the 1980s, describes the field as a branch of engineering dedicated to the development of extremely small electronic circuits and mechanical devices built at the molecular level of matter.

Nanotechnology and Nanocomputing 





Drexler (1991) predicted that developments in nanotechnology would result in computers at the nano-scale, no bigger than bacteria, called nanocomputers. Nanocomputers can be designed using various types of architectures. An electronic nanocomputer would operate in a manner similar to present-day computers, differing primarily in terms of size and scale.

Nanotechnology and Nanocomputing (Continued) 



To appreciate the scale of future nanocomputers, imagine a mechanical or electronic device whose dimensions are measured in nanometers (billionths of a meter, or units of 10^-9 meter). Ralph Merkle (2001) predicts that nano-scale computers will be able to deliver a billion billion instructions per second – i.e., a billion times faster than today's desktop computers.
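A back-of-the-envelope check of these figures, assuming (an assumption not stated in the slide) that a conventional desktop executes on the order of 10^9 instructions per second:

```latex
% Sanity check, assuming a conventional desktop executes ~10^9 instructions/s:
\[
  1\,\text{nm} = 10^{-9}\,\text{m}, \qquad
  \frac{10^{18}\ \text{instructions/s (nanocomputer)}}
       {10^{9}\ \text{instructions/s (desktop)}} = 10^{9},
\]
% i.e., "a billion billion" (10^18) instructions per second is a billion
% (10^9) times faster than the assumed desktop rate.
```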

Nanotechnology and Nanocomputing (Continued) 





Although nanocomputing is still in its early stages of development, some primitive nanocomputing devices have already been tested. At Hewlett-Packard, computer memory devices have been developed with eight platinum wires, each 40 nanometers wide, on a silicon wafer. James Moor and John Weckert (2004) note that it would take more than one thousand of these chips to span the width of a human hair.

Optimistic View of Nanotechnology 

Bert Gordijn (2003) considers an "optimistic view," where nanotechnology would: 







- be self-sufficient and "dirt free";
- create unprecedented objects and materials;
- enable the production of inexpensive, high-quality products;
- be used to fabricate food rather than having to grow it;
- provide low-priced and superior equipment for healthcare;
- enable us to enhance our human capabilities and properties.

Pros of Nanotechnology 



Nanites could be used to clean up toxic spills and to eliminate other kinds of environmental hazards. Nanites could also dismantle or "disassemble" garbage at the molecular level and recycle it as food.

Pros of Nanotechnology (Continued) 





Nano-particles inserted into bodies could diagnose diseases and directly treat diseased cells. Doctors could use nanites to make microscopic repairs on areas of the body that are difficult to operate on with conventional surgical tools. With nanotechnology tools, a patient's vital signs could be better monitored.

Pessimistic View of Nanotechnology 

Gordijn also considers the pessimistic view, where nanotechnology developments could result in: 



- severe economic disruption;
- premeditated misuse in warfare and terrorism;
- surveillance with nano-level tracking devices;
- extensive environmental damage;
- uncontrolled self-replication (sometimes referred to as the "grey goo scenario");
- misuse by criminals and terrorists (sometimes referred to as the "black goo scenario").

Cons of Nanotechnology 





All matter (objects and organisms) could theoretically be disassembled and reassembled by nanite assemblers and disassemblers. Since nanites could be created to be self-replicating, what would happen if strict "limiting mechanisms" were not built into them? Theoretically, they could multiply endlessly like viruses.

Cons of Nanotechnology (Continued) 



Our movements could be tracked easily by others via nanoscopic devices such as molecular-sized microphones, cameras, and homing beacons. Our privacy and freedom could be further eroded because governments, businesses, and ordinary people could use these devices to monitor us.

Cons of Nanotechnology (Continued) 



Nanite assemblers and disassemblers could be used to create weapons. Nanites themselves could be used as weapons. 

E.g., Andrew Chen (2002) notes that guns, explosives, and electronic components of weapons could all be miniaturized.

Nanoethics: Identifying and Analyzing Ethical Issues in Nanotechnology 





Moor and Weckert (2004) believe that assessing ethical issues that arise at the nano-scale is important because of the kinds of "policy vacuums" that are raised. They do not argue that a separate field of applied ethics called nanoethics is necessary. But they make a strong case for why an analysis of ethical issues at the nano-level is now critical.

Nanoethics (Continued) 

Moor and Weckert identify three distinct kinds of ethical concerns at the nano-level that warrant analysis:

- privacy and control;
- longevity;
- runaway nanobots.

Ethical Aspects of Nanotechnology: Privacy Issues 





Moor and Weckert note that we will be able to construct nano-scale information-gathering systems. It will become extremely easy to put a nano-scale transmitter in a room, or onto someone's clothing. Individuals may have no idea that these devices are present or that they are being monitored and tracked by them.

Ethical Aspects of Nanotechnology: Longevity Issues 



Moor and Weckert note that while many see longevity as a good thing, there could also be negative consequences. There could be a population problem if the life expectancy of individuals were to change dramatically.

Ethical Aspects of Nanotechnology: Longevity Issues (Continued) 



Moor and Weckert point out that if fewer children are born relative to adults, there could be a concern about the lack of new ideas and "new blood." They also note that questions could arise with regard to how many "family sets" couples, whose lives could be extended significantly, would be allowed to have during their expanded lifetime.

Ethical Aspects of Nanotechnology: Runaway Nanobots 





Moor and Weckert point out that when nanobots work to our benefit, they build what we desire. But when nanobots work incorrectly, they can build what we don't want. Some worry that the replication of these bots could get out of hand.

Should Computer Scientists Participate in Nanocomputing Research/Development? 



Joseph Weizenbaum (1984) argues that computer science research that can have "irreversible and not entirely foreseeable side effects" should not be undertaken. Bill Joy (2000) argues that because developments in nanocomputing threaten to make us an "endangered species," the only realistic alternative is to limit their development.

Future Nanotechnology Research (Continued) 



Ralph Merkle (2001) argues that if research in nanotechnology is prohibited, or even restricted, it will simply be done underground. If this happens, nano research will not be regulated by governments or by professional agencies concerned with social responsibility.

Should Research Continue in Nanotechnology? 

Weckert (2006) argues that potential disadvantages that can result from research in a particular field are not in themselves sufficient grounds for halting research. 



He suggests that there should be a presumption in favor of freedom in research.

But Weckert also argues that it should be permissible to restrict or even forbid research where it can be clearly shown that harm is more likely than not to result from that research.

Assessing Nanotechnology Risks: Applying the Precautionary Principle 



Questions about how best to proceed in scientific research when there are concerns about harm to the public good are often examined via the Precautionary Principle. Weckert and Moor (2004) interpret the precautionary principle in the following way: If some action has a possibility of causing harm, then that action should not be undertaken, or some measure should be put in its place to minimize or eliminate the potential harms.

The Precautionary Principle (Continued) 

Weckert offers the following strategy:

If a prima facie case can be made that some research will likely cause harm...then the burden of proof should be on those who want the research carried out to show that it is safe.



He also says that there should be:

...a presumption in favour of freedom until such time a prima facie case is made that the research is dangerous. The burden of proof then shifts from those opposing the research to those supporting it. At that stage the research should not begin or be continued until a good case can be made that it is safe.

Nanotechnology and the Precautionary Principle (Continued) 

Weckert and Moor believe that when the precautionary principle is applied to questions about nanotechnology research and development, it needs to be analyzed in terms of three different "categories of harm":

- direct harm,
- harm by misuse,
- harm by mistake or accident.

The kinds of risks involved in each differ.

The Need for Clear Ethical Guidelines for Nanocomputing and Nanotechnology 



Ray Kurzweil (2005) has suggested that an ELSI-like model should be developed and used to guide researchers working in nanotechnology. Many consider the ELSI framework to be an ideal model because it is a "proactive" ethics framework.

The Need for Ethical Guidelines (Continued) 



Moor (2006) notes that in most scientific research areas, ethics has had to play "catch up," because guidelines were developed in response to cases where serious harm had already resulted. Prior to the ELSI Program, ethics was typically "reactive" in the sense that it followed scientific developments rather than informing scientific research.

Ethical Guidelines (Continued) 

Moor and Weckert (2004) are critical of the ELSI model because it employs a scheme that they call an "ethics-first" framework.



They believe that this framework has problems because it depends on a "factual determination" of the specific harms and benefits of a technology before an ethical assessment can be done. They also note that in the case of nanotechnology, it is very difficult to know what the future will be.

Ethical Guidelines (Continued) 



If we developed an ELSI-like ethics model, it might seem appropriate to put a moratorium on nanotechnology research until we get all of the facts. But Moor and Weckert argue that while a moratorium would halt technology developments, it would not advance ethics in the area of nanotechnology.

Ethical Guidelines (Continued) 





Moor and Weckert also argue that turning back to an "ethics-last" model is not desirable either. They note that once a technology is in place, much unnecessary harm may already have occurred. So, for Moor and Weckert, neither an ethics-first nor an ethics-last model is satisfactory for nanotechnology.

Ethical Guidelines (Continued) 

Moor and Weckert argue that ethics is something that needs to be done continually as:  



technology develops, and its potential consequences become better understood.

They also point out that ethics is "dynamic" in that the factual component on which it relies has to be continually updated.

Ethical Guidelines (Continued) 



Thus far, nanotechnology guidelines in the private sector have been implemented by the Foresight Institute. The U.S. Government has created the National Nanotechnology Initiative (NNI) to monitor and guide federally funded research in nanotechnology.

Ethical Guidelines (Continued) 



Some worry that conflicts of interest involving the military and national defense initiatives can easily arise. Much of the funding for nanotechnology research has come from government agencies, including the:  

- National Science Foundation (NSF),
- Defense Advanced Research Projects Agency (DARPA).

Ethical Guidelines (Continued) 

Andrew Chen (2002) believes that in addition to NSF and DARPA, other stakeholders include:   

- researchers (independent and privately funded),
- nanotechnology users,
- potentially everyone (since all of us will eventually be affected by developments in nanotechnology).

Ethical Guidelines (Continued) 

Chen proposes that a non-government advisory council be formed to:  



monitor the research, and help formulate a broader set of ethical guidelines and policies.

The ethical guidelines would need to be continually updated in light of ongoing developments in nanotechnology.

Future Challenges Affecting AI 



We considered some implications of AI (artificial intelligence) for our sense of self, and we defined AI in Chapter 11. Three kinds of AI-related concerns that raise future challenges include: 



- Cyborgs and bionic chip implants;
- Expanding the sphere of moral consideration to include AI entities;
- Designing "moral machines."

AI and Controversies Involving Bionic Chip Implants 

Future chip implants made possible by AI could be designed to make a normal person "superhuman." 



Weckert notes that "conventional" implants designed to "correct" deficiencies have been around and used for some time. The purpose of these "therapeutic implants" has been to assist patients in their goal of achieving "normal" states of vision, hearing, heartbeat, etc.

AI and Implants (Continued) 

James Moor (2005) notes that because the human body has "natural functions," some will argue that implanting chips in a body is acceptable as long as these implants maintain and restore the body's "natural functions." 



While therapeutic implants will be accepted, "enhancement implants" will be controversial. Moor believes that many will find a policy based on a therapeutic-enhancement distinction to be appealing.

AI and Implants (Continued) 

Moor believes: 



- therapeutic chip implants, such as pacemakers, defibrillators, and bionic eyes that maintain and restore natural bodily functions, will most likely be accepted;
- enhancement implants, such as those giving patients added arms or infrared vision, will most likely be prohibited.

AI and Implants (Continued) 

Moor believes that such a policy would likely endorse:  



- the use of a chip that reduced dyslexia;
- a chip implant to assist the memory of Alzheimer's patients.

But Moor also believes that this kind of policy would not endorse: 



- the implanting of a "Deep Blue" chip for superior chess play;
- the implanting of a miniature digital camera that would record and play back what a person had just seen.

AI and Implants (Continued) 



We now need to assess some of the advantages and disadvantages of the bionic implants that AI will make possible. Weckert (2006) asks whether we want to be 'superhuman' relative to our current abilities, with implants that enhance our senses, our memories, and our reasoning ability. What would such implants do to our view of what it is to be human?

AI and Cyborgs 





Some now worry that with bionic parts, humans and machines could soon begin to merge into cyborgs. Ray Kurzweil (1999) suggests that the distinction between machines and humans may no longer be useful. Moor believes the question we must continually reevaluate is not whether we should become cyborgs, but rather:

What sort of cyborgs should we become?

AI and Issues of Moral Responsibility 

AI research and development raises two questions about moral responsibility: 



(1) Should we continue to develop artificial-intelligent entities?
(2) If we do, what are our moral responsibilities to them?

Expanding the Sphere of Moral Consideration 



Will we need to expand the domain of moral consideration to include artificial-intelligent entities? If the answer to that question is "yes," two additional questions arise: 



(A) Which kinds of things deserve moral consideration? (B) Why do they warrant it?

Do We Need to Expand the Sphere of Moral Consideration Because of AI? 

Prior to the 20th century, we generally assumed that only human beings deserved ethical consideration. All other entities (animals, trees, natural objects, etc.) were assumed to be mere resources for humans to use (and abuse) as they saw fit.

Expanding Our Sphere of Moral Consideration (Continued) 

Many humans, especially in the Western world, viewed these resources simply as something to be used and disposed of as they saw fit. They also assumed that they had no moral obligations toward these "resources."

Expanding Our Sphere of Moral Consideration (Continued) 

By the mid-twentieth century, the assumption that moral consideration should be granted only to humans was challenged by both:  

animal-rights activists and groups, and environmentalists.

Expanding Our Sphere of Moral Consideration (Continued) 



Animal-rights advocates point out that animals, like humans, are capable of feeling pain and suffering. Many advocates also argued that because animals can suffer, we should grant them moral consideration.

Expanding Our Sphere of Moral Consideration (Continued) 

Some environmentalists have argued that we also should extend moral consideration to include:   

trees, plants, the entire ecosystem.

Expanding Our Sphere of Moral Consideration (Continued) 



Our thinking about what kinds of entities deserve moral consideration has evolved significantly in the past fifty or so years. Soon, there may be compelling reasons to once again expand our sphere of moral consideration to include "artificial-intelligent entities."

Expanding Our Sphere of Moral Consideration (Continued) 



Floridi and Sanders (2004) suggest that we need to grant moral consideration to "information entities" such as "artificial autonomous agents." We have seen that some artificial-intelligent entities now exhibit a form of rationality that parallels, and in some cases exceeds, that of humans.

Expanding Our Sphere of Moral Consideration (Continued) 



Assume that certain artificial-intelligent entities are (or will soon become) "rational" and "autonomous" agents. If our criterion for granting moral consideration to humans is that humans are rational agents, then should we also grant moral consideration to artificial entities that exhibit rationality?

Expanding Our Sphere of Moral Consideration (Continued) 



Review the scenario (in the textbook) on the "artificial boy" from the movie AI. At least three important questions arise: 





- Should the "boy" (or artificial creatures like "him") ever have been developed in the first place?
- Does this "boy" deserve moral consideration (especially from the human parents who adopted "him")?
- Is it morally permissible for the "boy's" adoptive parents to discard "him"?

Designing "Moral Machines" of the Future 

Assuming that we go ahead with the development of artificial entities, including intelligent robots, who should be responsible for teaching these entities right from wrong? Also, can the appropriate instruction for these "moral machines" be accomplished through software code designed to implement agreed-upon ethical rules or guidelines?

Designing "Moral Machines" of the Future (Continued) 



In the 1940s, Isaac Asimov anticipated the need for ethical rules that would guide the robots of the future. He then formulated his (now classic) Three Laws of Robotics:

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
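As a minimal sketch of how the precedence built into these laws might be expressed in code (the predicates below are placeholder assumptions; deciding whether a real action "harms a human" is precisely the open problem):

```python
# Minimal sketch of the precedence ordering built into Asimov's Three Laws.
# The Action fields are placeholder predicates; deciding whether a real action
# "harms a human" is, of course, the hard and unresolved part.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would doing this injure a human being?
    inaction_harms_human: bool  # would NOT doing this allow a human to come to harm?
    ordered_by_human: bool      # was this action ordered by a human?
    endangers_robot: bool       # does this action put the robot's existence at risk?

def permissible(a: Action) -> bool:
    # First Law dominates: never injure, and never allow harm through inaction.
    if a.harms_human:
        return False
    if a.inaction_harms_human:
        return True             # the robot must act, regardless of the laws below
    # Second Law: obey human orders unless they conflict with the First Law.
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when the first two laws are silent.
    return not a.endangers_robot

# Example: an order that would injure a human is refused.
print(permissible(Action(harms_human=True, inaction_harms_human=False,
                         ordered_by_human=True, endangers_robot=False)))  # False
```

Even this toy encoding shows how much work is pushed into the undefined predicates, which leads directly to the question below.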

Are Asimov's "three laws" (from science fiction) adequate to meet the kinds of challenges that robots, (soft) bots, and AI entities of the near future will likely pose for us?