Military robotics has proven to be one of the most ethically charged robotics applications. Today these machines are largely remotely operated (telerobots) or semi-autonomous, but over time these machines are likely to become more and more autonomous due to the necessities of modern warfare (Singer 2009). In the first decade of war in the 21st century, robotic weaponry has been involved in numerous killings of both soldiers and noncombatants, and this fact alone is of deep moral concern. Gerhard Dabringer has conducted numerous interviews with ethicists and technologists regarding the implications of automated warfare (Dabringer 2010). Many ethicists are cautious in their acceptance of automated warfare, with the provision that the technology is used to enhance just warfare practices (see Lin et al. 2008; Sullins 2009b), but others have been highly skeptical of the prospects of a just autonomous war due to issues like the risk to civilians (Asaro 2008; Sharkey 2011).



There have already been a number of valuable contributions to the growing field of robotic ethics (roboethics). For example, in Wallach and Allen's book Moral Machines: Teaching Robots Right from Wrong (2010), the authors present ideas for the design and programming of machines that can functionally reason on moral questions, as well as examples from the field of robotics where engineers are trying to create machines that can behave in a morally defensible way. The introduction of semi- and fully autonomous machines into public life will not be simple. Towards this end, Wallach (2011) has also contributed to the discussion on the role of philosophy in helping to design public policy on the use and regulation of robotics.

 



Scientists at the J. Craig Venter Institute were able to synthesize an artificial bacterium called JCVI-syn1.0 in May of 2010. While media paid attention to this breakthrough, they tended to focus on the potential ethical and social impacts of the creation of artificial bacteria. Craig Venter himself launched a public relations campaign trying to steer the conversation about issues relating to creating life. This first episode in the synthesis of life gives us a taste of the excitement and controversy that will be generated when more viable and robust artificial protocells are synthesized. The ethical concerns raised by Wet ALife, as this kind of research is called, are more properly the jurisdiction of bioethics. But it does have some concern for us here in that Wet ALife is part of the process of turning theories from the life sciences into information technologies. This will tend to blur the boundaries between bioethics and information ethics. Just as software ALife might lead to dangerous malware, so too might Wet ALife lead to dangerous bacteria or other disease agents.

Critics suggest that there are strong moral arguments against pursuing this technology and that we should apply the precautionary principle here, which states that if there is any chance of a technology causing catastrophic harm, and there is no scientific consensus suggesting that the harm will not occur, then those who wish to develop that technology or pursue that research must prove it to be harmless first (see Epstein 1980). Mark Bedau and Mark Traint argue against too strong an adherence to the precautionary principle by suggesting that instead we should opt for moral courage in pursuing such an important step in human understanding of life (2009). They appeal to the Aristotelian notion of courage: not a headlong and foolhardy rush into the unknown, but a resolute and careful step forward into the possibilities offered by this research.
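The precautionary principle as stated above has a simple conditional structure, which a minimal sketch can make explicit (the function and parameter names here are illustrative, not part of any source):

```python
def may_proceed(catastrophic_harm_possible: bool,
                consensus_harm_will_not_occur: bool,
                proven_harmless: bool) -> bool:
    """Toy encoding of the precautionary principle described above."""
    # If catastrophic harm is at all possible and no scientific consensus
    # rules it out, the burden of proof falls on those pursuing the work.
    if catastrophic_harm_possible and not consensus_harm_will_not_occur:
        return proven_harmless
    # Otherwise the principle, as stated, imposes no bar.
    return True
```

Note how demanding the rule is: any nonzero chance of catastrophe, absent consensus to the contrary, blocks the research until harmlessness is proven, which is the strictness Bedau and Traint push back against.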


For argument's sake, assume Turing is correct even if he is off in his estimation of when AI will succeed in creating a machine that can converse with you. Yale professor David Gelernter worries that there would be certain uncomfortable moral issues raised. “You would have no grounds for treating it as a being toward which you have moral duties rather than as a tool to be used as you like” (Gelernter 2007). Gelernter suggests that consciousness is a requirement for moral agency and that we may treat anything without it in any way that we want without moral regard. Sullins (2006) counters this argument by noting that consciousness is not required for moral agency. For instance, nonhuman animals and the other living and nonliving things in our environment must be accorded certain moral rights, and indeed, any Turing-capable AI would also have moral duties as well as rights, regardless of its status as a conscious being (Sullins 2006).



Artificial Intelligence (AI) refers to the many longstanding research projects directed at building information technologies that exhibit some or all aspects of human-level intelligence and problem solving. Artificial Life (ALife) is a project that is not as old as AI and is focused on developing information technologies and/or synthetic biological technologies that exhibit life functions typically found only in biological entities. A more complete description of logic and AI can be found in the entry on logic and AI. ALife essentially sees biology as a kind of naturally occurring information technology that may be reverse engineered and synthesized in other kinds of technologies. Both AI and ALife are vast research projects that defy simple explanation. Instead, the focus here is on the moral values that these technologies impact and the way some of these technologies are programmed to affect emotion and moral concern.


Information technology has an interesting growth pattern that has been observed since the founding of the industry. Intel engineer Gordon E. Moore noticed that the number of components that could be installed on an integrated circuit doubled every year for a minimal economic cost, and he thought it might continue that way for another decade or so from the time he noticed it in 1965 (Moore 1965). History has shown his predictions were rather conservative. This doubling of speed and capabilities, along with a halving of cost, has proven to continue every 18 or so months since 1965 and shows little evidence of stopping. And this phenomenon is not limited to computer chips but is also present in all information technologies.

The potential power of this accelerating change has captured the imagination of the noted inventor Ray Kurzweil, who has famously predicted that if this doubling of capabilities continues, and more and more technologies become information technologies, then there will come a point in time where the change from one generation of information technology to the next will become so massive that it will change everything about what it means to be human; at this moment, which he calls “the Singularity,” our technology will allow us to become a new posthuman species (2006). If this is correct, there could be no more profound change to our moral values. There has been some support for this thesis from the technology community, with institutes such as the Singularity Institute, the Acceleration Studies Foundation, the Future of Humanity Institute, and H+. Reaction to this hypothesis from philosophy has been mixed but largely critical. For example, Mary Midgley (1992) argues that the belief that science and technology will bring us immortality and bodily transcendence is based on pseudoscientific beliefs and a deep fear of death.
In a similar vein, Sullins (2000) argues that there is a quasi-religious aspect to the acceptance of transhumanism, and that accepting the transhumanist hypothesis influences the values embedded in computer technologies, which can become dismissive of or hostile to the human body. While many ethical systems place a primary moral value on preserving and protecting the natural, transhumanists do not see any value in defining what is natural and what is not, and they consider arguments to preserve some perceived natural state of the human body to be an unthinking obstacle to progress. Not all philosophers are critical of transhumanism; for example, Nick Bostrom (2008) of the Future of Humanity Institute at Oxford University argues that, putting aside the feasibility argument, we must conclude that there are forms of posthumanism that would lead to long and worthwhile lives, and that it would be overall a very good thing for humans to become posthuman if it is at all possible.
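The growth pattern Moore observed is simple compound doubling, and the scale of the numbers it produces helps explain why Kurzweil extrapolates so dramatically from it. A minimal sketch of the arithmetic (the starting component count and the 18-month period are illustrative assumptions, not figures from Moore's paper):

```python
def components_per_chip(years_since_1965: float,
                        initial_count: int = 64,
                        doubling_period_years: float = 1.5) -> float:
    """Compound doubling: capability doubles every ~18 months."""
    return initial_count * 2 ** (years_since_1965 / doubling_period_years)

# After 15 years there have been 10 doublings: a 1024-fold increase.
print(components_per_chip(15))  # 65536.0
```

Ten doublings multiply capability a thousandfold, twenty doublings a millionfold; it is this relentless compounding, rather than any single generation of chips, that drives the Singularity thesis discussed above.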