THINKING ACTIVITY - Why are we so scared of robots / AIs?
First and foremost, you'll hear that robots aren't safe. While it's true that fenced robots in manufacturing are behind those fences for a reason, that doesn't mean their collaborative counterparts are dangerous as well. Nothing is completely safe, but collaborative robots are built to a recognized set of guidelines known as ISO/TS 15066.
This specification gives collaborative robotics a set of standards to abide by, which helps manufacturers ensure their robots are as safe as possible. Today's standards, combined with modern safety technology, have produced several well-established safety modes for collaborative robots:
● Power and force limiting
● Speed and separation monitoring
● Hand guiding
New safety technology continues to emerge, and it is becoming clear that well-informed workers have little reason to fear for their safety around robots in the workplace.
Throughout history, people have feared new technology because they worried it would make their jobs obsolete. Cars, the printing press, and industrial machinery were all met with fear in their day. People were afraid these inventions would put them out of work, but in every case, they did not.
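The "speed and separation monitoring" mode above can be illustrated with a toy calculation. ISO/TS 15066 defines a protective separation distance the robot must maintain from a nearby human; the sketch below is a simplified form of that idea, not the full standard (which also includes position-uncertainty terms, Zd and Zr, omitted here). All function names and parameter values are illustrative assumptions.

```python
# Illustrative sketch of speed and separation monitoring: the robot must
# issue a protective stop before a human can close the remaining gap.
# Simplified from the ISO/TS 15066 protective separation distance; the
# uncertainty terms (Zd, Zr) of the real formula are omitted, and every
# parameter value below is an assumption for illustration only.

def protective_separation_distance(
    v_human: float,   # human approach speed toward the robot, m/s
    v_robot: float,   # robot speed toward the human, m/s
    t_react: float,   # robot controller reaction time, s
    t_stop: float,    # robot stopping time once braking begins, s
    d_stop: float,    # distance the robot travels while braking, m
    c: float = 0.2,   # extra intrusion-distance margin, m (assumed)
) -> float:
    human_travel = v_human * (t_react + t_stop)  # human keeps moving the whole time
    robot_travel = v_robot * t_react + d_stop    # robot coasts, then brakes
    return human_travel + robot_travel + c

def must_stop(current_distance: float, **params) -> bool:
    """True if the measured human-robot distance is below the safe limit."""
    return current_distance <= protective_separation_distance(**params)
```

With an assumed 1.6 m/s human walking speed and modest robot dynamics, the limit comes out well under a metre, which is part of why cobots can share a workspace with people at all.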
Instead, technology creates new industries, new jobs, and more prosperity as a whole. With robots, the same thing is happening today. People in manufacturing are afraid their jobs will be taken, but new jobs are already being created.
Whether it’s someone to program the robots, or a human to work on more intricate tasks that robots can’t perform, new roles are emerging as robots increase production and lower costs. Back-breaking jobs that humans hate can now be given to robots, thus freeing them up to do more rewarding work.
There’s nothing to fear, because technology creates far more than it destroys when it comes to industries, jobs, and careers.
We've made huge strides in artificial intelligence, but we're a long way off from robots that are as smart as, or smarter than, humans. People thinking decades into the future may have concerns about machine intelligence being used against us, but that's not something we need to worry about now.
The current landscape of robotics puts us squarely in control, and the idea of robots turning on us or becoming sentient remains in the realm of science fiction. Prominent figures such as Elon Musk and Stephen Hawking are already thinking about how to harness A.I. responsibly.
Geoff Hinton, known as the “godfather of deep learning,” told the BBC that “You can see things clearly for the next few years, but look beyond 10 years and we can’t really see anything. It’s just a fog.”
Instead, we should focus on the fact that today's A.I. systems can process information too vast or too complex for humans to grasp. There's little to fear from the growth of A.I. while the technology is still in its infancy.
The most common fears surrounding robots are often born of a lack of knowledge. Those who hold them are not alone: many experts across fields as diverse as technology and economics are also expressing fears over the rise of the robots. While these fears are certainly valid, it's important to note that the concerns are being voiced in the hope that the technology can be improved, not prohibited.
To ensure the safe use of A.I., human monitoring of these systems' interactions is essential, and it is something many scientists will be attentive to moving forward.
A 2017 paper by researchers Sandra Wachter, Brent Mittelstadt, and Luciano Floridi warned that one of the biggest concerns regarding A.I. isn't actually the technology itself, but the biases we transfer onto it. They suggested that in order to create technology that could adequately serve and protect all of mankind, it would have to be free from the biases we possess as humans.
They voiced their concerns regarding security robots that can report incidents to the authorities, and whether these robots will be programmed with the same racial biases we see across some aspects of human law enforcement. In order to create safe technology, we first have to examine our own social ills, lest we pass them on to our machines.
It is in human nature to fear the unknown, and however much debate there is around artificial intelligence, much of it remains mysterious. The concept of AI has existed since the 1950s, but it has gained considerable ground and attention since around 2000.
Catchy headlines in well-known publications constantly remind us of advancements in the field, of rising concerns, and of dangers supposedly waiting just around the corner. What is a person to do in this situation? Panic seems like the appropriate response. But what are we actually panicking about?
Even though we still have many questions about artificial intelligence, it is also quite clear that sophisticated AI could make the world a better place. It can help us fight climate change, discover new treatments for diseases, better understand our customers, and take over many of the routine activities we do today. But this does not come without cost: what some of us see as mundane, low-value activities, others see as their only way of making a living.
Knowing that smart machines are coming to handle all these tasks makes a lot of people anxious. For some of them, this is a legitimate fear. However, a lot of people nurture skills that technology is unlikely to replicate. These are related to creativity, abstract or critical thinking, social or emotional intelligence and, of course, programming and other roles tied to developing the AI systems we're afraid of.
Although they are quite entertaining, we should admit that most AI-based movies paint a negative image of a future in which super-intelligent machines take over the world. Mobile devices have outnumbered humans since 2014, so a world where we are no longer dominant does not seem so far-fetched.
Needless to say, AI holds the potential to improve many domains, including controlling robots that can enhance efficiency and precision in different areas. Despite the positive impact robots have had in automating numerous tasks, people continue to have mixed feelings about them. The more robots we have around us, the more reasons we seem to find to be afraid of them, and accidental deaths involving machines certainly have not helped.
The first recorded death caused by a robot occurred in 1979, when a machine that was supposed to retrieve parts from a storage area malfunctioned, killing a worker. The last such registered case was in 2015.
A contractor working at one of Volkswagen's production plants was killed by the very robot he was setting up to grab and manipulate auto parts. Even though robots were involved in these accidents, later investigations concluded that they caused harm not because of bugs or malice, but because the people who created and handled them did not properly validate the software. Artificial intelligence itself had nothing to do with it. Moreover, studies show that the number of industrial accidents in the US has decreased as automation and artificial intelligence have been more widely adopted.
Another reason artificial intelligence is seen as dangerous is the belief that it may be used to build weapons. On one level, the idea of machines fighting machines and saving human lives doesn't sound too bad, but there are many other implications to consider. What if the machines malfunction, as we have already seen happen? What about countries that lack the technical resources to build such systems? In a (hopefully unlikely) military conflict involving AI weapons, how are people supposed to protect themselves? These are just some of the questions that keep certain people awake at night.
Among them is Max Tegmark, a Swedish-American physicist and the leader of the Future of Life Institute. In 2015, he expressed his worries about autonomous weapons in an open letter co-authored with Stuart Russell, a computer scientist known for his contributions to artificial intelligence. The letter was eventually signed by over 17,000 individuals, including Stephen Hawking. It asked governments to refrain from building AI weapons and pointed out that AI researchers do not want to tarnish their field by allowing uses of AI that are not beneficial to the human race.