3 Critical Challenges of Artificial Intelligence in Today’s World

Before understanding the impact of Artificial Intelligence on human life, and the ways in which it is beginning to shape it, let us first understand what AI really means.

Some also call it Industrial Revolution 4.0. Artificial intelligence is going to be everywhere in our day-to-day activities within the next decade.

The Definition

There is no single definition of Artificial Intelligence. Broadly, it is about creating technology that allows computers to do the work humans do, and gradually take over more and more of it. Sounds scary? Yeah, it’s true.

Despite the different definitions, the common understanding of AI is that it involves machines and computers helping humankind solve problems and streamline working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI is used to describe human-made tools that emulate the “cognitive” abilities of the natural intelligence of human minds.


Types of Artificial Intelligence

There are three types of artificial intelligence we generally see:

Artificial Narrow Intelligence:

It’s really the first stage of AI, and it’s where we are right now. This primarily refers to a computer’s ability to perform a single task extremely well. All of the machine intelligence we see and experience today falls under Narrow Intelligence. Google Assistant, Google Translate, Siri, Alexa and other natural language processing tools are examples of Narrow AI.

Some might assume that these tools aren’t “weak” because of their ability to interact with us and process human language, but the reason that we call it “Weak” AI is because these machines are nowhere close to having human-like intelligence. They lack the self-awareness, consciousness, and genuine intelligence to match human intelligence. In other words, they can’t think for themselves.

Benefits of Narrow Artificial Intelligence:

  • Artificial Narrow Intelligence (ANI) systems can process data and complete tasks much faster than humans can, improving overall productivity, efficiency and quality of life.
  • It takes over the boring, routine, mundane tasks that we don’t want to do: from increasing efficiency in our personal lives, like shopping online with the help of Alexa or ordering pizza with the help of Siri, to sifting through mounds of data and analyzing it to find patterns and produce results.

“Narrow AI isn’t just getting better at processing its environment, it’s also understanding the difference between what a human says and what a human wants.”

Artificial General Intelligence

Artificial general intelligence (AGI), also referred to as strong AI or deep AI, is the ability of machines to think, comprehend, learn, and apply their intelligence to solve complex problems, much like humans. Strong AI uses a theory of mind AI framework to recognize other intelligent systems’ emotions, beliefs, and thought processes. A theory of mind-level AI refers to teaching machines to truly understand all human aspects, rather than only replicating or simulating the human mind.

AGI would be where artificial intelligence rivals human intelligence, in that it is capable of doing lots of different things, the way humans can. DeepMind, a company owned by Google, said it had taken a step in that direction: its AI model Gato was able to carry out more than 600 different tasks with a single system, such as playing video games, captioning images and using a robotic arm to stack blocks.

Artificial Super Intelligence

ASI (Artificial Super Intelligence) is a type of Artificial Intelligence that surpasses human intelligence and can perform any task better than a human. ASI systems not only understand human sentiments and experiences but can also evoke emotions, beliefs, and desires of their own, similar to humans.

Although the existence of ASI is still hypothetical, the decision-making and problem-solving capabilities of such systems are expected to be far superior to those of human beings. An ASI system could think, solve puzzles, form judgments, and make decisions independently.

Recently, Google’s AI model LaMDA was in the news because of its ability to converse with humans and answer judgment-based questions remarkably well. Although Google downplayed the episode, Blake Lemoine, the engineer who conversed with LaMDA, shared snippets of the conversation. If you read them, you will understand where artificial intelligence is heading. Here is the link to the complete discussion Blake Lemoine had with LaMDA at Google.

Now that you have understood the concept and levels of AI, let’s look at how it is going to affect human life.

The idea that we might one day have machines that can think like humans raises a whole bunch of ethical and philosophical questions about what it could mean for society.

ASI would be closer to the sci-fi movies we have seen, in which machines take over from humans completely.

Problems with AI Today:

  • Privacy & civil liberties

We often talk about facial recognition systems and their uses. Although the technology has benefits in our day-to-day lives, it can also be turned in a darker direction (and some countries are already doing so). For example, facial recognition is now a powerful tool for identifying people through nothing more than security cameras in the street. How governments or police might use or abuse that technology is an important debate.

In Russia, for example, police appear to be using it to identify anti-government protesters, and the Chinese government is probably taking it further than anyone else. It is using facial recognition cameras on a mass scale to track people and monitor their behavior. There are several reports that it is also using the technology for racial profiling, to identify Uighur Muslims and Tibetans.

  • Bias and discrimination

There are also big questions about bias and discrimination in AI. This really powerful technology can amplify biases that already exist in society if, say, it has been trained on data that reflects historical inequalities, and there are plenty of ways that can play out. In the US, for instance, a widely used algorithm for assessing health care needs was shown to have a built-in racial bias.

  • Design faults & other mistakes

Next is the danger of design faults, or problematic assumptions about the way an AI system is intended to work. For example, automated flight-control software (MCAS) was found to be a factor in the two Boeing 737 MAX crashes, one in Indonesia and one in Ethiopia, in which 346 people died. Acting on faulty sensor data, the system repeatedly issued dangerous nose-down commands that the pilots struggled to override, forcing the planes into crashes.

What happens if a self-driving car kills someone? Who is responsible? The thing is, these problems aren’t necessarily with the AI itself. It’s about how it is applied, how much we rely on it, and whether we are even able to think through all the possible consequences. Right now, what we really worry about are algorithmic mistakes, not mistakes by the computer.

The mistake lies in how the system was written, and in whether processes were put in place for adequate recourse, so that you can tell when something has gone wrong. The stakes get even higher when we are talking about war, and artificial intelligence being used in lethal weapons. We already have armed drones, controlled by people, that use AI technologies.

But it is not a big stretch to get from there to fully autonomous weapons: machines making life-and-death decisions about who or what to target. It is probably inevitable, but it is really quite scary, and you can imagine some very bad situations.

Imagine, in the Taiwan Strait for example, a large group of American drones facing off against Chinese drones, each swarm trained on classified data. Nobody knows how the two swarms would interact, or whether they might accidentally fire at one another because they mistake, say, light reflecting off a drone for an attack. That could mean accidentally ending up in a war, or even a nuclear conflict.

The Regulations

Proposals for an international ban on autonomous weapons have languished at the UN since 2017. There has been plenty of talking but no agreement. The US says it wants a non-binding code of conduct instead of an outright ban. But it is not just the AI used in weapons that people want to control. Almost every major jurisdiction is thinking about regulating artificial intelligence in some shape or form. China has been moving fast: in March 2022, it put a fairly ambitious law in place to regulate AI.

It is focused on the private sector, especially tech companies, forcing them to be transparent about how AI is being used. The EU is working on its own set of rules. Its proposals categorize AI technology according to risk, so there is a list of banned practices, like real-time facial recognition by police.

There are high-risk uses, like AI being used to hire people or to manage essential services such as electricity and water. That kind of tech would need to follow strict rules on transparency, especially around software development and testing. Then there is limited-risk tech, such as spam filters and customer service bots, which would need to be clearly labelled so that you always know whether you are talking to a person or a machine.

The EU law is still at least three years away, because there is a long bureaucratic process to go through, but experts say it could set a precedent for other countries. There are going to be a lot of different rules and governance styles, and they need to be somewhat compatible; at the same time, we need to respect that different societies have the right to decide how they want to trade off security, privacy and the like. So, is artificial intelligence out of control?

Well in some ways, yes! The technology is moving so fast that any meaningful regulation is still years away. But in the end, AI is not that different from other kinds of technology. It’s got enormous power for good and enormous power for bad. So it’s up to us and our governments to figure out how we want to use it.

“The genie is out of the bottle, and there is no turning back at this point.” – Anonymous

Do you want to know how AI will be adopted in transportation by 2026? Click here.

