Google's CEO, Sundar Pichai, has expressed concerns about the potential dangers of artificial intelligence (AI) and called for a global regulatory framework for the technology, similar to the treaties that govern nuclear arms.

He warned that the race to advance AI could lead companies to overlook safety concerns, and that deploying the technology improperly could cause serious harm.

Pichai's comments highlight the need for responsible and ethical use of AI to prevent unintended consequences.

Speaking in an interview on CBS's 60 Minutes, Pichai elaborated on these concerns.

He said that AI deployed incorrectly could be very harmful, and that the rapid pace of technological advancement means we do not yet have all the answers.

His remarks underscore the need for caution and responsible implementation as the technology matures.

Alphabet, Google's parent company, owns UK-based AI firm DeepMind and has developed an AI-powered chatbot called Bard to compete with OpenAI's popular chatbot, ChatGPT.

Even as the company pushes ahead with these products, Pichai has acknowledged the risks associated with AI and reiterated his call for global regulatory frameworks to govern its development.