London, February 7: Artificial Intelligence is a new technology that has raised a range of concerns, from deepfakes to the displacement of human jobs, and industry experts remain divided on whether it is a force for good or for harm. According to the latest report, the world's biggest AI tech companies, including Google DeepMind, are clashing with the UK government over safety tests. The report said the companies are pushing the government to "speed up the safety tests" for AI systems.

According to the report by the Financial Times, top tech companies including OpenAI, Google, Microsoft, Meta and DeepMind signed voluntary commitments in November 2023 to have their generative AI models reviewed by Britain's AI Safety Institute (AISI). The companies reportedly pledged to adjust their models if the institute found flaws in the technology. Now, these AI companies are reportedly seeking clarity on the tests the AISI will carry out, how long they will take, and what feedback they will receive if risks are found.

The report further mentioned that the AI companies are not "legally obliged" to change or delay the release of their products based on the outcome of the AISI safety tests. However, the AISI reportedly said that the companies should test their models before releasing them. The report added that the UK government said testing of the models is already underway and that it will access the most capable "AI models for pre-deployment testing".

Amid these ongoing developments in the UK, the report highlighted that the dispute with the AI tech companies shows the limitations of "relying on voluntary agreements" to set the parameters of fast-paced technology development. As per the report, the UK government has also outlined "future binding requirements" for leading AI developers, to ensure they are accountable for the safety of their systems.

The report said that UK PM Rishi Sunak's ambition is for the country to tackle risks such as cyber-attacks and the design of bioweapons. The government has reportedly spent £1 million on tests that include "jailbreaking", that is, coaxing AI chatbots into bypassing their guardrails. Google DeepMind reportedly commented that giving the UK government access to its most capable models for research and safety purposes would build expertise and capability for the long term; OpenAI and Meta reportedly declined to comment.
