First international AI safety report published

by huewire
January 31, 2025
in TECHNOLOGY

The first international AI safety report will be used to inform upcoming diplomatic discussions around how to mitigate a variety of dangers associated with artificial intelligence (AI), but it highlights that there is still a high degree of uncertainty around the exact nature of many threats and how best to deal with them.

Commissioned after the inaugural AI Safety Summit hosted by the UK government at Bletchley Park in November 2023 – and headed by AI academic Yoshua Bengio – the report covers a wide range of threats posed by the technology, including its impact on jobs and the environment, its potential to proliferate cyber attacks and deepfakes, and how it can amplify social biases.

It also examines the risks associated with market concentration in AI and the growing “AI R&D [Research and Development] divide”, but is limited to looking at all of these risks in the context of systems that can perform a wide variety of tasks, otherwise known as general-purpose AI.

For each of the many risks assessed, the report refrained from drawing definitive conclusions, highlighting the high degree of uncertainty around how the fast-moving technology will develop. It called for further monitoring and evidence gathering in each area.

“Current evidence points to two central challenges in general-purpose AI risk management,” it said. “First, it is difficult to prioritise risks due to uncertainty about their severity and likelihood of occurrence. Second, it can be complex to determine appropriate roles and responsibilities across the AI value chain, and to incentivise effective action.”

However, the report is clear in its conclusion that all of the potential future impacts of AI it outlines are primarily a political question, which will be determined by the choices of societies and governments today.

“How general-purpose AI is developed and by whom, which problems it is designed to solve, whether we will be able to reap its full economic potential, who benefits from it, and the types of risks we expose ourselves to – the answers to these and many other questions depend on the choices that societies and governments make today and in the future to shape the development of general-purpose AI,” it said, adding there is an urgent need for international collaboration and agreement on these issues.

“Constructive scientific and public discussion will be essential for societies and policymakers to make the right choices.”

The findings of the report – which build on an interim AI safety report released in May 2024 that showed a lack of expert agreement over the biggest risks – are intended to inform discussion at the upcoming AI Action Summit in France, slated for early February 2025, which follows on from the two previous summits in Bletchley and Seoul, South Korea.

“Artificial intelligence is a central topic of our time, and its safety is a crucial foundation for building trust and fostering adoption. Scientific research must remain the fundamental pillar guiding these efforts,” said Clara Chappaz, the French minister delegate for AI and digital technologies.

“This first comprehensive scientific assessment provides the evidence base needed for societies and governments to shape AI’s future direction responsibly. These insights will inform crucial discussions at the upcoming AI Action Summit in Paris.”

Systemic risks

In examining the broader societal risks of AI deployment – beyond the capabilities of any individual model – the report said the impact on labour markets in particular is “likely to be profound”.

It noted that while there is considerable uncertainty in exactly how AI will affect labour markets, the productivity gains made by the technology “are likely to lead to mixed effects on wages across different sectors, increasing wages for some workers while decreasing wages for others”, with the most significant near-term impact being on jobs that mainly consist of cognitive tasks.

Improved general-purpose AI capabilities are also likely to increase current risks to worker autonomy and well-being, it said, highlighting the harmful effects “continuous monitoring and AI-driven workload decisions” can have, particularly for logistics workers.

In line with a January 2024 assessment of AI’s impacts on inequality by the International Monetary Fund (IMF) – which found AI is likely to worsen inequality without political intervention – the report said: “AI-driven labour automation is likely to exacerbate inequality by reducing the share of all income that goes to workers relative to capital owners.”

Inequality could be further deepened as a result of what the report terms the “AI R&D divide”, in which development of the technology is highly concentrated in the hands of large companies located in countries with strong digital infrastructure.

“For example, in 2023, the majority of notable general-purpose AI models (56%) were developed in the US. This disparity exposes many LMICs [low- and middle-income countries] to risks of dependency and could exacerbate existing inequalities,” it said, adding that development costs are only set to rise, exacerbating this divide further.

The report also highlighted the rising trend of “ghost work”, which refers to the mostly hidden labour performed by workers – often in precarious conditions in low-income countries – to support the development of AI models. It added that while this work can provide people with economic opportunities, “the contract-style nature of this work often provides few benefits and worker protections and less job stability, as platforms rotate markets to find cheaper labour”.

Related to all of this is the “high degree” of market concentration around AI, which allows a small handful of powerful companies to dominate decision-making around the development and use of the tech.

On the environmental impact, the report noted that while datacentre operators are increasingly turning to renewable energy sources, a significant portion of AI training globally still relies on high-carbon energy sources such as coal or natural gas, and also uses significant amounts of water.

It added that efficiency improvements in AI-related hardware alone “have not negated the overall growth in energy use of AI and possibly further accelerate it because of ‘rebound effects’,” but that “current figures largely rely on estimates, which become even more variable and unreliable when extrapolated into the future due to the rapid pace of development in the field”.

Risks from malfunction

Highlighting the “concrete harms” that AI can cause as a result of its potential to amplify existing political and social biases, the report said it could “lead to discriminatory outcomes including unequal resource allocation, reinforcement of stereotypes, and systematic neglect of certain groups or viewpoints”.

It specifically noted how most AI systems are trained on language and image datasets that disproportionately represent English-speaking and western cultures, that many design choices align to particular worldviews at the expense of others, and that current bias mitigation techniques are unreliable.

“A holistic and participatory approach that includes a variety of perspectives and stakeholders is essential to mitigate bias,” it said.

Echoing the findings of the interim report around human loss of control of AI systems – which some are worried could cause an extinction-level event – the report acknowledged such fears but noted that opinion varies greatly.

“Some consider it implausible, some consider it likely to occur, and some see it as a modest likelihood risk that warrants attention due to its high severity,” it said. “More foundationally, competitive pressures may partly determine the risk of loss of control … [because] competition between companies or between countries can lead them to accept larger risks to stay ahead.”

Risks from malicious use

In terms of malicious AI use, the report highlighted issues around cyber security, deepfakes and its use in the development of biological or chemical weapons.

On deepfakes, it noted the particularly harmful effects on children and women, who face distinct threats of sexual abuse and violence.

“Current detection methods and watermarking techniques, while progressing, show mixed results and face persisting technical challenges,” it said. “This means there is currently no single robust solution for detecting and reducing the spread of harmful AI-generated content. Finally, the rapid advancement of AI technology often outpaces detection methods, highlighting potential limitations of relying solely on technical and reactive interventions.”

On cyber security, it noted that while AI systems have shown significant progress in autonomously identifying and exploiting cyber vulnerabilities, these risks are, in principle, manageable, as AI can also be used defensively.

“Rapid advancements in capabilities make it difficult to rule out large-scale risks in the near term, thus highlighting the need for evaluating and monitoring these risks,” it said. “Better metrics are needed to understand real-world attack scenarios, particularly when humans and AIs work together. A critical challenge is mitigating offensive capabilities without compromising defensive applications.”

It added that while new AI models can create step-by-step guides for creating pathogens and toxins that surpass PhD-level expertise, potentially contributing to a lowering of the barriers to developing biological or chemical weapons, it remains a “technically complex” process, meaning the “practical utility for novices remains uncertain”.
