
Google AI's Weaponization Could Help Trigger 'Flash-wars' Escalating Too Quickly to Stop

Google chipped further away at its defunct "Don't be evil" motto this week, dropping from its AI development principles a passage that committed not to use the technology for weaponry or surveillance. A leading independent cybersecurity expert told Sputnik why the move is fraught with grave risks.
“AI systems may interact with other network-connected infrastructure in unpredictable ways,” veteran independent cybersecurity expert and digital strategy specialist Lars Hilse explained.

This unpredictability “could potentially trigger flash-wars, which escalate too quickly for the human mind to comprehend, and for the human being to intervene,” Hilse said, highlighting the immense risks of handing defense-related issues over to AI to manage.

That’s not to mention the proliferation threat, said the analyst, who recently authored a book dedicated to these very issues, ‘Dominance on the Digital Battlefield’.
Humanity is only starting to understand the dangers and “unknown risks” associated with AI’s weaponization, Hilse said. “And particularly in a time where global conflict is imminent, we might want to resort to leaving that Genie in the bottle for now,” he urged.

Why Did Google Do It?

But the observer isn’t surprised by Google’s policy shift, with the “recalibration” dictated by the need “to align with market realities and geopolitical demands” and the “insanely lucrative” nature of the defense market.
Google’s new policy means it will be able to participate in these “highly lucrative defense contracts and government surveillance projects and strengthen its position in the AI race, particularly against their Chinese competitors,” Hilse said.
“The policy shift indicates a broader realignment of Silicon Valley with national defense aspirations and may even suggest that previous ethical barriers to military AI development are being systematically removed industry-wide to allow for quicker reaction to market shifts in this, again, extremely lucrative and previously unexplored field of business,” the expert summed up.