On Monday night, at the newly renovated residence of the British ambassador, Rishi Sunak’s government introduced Washington to its latest soft power push: owning the global conversation on AI regulation.
Over Bombay gin cocktails and fish-and-chips hors d’oeuvres, attendees toasted “the future” at a reception to preview the UK’s upcoming AI Safety Summit, set for the beginning of November.
Now, a lot of people and institutions would like to influence the development of AI. And to much of the world, Britain symbolizes the past more than the future.
But in that potential weakness the Sunak government sees its selling point: Pedigree.
The UK’s storied past, so the argument goes, gives it a special claim on guiding humanity’s navigation of an AI-powered future. “We have a good reputation for innovation,” is how Ambassador Karen Pierce put it to DFD.
(In a nod to the difficulties many government officials face in overseeing fast-moving technologies, she also confessed, “I may be the least AI-literate person in the room.”)
In remarks to the crowd — which included representatives of the British military, the American private sector and well-known AI ethicist Rumman Chowdhury — Pierce expounded on her government’s case for taking the global lead on AI safety: The UK’s tech sector was the third in the world to achieve a cumulative valuation of $1 trillion, a milestone its government boasted of last year.
She also spent some time plugging the historic location of next month’s two-day summit: Bletchley Park, 50 miles northwest of London, where computing pioneer Alan Turing made early breakthroughs in his successful quest to break German codes in World War II.
In fact, Britain’s claims on the origins of computing go back another century, to the inventor Charles Babbage’s conception of a digital computer, developed with his aristocratic collaborator, Ada Lovelace, daughter of the Romantic poet Lord Byron.
So that’s the present and the past. How about the future? Today, the world’s sixth-largest economy can lay claim to one of the world’s best-known AI firms, London-based DeepMind, and some of its best-known AI researchers, like the London-born, Cambridge-educated Geoffrey Hinton.
But laying its claim on the future of AI remains a challenge: Google acquired DeepMind a decade ago, and Hinton has made his professional home in North America. Regulators in the EU, which the UK left almost four years ago, oversee a far larger domestic market.
Sunak is reportedly angling to use the summit to launch a multilateral AI Safety Institute, though a British tech official downplayed the summit’s ambitions last night.
“The important part of my brief for tonight was to manage expectations,” said Emran Mian, the Director General for Digital Technologies and Telecoms at the UK’s Department for Science, Innovation and Technology (the British equivalent of the director of the White House Office of Science and Technology Policy). He spoke to POLITICO Tech podcast host Steven Overly at the event, and the full podcast episode will air in the coming days.
Britain has been angling for its own spot in the tech conversation for some time now, even installing an “ambassador” to Silicon Valley, making the case that the UK is a congenial home for tech firms worried about tougher EU regulations.
When it comes to AI, however, some Americans in the room were skeptical of the soft power play: One attendee, citing survey results that have not yet been released, said the British public is not especially concerned about AI safety, suggesting a lack of domestic interest could hobble the effort.
Another attendee, who deals with AI regulations around the world, said that Japan and Singapore have made more notable progress in formulating AI policies in recent years, though they’ve done so more quietly.
Of Singapore, he suggested the UK could make a different nod to history: “They should look to their former colony.”
Cities and states are known as the “laboratories of democracy,” and some local leaders are starting to take that quite literally when it comes to AI.
As part of the Mayors Innovation Studio hosted at Bloomberg’s CityLab conference later this week, 100-plus mayors will discuss how they hope to use AI to optimize city governance. I spoke ahead of the conference with James Anderson, who leads government innovation programs at Bloomberg Philanthropies, to get an early look at what the mayors are most excited about tinkering with.
“When cities understand how other cities are doing things, where they’re using it, where they’re gaining efficiency, and also where they’re striking out, we see more effective interventions and fewer mayors striking out and re-creating the wheel in ways that waste time, energy and resources,” Anderson said, adding that the mayors surveyed before the conference are most interested in learning how to use AI to improve transportation, infrastructure, and public safety.
He said he hopes to send the mayors home with some action items: “Eighty percent of the mayors expressed interest in using AI but only 11 percent of them are actually using it or experimenting with it…. very few mayors have implemented regulations, very few mayors have appointed policy leads, and very few mayors have started training their staff,” Anderson said, adding that after the conference ends Friday, Bloomberg Philanthropies plans to track progress as cities start to implement the programs discussed there.
The nineties are back! … in court.
Pras Michel, the Fugees rapper who found himself at the center of a massive financial and diplomatic scandal that led to his conviction in April for conspiring to make straw campaign donations, witness tampering and acting as an unregistered foreign agent for China, is now seeking a new trial. He argues that his lawyer relied on AI to an extent that deprived him of the right to competent counsel.
As POLITICO’s Josh Gerstein noted, in Michel’s motion he says his lawyer “used an experimental artificial intelligence (AI) program to draft the closing argument, ignoring the best arguments and conflating the charged schemes, and he then publicly boasted that the AI program ‘turned hours or days of legal work into seconds.’” He then, allegedly, “proudly stated at the end of the trial words to the effect of ‘AI wrote our closing.’”
That being said, the AI-assisted closing didn’t win the case. One might also recall a story from June, when a New York City law firm was slapped with a fine for using ChatGPT to write a brief that cited non-existent case law. Good rule of thumb: If you’d double-check a piece of software’s output when using it to help with your kid’s homework, it’s probably not worth staking your legal reputation on it unchecked.
Source: Politico