- Apple, Microsoft among those releasing on-device AI offerings
- How local AI will handle more complex tasks is uncertain
Artificial intelligence and other technology companies are pushing their large language models out of the cloud and onto users’ personal devices in a move they say will enhance privacy and security.
“The center of gravity of AI processing is gradually shifting from the cloud to the edge, on devices,” said Durga Malladi, senior vice president and general manager of technology and edge solutions at Qualcomm Technologies Inc.
On-device LLMs are among the latest attempts at a privacy-enhancing approach to generative AI. Until now, companies have relied on cloud-based enterprise accounts with consumer-facing generative AI tools such as OpenAI’s ChatGPT Enterprise, or have built and customized their own internal solutions. By running generative AI tools directly on devices, the shift from the cloud “removes previous limitations on things like latency, cost and even privacy,” Microsoft wrote in a blog post.
The push toward on-device AI comes as some users have sought a more personalized generative AI experience that doesn’t require trading away their personal data. But the latest remedy is no panacea. Local models rely on the same training processes as their cloud counterparts and can still produce inaccurate outputs or inappropriately expose training data. Adoption will also raise new privacy and security questions among users, their providers, and third parties.
“There’s no magic answer or perfect answer” to protecting privacy while using LLMs, said Brian Hengesbaugh, chair of Baker McKenzie’s global data privacy and security business unit.
On-device AI “looks like a good step in the right direction,” he said. “I would caution, though, not to let anybody think, ‘Oh, great! It’s on device. And now we don’t have anything to worry about.’”
Privacy Risks
Tech companies including Qualcomm Inc., Nvidia Corp., Apple, and Microsoft Corp. say running AI models on individual devices, like an iPhone or Windows laptop, should ensure that prompts fed into generative AI tools remain private. Big tech’s privacy promise is attractive to any organization working with confidential information, whether it be employees’ personal data, trade secrets or copyrighted material.
“You’ve completely eliminated the risk of sending it to the company to process it, to send it back,” said Mark McCreary, chair of Fox Rothschild’s artificial intelligence practice and co-chair of its privacy and data security practice. “You’ve eliminated the concern that, in transit, it’s going to have a problem because it never leaves the device.”
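To make that property concrete, here is a minimal sketch of fully local inference, assuming the open-source llama-cpp-python bindings and a quantized model file already downloaded to disk. Both are illustrative choices; none of the vendors named in this story is confirmed to use this stack, and the model path is hypothetical.

```python
# A minimal sketch of fully local inference with the open-source
# llama-cpp-python bindings (an illustrative choice, not any vendor's
# confirmed runtime). The model path below is hypothetical and assumes
# a quantized model file already stored on the device.
from llama_cpp import Llama

# Load model weights that live entirely on local disk.
llm = Llama(model_path="./models/example-7b-q4.gguf", n_ctx=2048)

# The prompt is processed on local hardware; nothing crosses the
# network, which is the privacy property McCreary describes.
response = llm("Summarize this contract clause: ...", max_tokens=128)
print(response["choices"][0]["text"])
```

Because both the weights and the prompt stay on local storage in this setup, the privacy claim largely reduces to ordinary device security.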
Still, Noah Johnson, co-founder and chief technology officer of data security and governance company Dasera, noted that many of the core perils associated with generative AI remain.
“How do you ensure that the model doesn’t allow someone to learn more than they should about the sensitive data?” he said. “That’s really the crux of the issue and those challenges, again, are pretty fundamental to just machine learning in general and not specific to where the model is running.”
On-device AI tools’ ability to reduce an organization’s risk will ultimately depend on their use case for the technology. If a company leverages generative AI in any automated decision-making capacity, such as for human resources or hiring purposes, “you can still have privacy implications, even if that use case sits on an app, on a phone, or on a device,” said Hengesbaugh.
If a large language model is running on a personal device, that’s where its accompanying data will live. Hosting such consequential volumes of data, both from the original training set and any new information aggregated through inference, will test companies’ existing cybersecurity practices.
“This is a totally new type of technology that we are going to be running on our own devices. And whenever we implement a new system, process, or technology on anything, it opens up a new attack vector as well,” said Alex Urbelis, general counsel and chief information security officer at ENS Labs Ltd., a nonprofit developer of blockchain naming technology. “So, these AI systems may be great because we’re localizing a lot of private data, but on the other hand, are they an attack vector for third parties?”
Those with mature AI governance policies built on the standards set by the National Institute of Standards and Technology or the International Organization for Standardization may be best situated to pivot to on-device AI, Hengesbaugh said.
These companies will now grapple with essential questions at the intersection of AI, privacy, and security, he said. “What’s the process of putting in the prompts? And how long is the data kept? And what are the controls around it? But it wouldn’t be so novel from a cyber perspective that you couldn’t address it with your security impact assessment.”
Still, for many entities, efforts to develop data governance are ongoing. For example, safeguarding company data, especially while navigating “Bring Your Own Device” policies, could become more complicated in the age of on-device AI.
“Every organization constantly fights people taking company data onto more personal devices, usually mobile devices. And maybe as those new, cool features become available, if you do the task on your mobile device,” McCreary said, “there’s an increased risk of that data walking over to the mobile device.”
‘A Change in Perspective’
Beyond privacy and security assurances, providers say offline availability, reduced transmission delays, lower server costs, and data locality are among the strongest appeals of on-device AI.
But that comes with a trade-off in model size. Today’s best-known generative AI models were built to run in the cloud; they’re too large to operate on a single device.
To operate on a device, models must go through what’s known as quantization, a process that compresses models with the goal of preserving their performance. Whether AI models undergoing quantization can handle enterprise-level tasks, such as processing thousands of contracts or providing personalized, automated customer support, remains unclear.
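The arithmetic is easy to sketch: at 16-bit precision, a 7-billion-parameter model (a generic figure, not one cited in this story) needs roughly 7 billion × 2 bytes, or about 14 GB, for weights alone, while 4-bit quantization cuts that to roughly 3.5 GB, within reach of a laptop or high-end phone. Below is a toy illustration of the core idea, using naive symmetric int8 quantization rather than any production scheme:

```python
import numpy as np

def quantize_int8(weights: np.ndarray) -> tuple[np.ndarray, float]:
    """Naive symmetric int8 quantization: map float weights onto the
    integer range [-127, 127] using a single per-tensor scale."""
    scale = float(np.abs(weights).max()) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Approximate reconstruction; the rounding error here is the
    size-versus-performance trade-off described above."""
    return q.astype(np.float32) * scale

w = np.random.randn(1_000_000).astype(np.float32)  # stand-in for model weights
q, scale = quantize_int8(w)
print(f"fp32: {w.nbytes / 1e6:.1f} MB -> int8: {q.nbytes / 1e6:.1f} MB")
print(f"max reconstruction error: {np.abs(w - dequantize(q, scale)).max():.4f}")
```

The reconstruction error printed at the end is exactly the open question about preserving performance: production methods are far more sophisticated, but the size-versus-fidelity tension is the same.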
Advertisements for on-device AI, so far, pitch smaller-scale productivity or creative applications, such as Apple’s new emoji generator.
“You can put a large language model on the device, but at a certain point, you’re going to be constrained about what exactly you can do on-device,” said Frank Dickson, group vice president for market intelligence firm IDC’s security & trust research practice. “So I can code pictures or I can understand language, but I can’t put the internet on the device.”
Tech providers themselves may already be accounting for the limitations of locally run models. Apple, for example, has added the option to tap into its own private cloud servers for more complex requests, an Apple spokesperson told Bloomberg Law. However, the tech giant couldn’t yet give practical examples of what those requests might look like.
Ultimately, widespread deployment of on-device AI, especially for larger-scale tasks, may still be far off and will be heavily dictated by advances and limitations in hardware capability. Companies will make these decisions as they acquire a new generation of devices equipped with AI-capable chips and tools, McCreary said.
Such a transition will represent a reversal of the years-long trend pushing more and more services—and data—into the cloud, sources said. Organizations adopting localized AI systems or using a hybrid approach mixing cloud-based and device-based processes will have to navigate the nuances created in their AI governance programs—including determining when and how data enters and leaves their systems.
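In code, that hybrid governance question often reduces to an explicit routing decision. Below is a hedged sketch of one possible policy; the function, threshold, and sensitivity flag are hypothetical illustrations, not any vendor’s actual design.

```python
# Hypothetical hybrid routing policy: keep work on the device by default
# and escalate to the cloud only when a request exceeds local capacity.
# Every name and threshold below is an illustrative assumption.
LOCAL_CONTEXT_LIMIT = 2048  # rough token budget of the on-device model

def route_request(prompt: str, contains_sensitive_data: bool) -> str:
    est_tokens = len(prompt.split())  # crude stand-in for real tokenization
    if contains_sensitive_data:
        return "local"   # regulated data never leaves the device
    if est_tokens > LOCAL_CONTEXT_LIMIT:
        return "cloud"   # too large for the local model to handle
    return "local"

# A governance program would log each decision, creating the audit trail
# for determining when and how data entered or left the device.
print(route_request("Summarize this short memo.", contains_sensitive_data=True))
```

Making the policy explicit and logged is what turns the governance question into something an auditor can actually verify.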
“It’s going to require a change in perspective,” Urbelis said. “If a lot of the processing and manipulation of data is happening on-device, we have to make sure that we understand what’s happening under the hood.”