Inside Apple Intelligence With An AI CEO

The announcement has generated a lot of discussion on social channels.

Samara Lynn
7 min read

At Apple's Worldwide Developers Conference this year, the biggest news surrounded the announcement of Apple Intelligence and the Cupertino, Calif.-based company's partnership with OpenAI.  

Apple Intelligence is Apple's artificial intelligence technology, combined with ChatGPT and integrated into writing tools, image creation and Siri across iPhones, iPads and Macs. The beta version is due out this fall, Apple said.

The announcement has generated a lot of discussion on social channels and among tech analysts. Much of the conversation is about the implications for data privacy and security.

MES Computing spoke with Jon Gillham, an AI expert and the founder and CEO of Originality.ai, about Apple Intelligence and its impact not only on consumers, but on business users. Originality.ai provides AI-generated text detection along with plagiarism checking, fact-checking and readability tools.

Why is the Apple Intelligence announcement causing so much stir online? 

I think everyone has been waiting on the sidelines to see what Apple was going to do in the world of AI. Siri had consistently been a pretty big disappointment to most people in terms of its usefulness, and there were some internal struggles at Apple that contributed to Siri being in the state that it was in. When this generative AI wave came, the major player that hadn't made a significant move yet was Apple. This was their big step into the generative AI market with Apple Intelligence. And the format of the partnership is Apple will be integrating OpenAI ... sort of combining Apple's homegrown AI tools with OpenAI's.

What are the major benefits of this partnership for consumers and for business users? 

I think it's the same for both consumers and business users: they will have access to what is currently the world's leading large language model, in the form of ChatGPT, natively within their Apple devices.

And whether you're an Apple fan or not, they produce pretty impressive products in a pretty closed ecosystem. So this sort of native experience of using AI is going to feel pretty seamless. I think we're going to experience that moment in technology where people are using generative AI without even realizing that they're using it and it's making their experience with their phone even better. 

The real efficiency and benefit to businesses occurs when their staff broadly adopts the efficiency lift that comes from the use of generative AI -- writing emails quicker, doing it in a way that ensures they're communicating the message they intend to communicate, but doing it just faster.

In what ways? 

Like asking Siri questions [that it may not answer effectively or concisely] -- something along the lines of, "summarize what I have to do today." Siri doesn't do very well at that right now.

Jon Gillham, Founder and CEO, Originality.ai

What are the downsides?

The problem, I think, is going to be [what] we're seeing with Google, Google Search and Google AI ... where the number of use cases is going to be so high that there's no scenario I can see where the answers it provides won't have hallucinations that could be anywhere from incorrect and funny to wrong and dangerous.

Most studies of the accuracy and factual correctness of LLMs show accuracy significantly below 99.9 percent. Even at that level, that's a massive number of people who are going to be getting incorrect information without having enough context about the question to know that it is incorrect.

[AI] can come up with some creative answers that are wrong, and you don't want creativity when you're asking about doctor's appointments or medicine to be taken. 
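
For a sense of the scale Gillham is describing, here is a rough back-of-the-envelope sketch in Python. Both numbers are illustrative assumptions, not figures reported by Apple or OpenAI: even 99 percent accuracy leaves an enormous absolute number of wrong answers.

```python
# Back-of-the-envelope: a high accuracy rate still leaves a huge
# absolute number of wrong answers at assistant scale.
# Both inputs are illustrative assumptions, not reported figures.
daily_queries = 1_000_000_000  # hypothetical: 1 billion assistant queries per day
accuracy = 0.99                # hypothetical: 99 percent factually correct

wrong_per_day = daily_queries * (1 - accuracy)
print(f"Wrong answers per day:  {wrong_per_day:,.0f}")   # 10,000,000
print(f"Wrong answers per year: {wrong_per_day * 365:,.0f}")
```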

Let's talk about the implications for cybersecurity and data privacy. What are your thoughts on that? And do you think Apple will make the right provisions in this partnership to put data privacy controls in place?

I think it is a very valid concern, and one that I'm personally concerned about. I think Apple has historically done a pretty admirable job related to data privacy and security -- more so, I would say, than any other big tech company.

I think the big challenge they're going to face is that same ubiquity I've been talking about. With these tools integrated into Apple devices, it's going to be really hard to delineate when you are feeding data into an AI and that AI is consuming it, versus now, where it's pretty clear: I've opened my OpenAI app and I'm interacting with that app. It's clear at that moment.

But they're trying to drive the ubiquity of it and make it everywhere on the device, while at the same time trying to control, and make the user aware of, when their data is being sent to large language models and when it's not. I think that's going to be an interesting challenge for them to address.
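
One way to picture the delineation problem Gillham raises is a routing layer that decides, per request, whether data stays on-device or is sent to a cloud LLM, and surfaces that decision to the user. The sketch below is purely illustrative; every name, rule and threshold is a hypothetical stand-in, not Apple's actual design.

```python
from dataclasses import dataclass

@dataclass
class Request:
    text: str
    contains_personal_data: bool  # e.g., set by an on-device classifier

def can_answer_on_device(req: Request) -> bool:
    # Hypothetical rule: short queries are handled by the local model.
    return len(req.text) < 200

def on_device_model(text: str) -> str:
    return f"[on-device answer to {text!r}]"

def cloud_llm(text: str) -> str:
    return f"[cloud answer to {text!r}]"

def handle(req: Request, user_consented_to_cloud: bool) -> str:
    """Route a request locally or to the cloud, making the boundary explicit."""
    if can_answer_on_device(req):
        return on_device_model(req.text)  # data never leaves the device
    if req.contains_personal_data and not user_consented_to_cloud:
        # Surface the boundary instead of silently shipping data off-device.
        return "This request requires cloud processing. Allow sending it?"
    return cloud_llm(req.text)

print(handle(Request("Summarize everything on my calendar this week " * 10,
                     contains_personal_data=True),
             user_consented_to_cloud=False))
```

The hard part, as Gillham notes, is keeping that boundary visible once AI is woven into every surface of the device rather than confined to a single app.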

Does Apple Intelligence open up new cyberattack vectors? 

For sure. I mean, I have no idea as to the forms that will take, but anytime new technology is implemented, there will be bad actors who will look to exploit it.

MES Computing also discussed with Gillham the AI-generated text detection that his company performs. 

Is there an LLM that is harder to detect than others?

What we're seeing is that most models are converging around a similar level of detectability. The short answer is no: our effectiveness shows very minimal drop-off with each new leading language model, whether it be Llama 3, Mistral, Gemini or Claude – similar detection capabilities regardless of the latest model.

Can the average person eyeball text and see if it is AI written?  

Humans have two biases that lead to this problem. The first is overconfidence bias. If you go into a room and say, "Put up your hand if you're an above-average driver," studies have shown 70 to 80 percent of the room will put up their hands, which is obviously flawed; it should be about 50 percent of the room if everyone is truthful.

And then humans have a pattern recognition bias. We think we can look at noise and recognize patterns. Talk to a gambler playing a game that is absolutely random; they might tell you they have a system because they can recognize patterns. I think these two biases are at play and lead humans to believe that they are more capable than they are at recognizing patterns in noise.

In all the studies that we've looked at, and that have been published, humans are poor at detecting AI-generated content unless significant controls are put in place -- like getting to see someone's previous work and then their new work, which increases humans' ability to detect AI.

But if you're just given random samples, it is no better than the flip of a coin.
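
That "flip of a coin" claim is easy to make concrete with a small simulation: a judge whose guesses carry no signal about whether text is AI-written lands at roughly 50 percent accuracy on a balanced sample. This is purely illustrative and is not Originality.ai's methodology.

```python
import random

random.seed(42)
trials = 10_000
# True labels: 1 = AI-written, 0 = human-written, balanced sample.
labels  = [random.randint(0, 1) for _ in range(trials)]
# A judge whose guesses are independent of the true label.
guesses = [random.randint(0, 1) for _ in range(trials)]

accuracy = sum(g == t for g, t in zip(guesses, labels)) / trials
print(f"Uninformed judge accuracy: {accuracy:.1%}")  # ~50%, i.e., chance level
```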
