Signal President Meredith Whittaker warns that agentic AI comes with "profound" security and privacy issues.

On Friday, Signal President Meredith Whittaker warned that agentic AI could come with a threat to user privacy.
Speaking onstage at the SXSW conference in Austin, Texas, the advocate for secure communications described the use of AI agents as "putting your brain in a jar," and warned that this new paradigm of computing, in which AI performs tasks on users' behalf, has a "profound issue" with both privacy and security.
Whittaker explained how AI agents are marketed as a way to add value to your life by handling various online tasks on the user's behalf. For instance, an AI agent could take on tasks like looking up concerts, booking tickets, scheduling the event in your calendar, and messaging your friends that it's booked.
"So we can just put our brain in a jar because the thing is doing that and we don't have to touch it, right?" Whittaker mused.
She then explained the type of access an AI agent would need to perform these tasks, including access to our web browser and a way to drive it, as well as access to our credit card information to pay for tickets, our calendar, and our messaging app to send texts to our friends.
"It would need to be able to drive that [process] across our entire system with something that looks like root permission, accessing every single one of those databases, probably in the clear, because there's no model to do that encrypted," she said.
"And if we're talking about a sufficiently powerful ... AI model that's powering that, there's no way that's happening on device," she continued. "That's almost certainly being sent to a cloud server where it's being processed and sent back. So there's a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data."
If a messaging app like Signal were integrated with AI agents, she said, it would undermine the privacy of your messages. The agent has to access the app to send texts to your friends, and also pull data back to summarize those texts.
Her comments followed remarks she made earlier during the panel on how the AI industry was built on a surveillance model reliant on mass data collection. She argued that the "bigger is better" AI paradigm, meaning the more data the better, has potential consequences that she doesn't think are good.
With agentic AI, Whittaker concluded, we would further undermine privacy and security in the name of a "magic genie bot that's going to take care of the exigencies of life."