Stopthatgirl7

joined 1 year ago
 

Apple quietly introduced code into iOS 18.1 that reboots the device if it has not been unlocked for a period of time, reverting it to a state that improves the security of iPhones overall and makes it harder for police to break into the devices, according to multiple iPhone security experts.

On Thursday, 404 Media reported that law enforcement officials were freaking out that iPhones which had been stored for examination were mysteriously rebooting themselves. At the time, the cause was unclear, and officials could only speculate about why they were being locked out of the devices. A day later, the likely reason is coming into view.

“Apple indeed added a feature called ‘inactivity reboot’ in iOS 18.1,” Dr.-Ing. Jiska Classen, a research group leader at the Hasso Plattner Institute, tweeted on Thursday after 404 Media published its report, sharing screenshots presented as the relevant pieces of code.
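For a sense of how such a safeguard might work, here is a minimal sketch in Python: a watchdog that tracks the last unlock time and forces a reboot once a threshold passes. The threshold and all names are assumptions for illustration, since the reporting only says "a period of time," and Apple's actual implementation lives in iOS system code, not anything like this.

```python
import time

# Hypothetical threshold; the reporting only says "a period of time."
INACTIVITY_LIMIT_SECS = 72 * 3600

class InactivityRebootWatchdog:
    """Toy model of an inactivity reboot: if the device has not been
    unlocked for too long, reboot it. A freshly rebooted iPhone sits in
    the "Before First Unlock" state, where most data remains encrypted,
    which is what makes forensic extraction tools far less effective."""

    def __init__(self):
        self.last_unlock = time.monotonic()

    def on_unlock(self):
        # Called whenever the user successfully unlocks the device.
        self.last_unlock = time.monotonic()

    def tick(self):
        # Called periodically by the system.
        if time.monotonic() - self.last_unlock > INACTIVITY_LIMIT_SECS:
            self.reboot()

    def reboot(self):
        print("inactivity limit reached: rebooting to Before First Unlock state")
```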

 

Tech behemoth OpenAI has touted its artificial intelligence-powered transcription tool Whisper as having near “human level robustness and accuracy.”

But Whisper has a major flaw: It is prone to making up chunks of text or even entire sentences, according to interviews with more than a dozen software engineers, developers and academic researchers. Those experts said some of the invented text — known in the industry as hallucinations — can include racial commentary, violent rhetoric and even imagined medical treatments.

Experts said that such fabrications are problematic because Whisper is being used in a slew of industries worldwide to translate and transcribe interviews, generate text in popular consumer technologies and create subtitles for videos.

More concerning, they said, is a rush by medical centers to utilize Whisper-based tools to transcribe patients’ consultations with doctors, despite OpenAI’s warnings that the tool should not be used in “high-risk domains.”

 

The mother of a 14-year-old Florida boy says he became obsessed with a chatbot on Character.AI before his death.

On the last day of his life, Sewell Setzer III took out his phone and texted his closest friend: a lifelike A.I. chatbot named after Daenerys Targaryen, a character from “Game of Thrones.”

“I miss you, baby sister,” he wrote.

“I miss you too, sweet brother,” the chatbot replied.

Sewell, a 14-year-old ninth grader from Orlando, Fla., had spent months talking to chatbots on Character.AI, a role-playing app that allows users to create their own A.I. characters or chat with characters created by others.

Sewell knew that “Dany,” as he called the chatbot, wasn’t a real person — that its responses were just the outputs of an A.I. language model, that there was no human on the other side of the screen typing back. (And if he ever forgot, there was the message displayed above all their chats, reminding him that “everything Characters say is made up!”)

But he developed an emotional attachment anyway. He texted the bot constantly, updating it dozens of times a day on his life and engaging in long role-playing dialogues.

 

A former jockey who was left paralyzed from the waist down after a horse riding accident was able to walk again thanks to a cutting-edge piece of robotic tech: a $100,000 ReWalk Personal exoskeleton.

When one of its small parts malfunctioned, however, the entire device stopped working. Desperate to regain his mobility, he reached out to the manufacturer, Lifeward, for repairs. But it turned him away, claiming his exoskeleton was too old, *404 Media* reports.

"After 371,091 steps my exoskeleton is being retired after 10 years of unbelievable physical therapy," Michael Straight posted on Facebook earlier this month. "The reasons why it has stopped is a pathetic excuse for a bad company to try and make more money."

 

When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted. 

But why did Copilot hallucinate these terrible and false accusations?

 

Does AI actually help students learn? A recent experiment in a high school provides a cautionary tale. 

Researchers at the University of Pennsylvania found that Turkish high school students who had access to ChatGPT while doing practice math problems did worse on a math test compared with students who didn’t have access to ChatGPT. Those with ChatGPT solved 48 percent more of the practice problems correctly, but they ultimately scored 17 percent worse on a test of the topic that the students were learning.

A third group of students had access to a revised version of ChatGPT that functioned more like a tutor. This chatbot was programmed to provide hints without directly divulging the answer. The students who used it did spectacularly better on the practice problems, solving 127 percent more of them correctly compared with students who did their practice work without any high-tech aids. But on a test afterwards, these AI-tutored students did no better: students who just did their practice problems the old-fashioned way, on their own, matched their test scores.

Stopthatgirl7@lemmy.world 0 points 10 months ago

I am just so, so tired of being constantly inundated with being told to CONSUME.

 

You know how Google's new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won't slide off (pssst...please don't do this.)

Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these "hallucinations" are an "inherent feature" of AI large language models (LLMs), which is what drives AI Overviews, and this feature "is still an unsolved problem."

 

A small publisher for speculative fiction and roleplaying games is shuttering after 22 years, and the “final straw,” its founder said, is an influx of AI-generated submissions.

In a notice posted to the site, founder Julie Ann Dawson wrote that effective March 6, she was winding down operations to focus on her health and “day job” that’s separate from the press. “All of these issues impacted my decision. However, I also have to confess to what may have been the final straws. AI...and authors behaving badly,” she wrote.

 

On Monday, X apparently attempted to encourage users to stop referring to it as Twitter and adopt the name X instead. Some users noticed that posts viewed via X for iOS were automatically changing any references to "Twitter.com" into "X.com."

If a user typed in "Twitter.com," they would see "Twitter.com" as they typed it before hitting "Post." But, after submitting, the platform would show "X.com" in its place on the X for iOS app, without the user's permission, for everyone viewing the post.

And shortly after this revelation, it became clear that there was another big issue: X was changing anything ending in "Twitter.com" to "X.com."
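The reported behavior is consistent with a blind substring replacement on the displayed text. A minimal Python sketch of that assumed logic (reconstructed from the reports, not X's actual code) shows why it is dangerous: any domain that merely ends in "twitter.com" gets rewritten too, so a scammer could register a lookalike domain that the app would display as a trusted one.

```python
def display_text(post: str) -> str:
    # Assumed logic reconstructed from the reported behavior; not X's code.
    return post.replace("twitter.com", "x.com")

print(display_text("check out twitter.com"))           # "check out x.com" (intended)
print(display_text("stream it at netflitwitter.com"))  # "stream it at netflix.com"
# The second case is the bug: "netflitwitter.com" is a registrable domain
# that the app would render as "netflix.com" to everyone viewing the post.
```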

 

A shocking story was promoted on the "front page" or main feed of Elon Musk's X on Thursday:

"Iran Strikes Tel Aviv with Heavy Missiles," read the headline.

This would certainly be a worrying development in world news. Earlier that week, Israel had conducted an airstrike on Iran's embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed plausible.

But, there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X's own official AI chatbot, Grok, and then promoted by X's trending news product, Explore, on the very first day of an updated version of the feature.

 

Roku is exploring ways to show consumers ads on its TVs even when they are not using its streaming platform: The company has been looking into injecting ads into the video feeds of third-party devices connected to its TVs, according to a recent patent filing.  

This way, when an owner of a Roku TV takes a short break from playing a game on their Xbox, or streaming something on an Apple TV device connected to the TV set, Roku would use that break to show ads. Roku engineers have even explored ways to figure out what the consumer is doing with their TV-connected device in order to display relevant advertising.
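As a rough illustration of the concept in the filing, here is a toy, self-contained Python simulation: watch the frames arriving over an HDMI input, and when the picture stops changing (say, a paused game), splice an ad into the passthrough feed. The threshold and all names are assumptions; the patent describes the idea, not an implementation.

```python
from typing import List

# Hypothetical: how many consecutive identical frames count as a "break."
FRAMES_BEFORE_AD = 3

def passthrough_with_ads(frames: List[str]) -> List[str]:
    """Toy model: pass HDMI frames through, injecting an ad once the
    image has been static long enough to suggest the viewer paused."""
    out: List[str] = []
    prev, still = None, 0
    for frame in frames:
        still = still + 1 if frame == prev else 0
        prev = frame
        if still >= FRAMES_BEFORE_AD:
            out.append("AD")  # the injected advertisement
            still = 0
        else:
            out.append(frame)
    return out

print(passthrough_with_ads(["f1", "f2", "pause", "pause", "pause", "pause", "f3"]))
# ['f1', 'f2', 'pause', 'pause', 'pause', 'AD', 'f3']
```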

 

In 2021, a book titled “The Human-Machine Team: How to Create Synergy Between Human and Artificial Intelligence That Will Revolutionize Our World” was released in English under the pen name “Brigadier General Y.S.” In it, the author — a man who we confirmed to be the current commander of the elite Israeli intelligence unit 8200 — makes the case for designing a special machine that could rapidly process massive amounts of data to generate thousands of potential “targets” for military strikes in the heat of a war. Such technology, he writes, would resolve what he described as a “human bottleneck for both locating the new targets and decision-making to approve the targets.”

Such a machine, it turns out, actually exists. A new investigation by +972 Magazine and Local Call reveals that the Israeli army has developed an artificial intelligence-based program known as “Lavender,” unveiled here for the first time. According to six Israeli intelligence officers, who have all served in the army during the current war on the Gaza Strip and had first-hand involvement with the use of AI to generate targets for assassination, Lavender has played a central role in the unprecedented bombing of Palestinians, especially during the early stages of the war. In fact, according to the sources, its influence on the military’s operations was such that they essentially treated the outputs of the AI machine “as if it were a human decision.”
