We start with some serious dissing and end up dishing.
Between the start and end state, we get served up by Julia and rerouted by the Benz.
We learn about a new voicebox that we can’t have and get the perfect example of a self-licking ice cream cone.
We have new controls on what can be sold and to whom and a new way to use that screwdriver to unlock your Lenovo.
Finally, we have a group of Swiss farmers that think Apple has taken one bite too many.
This week’s update comes with a side of Julia, so let’s order up!
Global: Alexa dishes the dirt.
Ask Amazon’s digital assistant, “Hey, Alexa, is Amazon a monopoly?”
“Hmm, I don’t know that one,” it answers.
But ask about any of the other tech giants’ business practices, and it’s ready to critique them.
Surfacing answers from across the internet, Alexa describes Apple as an “oligopoly” and cites Alphabet’s Google as violating privacy rights, according to Bloomberg News tests of the software on three devices…
[Alexa] won’t label Amazon a monopoly, but it tends to respond in the affirmative when asked the same question about Google, Meta’s Facebook, Microsoft, and Walmart.
When Alexa is asked if Amazon has broken antitrust law, it says, “I don’t have an answer to the question I just heard.”
Google’s Assistant and Apple’s Siri, Alexa’s two closest competitors, each cite news stories on government antitrust lawsuits against their companies.
Siri, for the most part, offers up criticisms of the iPhone maker. But it evades at least one question about Apple’s power.
When asked whether Apple is a monopoly, Siri often replies, “I can’t answer that, but Apple.com should be able to…”
Alexa also cites alleged privacy lapses by its rivals, bringing up a Facebook privacy settlement with the FTC and allegations that Apple’s finger-scanning technology violates constitutional rights.
When users ask whether Amazon violates users’ privacy, Alexa sticks up for its safeguards: “Amazon builds multiple layers of privacy protections into your Alexa experience.”
It also links to an Amazon website with more information about Alexa’s privacy settings.
So what’s the upshot for you? Could this be the start of a tech giant digital assistant dissing war? Stay tuned.
This is the most fun we’ve had since getting Alexa to turn on the kitchen lights.
And if this comes to pass, we think the winner will be the one with the RuPaul voice.
US: AI comes to the drive-thru.
Julia, who works the drive-through at a White Castle in Merrillville, Ind., is in many ways a model employee: polite, prompt, and doesn’t mind working the overnight shift.
Still, something seemed off to John Lewis, a retired carpenter from nearby Lowell.
For one thing, he had to repeat his order for onion rings.
For another, Julia wasn’t human.
Julia is among a freshman class of artificial-intelligence-enabled chatbots being put to work in fast-food drive-throughs.
The robots, which employ conversational-style AI algorithms used by technologies such as OpenAI’s ChatGPT, can take burger orders, substitute cheddar for American cheese and thank customers for their patronage.
They also are programmed to encourage customers to binge on an extra burger or a dessert.
Unlike humans, chatbots are never shy about selling more, nor do they need a break or get distracted by other business, said Michael Guinan, White Castle’s vice president of operations services.
Some customers say they are sick of cranky fast-food workers who can’t hear their orders through defective speaker boxes, and look forward to the robot revolution.
A number of restaurant workers think the same.
White Castle has retooled Julia since first putting it to work in 2020, and it now sounds more conversational, saying things like “you betcha” and “gotcha,” Guinan said.
Del Taco is testing its own chatbot, also named Julia, at five locations.
Executives said that programmers are teaching it to not be thrown off by the weird orders the California-based chain’s drive-throughs occasionally encounter late at night.
“The employees will step in when that guest has a really crazy request, like ‘I want large fries inside of a chocolate shake,’ ” said Del Taco President Chad Gretzema. “They know that the AI is going to go, ‘I’m not quite sure what that is.’ ”
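The escalation pattern Gretzema describes can be sketched in a few lines. This is a toy illustration of our own, not any vendor’s real system; the menu, prices, and phrasing are invented. Known menu items are matched directly, anything the bot can’t parse gets handed to a human, and the bot never forgets the upsell.

```python
# Toy sketch of a drive-thru order bot with a human-escalation fallback.
# The menu and all wording here are invented for illustration.

MENU = {"slider": 1.00, "onion rings": 2.50, "chocolate shake": 3.00}

def take_order(utterance):
    """Match known menu items; escalate anything unrecognized to a human."""
    items = [item for item in MENU if item in utterance.lower()]
    if not items:
        # The weird late-night order: step aside for an employee.
        return [], "I'm not quite sure what that is. One moment for a team member."
    upsell = "Would you like to add a dessert with that?"  # never shy about selling more
    return items, f"Got it: {', '.join(items)}. {upsell}"

items, reply = take_order("Two onion rings, please")
weird_items, weird_reply = take_order("Fold my burger into origami")
```

A real deployment would replace the keyword match with an LLM-backed parser, but the handoff logic stays the same: low confidence means a human takes over.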
So what’s the upshot for you? It sounds like the start of a trend when two fast-food vendors both name their AI Julia.
Perhaps it’s time for McDonald’s to upgrade Flippy, getting her off the grill and in front of customers, and we think Julia would be the perfect name.
EU/US: Mercedes Is Adding ChatGPT To Its Infotainment System
Mercedes is adding OpenAI’s ChatGPT to its MBUX infotainment system.
“U.S. owners of models that use MBUX could opt into a beta program starting from June 16, activating ChatGPT functionality.”
"This will enable the highly versatile large language model to augment the car’s conversation skills.
You can join up simply by telling your car ‘Hey Mercedes, I want to join the beta program.’"
Mercedes describes the capabilities thusly: "Users will experience a voice assistant that not only accepts natural voice commands but can also conduct conversations.
Soon, participants who ask the Voice Assistant for details about their destination, to suggest a new dinner recipe, or to answer a complex question, will receive a more comprehensive answer – while keeping their hands on the wheel and eyes on the road."
So what’s the upshot for you?
If you’re worried about privacy, you should be.
Although Mercedes loudly expresses its concern over user data, it’s clear that it retains and uses your conversations: "The voice command data collected is stored in the Mercedes-Benz Intelligent Cloud, where it is anonymized and analyzed.
Mercedes-Benz developers will gain helpful insights into specific requests, enabling them to set precise priorities in the further development of voice control.
Findings from the beta program will be used to further improve the intuitive voice assistant and to define the rollout strategy for large language models in more markets and languages."
Global: ‘AI or Not’ is a Free Web App That Claims to Detect AI-Generated Photos
“AI or Not” is a free web-based app that claims to be able to identify images generated by artificial intelligence (AI) simply by uploading them or providing a URL.
The app is powered by Optic, a company that says its technology is the smartest content recognition engine for Web3 and claims it is capable of identifying images made using Stable Diffusion, Midjourney, DALL-E, or a GAN.
“Optic AI or Not is a web service that helps users quickly and accurately determine whether an image has been generated by artificial intelligence (AI) or created by a human.
If the image is AI-generated, our service identifies the AI model used (mid-journey, stable diffusion, or DALL-E),” Optic says.
“Our mission is to bring transparency to the media on blockchains so all communities can realize their creative and economic potential.”
The platform, spotted by DIY Photography, is very easy to use. Anyone can upload an image or provide a link to an AI-generated image’s hosted location, and Optic AI or Not provides feedback on whether the image is real or AI-generated in a matter of seconds.
The company says that AI or Not uses “advanced algorithms and machine learning techniques” to analyze images and then detect signs of AI generation.
“Our service compares the input image to known patterns, artifacts, and characteristics of various AI models and human-made images to determine the origin of the content,” Optic explains.
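Optic doesn’t publish its method, and the “advanced algorithms” above work on pixel-level patterns. For contrast, here is the easy version of the problem, a toy check of our own devising: some generator front-ends (for example, popular Stable Diffusion web UIs) write the generation prompt into a PNG `tEXt` metadata chunk, which a few lines of stdlib Python can spot. Real detectors can’t rely on this, since metadata is trivially stripped when an image is re-saved or shared.

```python
import struct

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_text_chunks(data):
    """Yield (keyword, value) pairs from tEXt chunks in a PNG byte string."""
    assert data.startswith(PNG_SIG), "not a PNG"
    pos = len(PNG_SIG)
    while pos + 8 <= len(data):
        (length,) = struct.unpack(">I", data[pos:pos + 4])  # chunk data length
        ctype = data[pos + 4:pos + 8]                       # 4-byte chunk type
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, value = body.partition(b"\x00")
            yield key.decode("latin-1"), value.decode("latin-1")
        pos += 8 + length + 4  # header + data + CRC (CRC not verified here)

def looks_ai_generated(data):
    """Toy heuristic: some generators leave their prompt in a
    'parameters' tEXt chunk. Absence proves nothing."""
    return any(key == "parameters" for key, _ in png_text_chunks(data))

# Build a minimal fake PNG carrying a generator-style tEXt chunk
# (bogus CRC, which our parser doesn't check).
payload = b"parameters\x00a cat, Steps: 20, Sampler: Euler"
fake_png = PNG_SIG + struct.pack(">I", len(payload)) + b"tEXt" + payload + b"\x00" * 4
```

The hard version, telling a clean AI image from a clean photograph by its statistics alone, is exactly what services like AI or Not are selling.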
So what’s the upshot for you? Does it work? It does pretty well overall.
The article notes that it failed to flag as fake the AI-generated picture of Donald Trump kissing Dr. Anthony Fauci (the top pandemic expert in the US during the Covid outbreak, with whom Trump constantly fought), but everyone, and everything, has their off days!
EU: EU Votes To Ban AI In Biometric Surveillance, Require Disclosure From AI Systems
On Wednesday, European Union officials voted to implement stricter proposed regulations concerning AI.
The updated draft of the “AI Act” law includes a ban on the use of AI in biometric surveillance and requires systems like OpenAI’s ChatGPT to reveal when content has been generated by AI.
While the draft is still non-binding, it gives a strong indication of how EU regulators are thinking about AI.
The new changes to the European Commission’s proposed law – which have not yet been finalized – intend to shield EU citizens from potential threats linked to machine learning technology.
The new draft of the AI Act includes a provision that would ban companies from scraping biometric data (such as user photos) from social media for facial recognition training purposes.
News of firms like Clearview AI using this practice to create facial recognition systems drew severe criticism from privacy advocates in 2020.
However, Reuters reports that this rule might be a source of contention with some EU countries that oppose a blanket ban on AI in biometric surveillance.
The new EU draft also imposes disclosure and transparency measures on generative AI.
Image synthesis services like Midjourney would be required to disclose AI-generated content to help people identify synthesized images.
The bill would also require that generative AI companies provide summaries of copyrighted material scraped and utilized in the training of each system.
While the publishing industry backs this proposal, according to The New York Times, tech developers argue against its technical feasibility.
Additionally, creators of generative AI systems would be required to implement safeguards to prevent the generation of illegal content, and companies working on “high-risk applications” must assess their potential impact on fundamental rights and the environment.
The current draft of the EU law designates AI systems that could influence voters and elections as “high-risk.”
It also classifies systems used by social media platforms with over 45 million users under the same category, thus encompassing platforms like Meta and Twitter.
So what’s the upshot for you? Experts say that after considerable debate over the new rules among EU member nations, a final version of the AI Act isn’t expected until later this year.
Global: Meta introduces Voicebox
Meta AI introduces Voicebox, a generative AI model for speech that generalizes tasks with state-of-the-art performance.
- Voicebox can synthesize speech in six languages, and perform noise removal, content editing, style conversion, and diverse sample generation.
- The model is based on Flow Matching, outperforming current state-of-the-art models like VALL-E and YourTTS in terms of intelligibility and audio similarity.
- Potential use cases include in-context text-to-speech synthesis, cross-lingual style transfer, speech denoising and editing, and diverse speech sampling.
- The researchers also developed a highly effective classifier to distinguish between authentic speech and audio generated with Voicebox.
So what’s the upshot for you? Due to potential risks of misuse, Meta AI is not making the Voicebox model or code publicly available at this time.
Global: Researchers Warn of ‘Model Collapse’ As AI Trains On AI-Generated Content
As those following the burgeoning industry and its underlying research know, the data used to train the large language models (LLMs) and other transformer models underpinning products such as ChatGPT, Stable Diffusion, and Midjourney comes initially from human sources – books, articles, photographs and so on – that were created without the help of artificial intelligence.
Now, as more people use AI to produce and publish content, an obvious question arises: What happens as AI-generated content proliferates around the internet, and AI models begin to train on it, instead of on primarily human-generated content?
A group of researchers from the UK and Canada have looked into this very problem and recently published a paper on their work on the open-access preprint server arXiv.
What they found is worrisome for current generative AI technology and its future: “We find that use of model-generated content in training causes irreversible defects in the resulting models.”
Specifically looking at probability distributions for text-to-text and image-to-image AI generative models, the researchers concluded that “learning from data produced by other models causes model collapse – a degenerative process whereby, over time, models forget the true underlying data distribution … this process is inevitable, even for cases with almost ideal conditions for long-term learning.”
“Over time, mistakes in generated data compound and ultimately force models that learn from generated data to misperceive reality even further,” wrote one of the paper’s leading authors, Ilia Shumailov, in an email to VentureBeat.
“We were surprised to observe how quickly model collapse happens: Models can rapidly forget most of the original data from which they initially learned.”
In other words: as an AI training model is exposed to more AI-generated data, it performs worse over time, producing more errors in the responses and content it generates, and producing far less non-erroneous variety in its responses.
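A toy numerical sketch, ours rather than the paper’s, makes the dynamic concrete: repeatedly fit a one-parameter “model” (a Gaussian) to samples drawn from the previous generation’s model, and watch the estimated spread collapse as each generation forgets a little more of the original distribution’s tails.

```python
import random
import statistics

def fit(samples):
    # "Training": estimate the data distribution (here just mean and spread).
    return statistics.mean(samples), statistics.pstdev(samples)

def generate(mean, std, n, rng):
    # "Generation": sample synthetic data from the fitted model.
    return [rng.gauss(mean, std) for _ in range(n)]

rng = random.Random(0)
human_data = [rng.gauss(0.0, 1.0) for _ in range(20)]  # the original, human-made data

mean, std = fit(human_data)
std_before = std
for generation in range(200):
    # Each new model trains only on the previous model's output.
    synthetic = generate(mean, std, 20, rng)
    mean, std = fit(synthetic)
std_after = std
# std_after ends up a small fraction of std_before: with small samples,
# each refit slightly underestimates the spread, and the errors compound.
```

Real LLMs are vastly more complex than a Gaussian, but the paper’s argument is that the same mechanism, finite samples losing the tails of the true distribution generation after generation, drives model collapse.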
As another of the paper’s authors, Ross Anderson, professor of security engineering at Cambridge University and the University of Edinburgh, wrote in a blog post discussing the paper: "Just as we’ve strewn the oceans with plastic trash and filled the atmosphere with carbon dioxide, so we’re about to fill the Internet with blah.
This will make it harder to train newer models by scraping the web, giving an advantage to firms that already did that, or which control access to human interfaces at scale.
Indeed, we already see AI startups hammering the Internet Archive for training data."
So what’s the upshot for you? schwit1 writes: “… and if this paper is correct, generative AI is turning into the self-licking ice cream cone of garbage generation.”
Global: Microsoft Says Early June Disruptions To Outlook, Cloud Platform, Were Cyberattacks
In early June, sporadic but serious service disruptions plagued Microsoft’s flagship office suite – including the Outlook email and OneDrive file-sharing apps – and cloud computing platform.
A shadowy hacktivist group claimed responsibility, saying it flooded the sites with junk traffic in distributed denial-of-service attacks.
Initially reluctant to name the cause, Microsoft has now disclosed that DDoS attacks by the murky upstart were indeed to blame.
But the software giant has offered few details – and did not immediately comment on how many customers were affected and whether the impact was global.
A spokeswoman confirmed that the group that calls itself Anonymous Sudan was behind the attacks.
It claimed responsibility on its Telegram social media channel at the time. Some security researchers believe the group to be Russian.
Microsoft’s explanation in a blog post Friday evening followed a request by The Associated Press two days earlier.
Slim on details, the post said the attacks “temporarily impacted the availability” of some services.
It said the attackers were focused on “disruption and publicity” and likely used rented cloud infrastructure and virtual private networks to bombard Microsoft servers from so-called botnets of zombie computers around the globe.
So what’s the upshot for you? Update your computers and your routers and you can help quell the Zombies!
CN: Mandiant Says China-backed Hackers Exploited Barracuda Zero-Day To Spy on Governments
Security researchers at Mandiant say China-backed hackers are likely behind the mass exploitation of a recently discovered security flaw in Barracuda Networks’ email security gear, which prompted a warning to customers to remove and replace affected devices.
Mandiant, which was called in to run Barracuda’s incident response, said the hackers exploited the flaw to compromise hundreds of organizations likely as part of an espionage campaign in support of the Chinese government.
Almost a third of the targeted organizations are government agencies, Mandiant said in a report published Thursday.
Last month, Barracuda discovered a security flaw affecting its Email Security Gateway (ESG) appliances, which sit on a company’s network and filter email traffic for malicious content.
Barracuda issued patches and warned that hackers had been exploiting the flaw since October 2022.
But the company later recommended customers remove and replace affected ESG appliances, regardless of patch level, suggesting the patches failed or were unable to block the hackers’ access.
In its latest guidance, Mandiant also warned customers to replace affected gear after finding evidence that the China-backed hackers gained deeper access to networks of affected organizations.
So what’s the upshot for you? Ouch! This Barracuda attack is more painful than we first imagined.
EU: EU To Air Ideas on Guarding Prized Technology
The European Commission will unveil on Tuesday possible measures, such as screening of outbound investments and export controls, to keep prized EU technology from countries such as China and prevent it from being put to military use by rivals.
The European Union executive will present its Economic Security Strategy as a “communication” to EU lawmakers and countries, whose leaders are set to discuss relations with China in Brussels next week.
While not a formal legislative proposal, the communication will lay out strategies the 27-nation EU should consider as it seeks to “de-risk” from China and avoid sensitive technology leaking out through exports or investments abroad.
The Commission will need to tread carefully because granting export licenses and weighing security interests are national competencies that EU governments will want to retain.
A Dutch plan that effectively bars Chinese companies from buying the most advanced lithography tools of ASML, which are used to make semiconductors, is a case in point.
The Dutch acted alone but wanted restrictions throughout the EU.
EU officials point out there is no clear way to do this.
So what’s the upshot for you? Tread lightly in the Chinese year of the rabbit.
US/CN: Would you like some Soy sauce with those chips?
TikTok to Huawei routers to DJI drones, rising tensions between China and the US have made Americans – and the US government – increasingly wary of Chinese-owned technologies.
But thanks to the complexity of the hardware supply chain, encryption chips sold by the subsidiary of a company specifically flagged in warnings from the US Department of Commerce for its ties to the Chinese military have found their way into the storage hardware of military and intelligence networks across the West.
In July of 2021, the Commerce Department’s Bureau of Industry and Security added the Hangzhou, China-based encryption chip manufacturer Hualan Microelectronics, also known as Sage Microelectronics, to its so-called “Entity List,” a vaguely named trade restrictions list that highlights companies “acting contrary to the foreign policy interests of the United States.”
Specifically, the bureau noted that Hualan had been added to the list for “acquiring and … attempting to acquire US-origin items in support of military modernization for [China’s] People’s Liberation Army.”
Yet nearly two years later, Hualan – and in particular its subsidiary known as Initio, a company originally headquartered in Taiwan that it acquired in 2016 – still supplies encryption microcontroller chips to Western manufacturers of encrypted hard drives, including several whose websites list Western governments’ aerospace, military, and intelligence agencies as customers: NASA, NATO, and the US and UK militaries.
Federal procurement records show that US government agencies from the Federal Aviation Administration to the Drug Enforcement Administration to the US Navy have bought encrypted hard drives that use the chips, too.
The disconnect between the Commerce Department’s warnings and Western government customers means that chips sold by Hualan’s subsidiary have ended up deep inside sensitive Western information networks, perhaps due to the ambiguity of their Initio branding and its Taiwanese origin prior to 2016.
The chip vendor’s Chinese ownership has raised fears among security researchers and China-focused national security analysts that they could have a hidden backdoor that would allow China’s government to stealthily decrypt Western agencies’ secrets. And while no such backdoor has been found, security researchers warn that if one did exist, it would be virtually impossible to detect it.
“If a company is on the Entity List with a specific warning like this one, it’s because the US government says this company is actively supporting another country’s military development,” says Dakota Cary, a China-focused research fellow at the Atlantic Council, a Washington, DC-based think tank.
“It’s saying you should not be purchasing from them, not just because the money you’re spending is going to a company that will use those proceeds in the furtherance of another country’s military objectives, but because you can’t trust the product.”
The mere fact that so many Western government agencies are buying products that include chips sold by the subsidiary of a company on the Commerce Department’s trade restrictions list points to the complexities of navigating the computing hardware supply chain, says the Atlantic Council’s Cary.
“At a minimum, it’s a real oversight. Organizations that should be prioritizing this level of security are apparently not able to do so, or are making mistakes that have allowed for these products to get into their environments,” he says. “It seems very significant. And it’s probably not a one-off mistake.”
So what’s the upshot for you? We think many companies will find a multitude of instances like this if they take the time to look.
Global: Security Expert Defeats Lenovo Laptop BIOS Password With a Screwdriver
Cybersecurity experts at CyberCX have demonstrated a simple method for consistently accessing older BIOS-locked laptops by shorting pins on the EEPROM chip with a screwdriver, enabling full access to the BIOS settings, and bypassing the password.
Before we go further, it is worth pointing out that CyberCX’s BIOS password bypass demonstration was done on several Lenovo laptops that it had retired from service.
The blog shows that the easily reproducible bypass is viable on the Lenovo ThinkPad L440 (launched Q4 2013) and the Lenovo ThinkPad X230 (launched Q3 2012).
Other laptop and desktop models and brands that have a separate EEPROM chip where passwords are stored may be similarly vulnerable. […]
From reading various documentation and research articles, CyberCX knew that it needed to follow this process on its BIOS-locked Lenovo laptops: locate the correct EEPROM chip; locate the SCL and SDA pins; and short the SCL and SDA pins at the right time.
Checking likely-looking chips on the mainboard and looking up series numbers eventually led to identifying the correct EEPROM.
In the case of the ThinkPad L440, the chip is marked L08-1 X (this may not always be the case).
An embedded video in the CyberCX blog post shows just how easy this ‘hack’ is to do. Shorting the L08-1 X chip pins requires nothing more than a screwdriver tip held between two of the chip legs. Then, once you enter the BIOS, you should find that all configuration options are open to be changed. Some timing is required, but it isn’t especially tight, so there is some latitude; you can watch the video for a bit of ‘technique.’
CyberCX includes some quite in-depth analysis of how its BIOS hack works and explains that you can’t just short the EEPROM chips straight away as you turn the machine on (hence the need for timing).
Some readers may be wondering about their own laptops or BIOS-locked machines they have seen on eBay and so on.
CyberCX says that some modern machines with the BIOS and EEPROM packages in one Surface Mount Device (SMD) would be more difficult to hack in this way, requiring an “off-chip attack.”
The cyber security firm also says that some motherboard and system makers do indeed already use an integrated SMD.
Those particularly worried about their data, rather than their system, should implement “full disk encryption [to] prevent an attacker from obtaining data from the laptop’s drive,” says the security outfit.
So what’s the upshot for you? er… would you mind passing us that screwdriver, please?
Global: Apple Wants to Own Rights to Images of Real Apples
In its filing, Apple sought rights to the image of a Granny Smith apple for a wide range of uses, including electronic, digital, and audiovisual consumer goods and hardware.
This trademark has been widely granted, and the World Intellectual Property Organization (WIPO), a UN agency that promotes and protects IP internationally, shows that Apple now has protected use of it in a lengthy list of countries that includes Australia, Canada, Japan, the UK, and many more.
In Switzerland, Apple filed to trademark the Granny Smith image in 2017 with the Swiss Institute of Intellectual Property (IPI).
After a drawn-out process, the protection was granted in the fall of 2022 for some of the goods and services Apple wanted to use the trademark for. For others, the IPI concluded that generic images of common goods such as apples are in the public domain.
Apple appealed that decision, and now the legal case is moving through the Swiss courts.
Meanwhile, a 111-year-old organization called Fruit Union Suisse, the oldest and largest of its kind, representing roughly 8,000 apple farmers, is voicing concern that it may soon have to change its logo if Apple prevails.
The group’s logo is of a red apple (without a bite) with the white cross from the Swiss national flag superimposed on it.
So what’s the upshot for you? “We have a hard time understanding this because it’s not like they’re trying to protect their bitten apple,” Fruit Union Suisse director Jimmy Mariéthoz tells Wired.
“Their objective here is really to own the rights to an actual apple, which, for us, is something that is really almost universal … that should be free for everyone to use.”
And the quote of the week - “If everything is under control, you are going too slow.” — Mario Andretti
That’s it for this week. Stay safe, stay secure, wave to Julia, and see you in se7en.