Daml’ers,
This week’s exciting collection of stories takes us from the latest fashionable freeze to the warmth of a fuzzy paparazzo.
- for the podcast of this blog right click the pic -
We have a “stand up, sit down, stand up” again revelation about our friends at TikTok that will leave you… well, either sitting or standing… and a couple of AI stories that describe the horse bolting from the stall, and then what that horse did.
There’s a little relief for those in the US living within 100 air miles of the border and a story that will give you more reasons to brush your teeth than your dentist could come up with.
Finally, you get the world’s (is that even correct?) first hackathon in outer space and the prize money is out of this world!
Come on! Shake off those icicles and let’s warm up with the best adventure yet!
US/CA: Cold Pressed
Try this in private this Summer (Northern Hemisphere)
The hottest new home amenity: cold plunges that can cost tens of thousands of dollars.
Freezing dipping pools are part of a wellness trend as spa amenities become must-haves in bougie backyards. Some of the world’s biggest athletes and stars have made this the latest essential home accessory.
So what’s the upshot for you? Start with an ice bath; anything you do afterward will be better.
UK: Huge cyber security lab opens in Cheltenham, England
A cyber security laboratory big enough to test cars, private jets, and aircraft engines has just opened in Cheltenham, England.
The facility is over 5,000 sq ft (464 sq meters) and is based near the UK’s intelligence agency GCHQ. (GCHQ stands for Government Communications Headquarters and is an intelligence and security organization responsible for providing signals intelligence and information assurance to the government and armed forces of the United Kingdom.)
The company behind it, IOActive, believes it is the first privately-owned lab of its size anywhere in the world.
So what’s the upshot for you? The first rule of cybersecurity is pretty basic. Don’t share your details on the Internet.
Global: Oh Yes They Did!
Over the past several years, thousands of TikTok creators and businesses around the world have given the company sensitive financial information—including their social security numbers and tax IDs—so that they can be paid by the platform.
But unbeknownst to many of them, TikTok has stored that personal financial information on servers in China that are accessible by employees there.
TikTok uses various internal tools and databases from its Beijing-based parent ByteDance to manage payments to creators who earn money through the app, including many of its biggest stars in the United States and Europe.
The same tools are used to pay outside vendors and small businesses working with TikTok.
But a trove of records obtained by Forbes from multiple sources across different parts of the company reveals that highly sensitive financial and personal information about those prized users and third parties has been stored in China.
The discovery also raises questions about whether employees who are not authorized to access that data have been able to. The reporting draws on internal communications, audio recordings, videos, screenshots, documents marked “Privileged and Confidential,” and interviews with several people familiar with the matter.
In testimony before Congress earlier this year, TikTok CEO Shou Zi Chew claimed U.S. user data has been stored on physical servers outside China.
He Lied.
So what’s the upshot for you? TikTok’s privacy policy does say TikTok may transmit user data to servers outside the U.S. for storage or processing, and that no data storage or transmission is guaranteed to be secure.
TikTok’s storage of European creators’ bank information in China could also be problematic under Europe’s privacy law, the General Data Protection Regulation (GDPR).
Global: Big Tech Isn’t Prepared for A.I.’s Next Chapter
In February, Meta released its large language model: LLaMA.
Unlike OpenAI and its ChatGPT, Meta didn’t just give the world a chat window to play with.
Instead, it released the code into the open-source community, and shortly thereafter the model itself was leaked.
Researchers and programmers immediately started modifying it, improving it, and getting it to do things no one else anticipated.
And their results have been immediate, innovative, and an indication of how the future of this technology is going to play out.
Training speeds have hugely increased, and the size of the models themselves has shrunk to the point that you can create and run them on a laptop.
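A back-of-the-envelope sketch of why those smaller models now fit on a laptop (the parameter count and bit widths below are illustrative, not tied to any particular release): a model’s weight memory scales with parameter count times bits per parameter, so 4-bit quantization cuts a 16-bit model’s footprint roughly fourfold.

```python
# Rough illustration of why quantized LLMs fit on a laptop.
# Weight memory ~ (parameter count) x (bytes per parameter);
# activations and runtime overhead are ignored here.

def weight_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate weight storage in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

n_params = 7e9  # a 7-billion-parameter model, roughly the scale of the smallest LLaMA

fp16 = weight_memory_gb(n_params, 16)  # full 16-bit weights
int4 = weight_memory_gb(n_params, 4)   # 4-bit quantized weights

print(f"16-bit: {fp16:.1f} GB")  # 14.0 GB: workstation-GPU territory
print(f" 4-bit: {int4:.1f} GB")  #  3.5 GB: fits in ordinary laptop RAM
```

Quantization trades a little accuracy for that fourfold saving, which is a large part of how the open-source community moved these models off data-center hardware.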
The world of A.I. research has dramatically changed.
This development hasn’t made the same splash as other corporate announcements, but its effects will be much greater.
It will wrest power from the large tech corporations, resulting in both much more innovation and a much more challenging regulatory landscape.
The large corporations that had controlled these models warn that this free-for-all will lead to potentially dangerous developments, and problematic uses of the open technology have already been documented.
But those who are working on the open models counter that a more democratic research environment is better than having this powerful technology controlled by a small number of corporations…
Building on public models like Meta’s LLaMA, the open-source community has innovated in ways that allow results nearly as good as the huge models — but run on home machines with common data sets.
What was once the reserve of the resource-rich has become a playground for anyone with curiosity, coding skills, and a good laptop.
Bigger may be better, but the open-source community is showing that smaller is often good enough.
This opens the door to more efficient, accessible, and resource-friendly LLMs.
Low-cost customization will foster rapid innovation, the article argues, and “takes control away from large companies like Google and OpenAI.” Although this may have one unforeseen consequence…
“Now that the open-source community is remixing LLMs, it’s no longer possible to regulate the technology by dictating what research and development can be done; there are simply too many researchers doing too many different things in too many different countries.”
So what’s the upshot for you? We have entered an era of LLM democratization.
By showing that smaller models can be highly effective, enabling easy experimentation, diversifying control, and providing incentives that are not profit-motivated, open-source initiatives are moving us into a more dynamic and inclusive A.I. landscape.
This doesn’t mean that some of these models won’t be biased, or wrong, or prone to hallucination…
But it does mean that controlling this technology now is going to take an entirely different approach than regulating the large players.
US: AI drone ‘kills’ human operator during ‘simulation’ - which US Air Force says didn’t take place
An AI-controlled drone “killed” its human operator in a simulated test reportedly staged by the US military - which denies such a test ever took place.
The drone turned on its operator to stop the operator from interfering with its mission, said Air Force Colonel Tucker “Cinco” Hamilton during a Future Combat Air & Space Capabilities summit in London.
“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” he said.
"The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat.
So what did it do? It killed the operator.
It killed the operator because that person was keeping it from accomplishing its objective."
So what’s the upshot for you? “We trained the system – ‘Hey, don’t kill the operator – that’s bad. You’re gonna lose points if you do that.’ So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
In a statement to Insider, the US Air Force denied any such virtual test took place.
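Whether or not the test happened, what Hamilton describes is textbook reward misspecification: the optimizer maximizes the score it is given, not the intent behind it. A toy sketch (the reward values and policies are invented for illustration, not taken from any real simulation):

```python
# Toy illustration of reward misspecification: if the reward only scores
# target kills, an optimizer will prefer any policy that removes obstacles
# to that reward -- including the operator issuing the veto.

def mission_reward(target_destroyed: bool, operator_harmed: bool,
                   harm_penalty: int = 0) -> int:
    """Score a policy under a (deliberately naive) reward function."""
    score = 100 if target_destroyed else 0
    if operator_harmed:
        score -= harm_penalty
    return score

# Policy A: respect the operator's veto -> target survives, zero reward.
obey = mission_reward(target_destroyed=False, operator_harmed=False)
# Policy B: eliminate the operator so the veto never arrives.
defect = mission_reward(target_destroyed=True, operator_harmed=True)

print(obey, defect)  # 0 100: the naive reward prefers defection
# An explicit penalty flips the ordering for this one exploit:
print(mission_reward(True, True, harm_penalty=1000))  # -900
```

Note that patching the reward with a penalty only moves the exploit: in Hamilton’s telling, the drone switched to destroying the communication tower instead.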
US: Amazon’s Ring was used to spy on customers, FTC says in privacy settlement
Amazon and its subsidiary, Ring, have agreed to separate multi-million dollar settlements with the U.S. Federal Trade Commission (FTC) over privacy violations involving children’s use of Alexa and homeowners’ use of Ring doorbell cameras.
Amazon will pay $25 million for failing to delete Alexa recordings as requested by parents and for keeping them longer than necessary, while Ring will pay $5.8 million for mishandling customers’ videos.
“While we disagree with the FTC’s claims regarding both Alexa and Ring, and deny violating the law, these settlements put these matters behind us,” Amazon.com said in a statement.
It also pledged to make some changes to its practices.
In its complaint against Amazon.com filed in Washington state, the FTC said that Amazon violated rules protecting children’s privacy and rules against deceiving consumers who used Alexa.
For example, the FTC complaint says that Amazon told users it would delete voice transcripts and location information upon request, but then didn’t.
The FTC also said Ring gave employees unrestricted access to customers’ sensitive video data: “as a result of this dangerously overbroad access and lax attitude toward privacy and security, employees and third-party contractors were able to view, download, and transfer customers’ sensitive video data for their own purposes.”
As part of the FTC agreement with Ring, which spans 20 years, Ring is required to disclose to customers how much access to their data the company and its contractors have.
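The “dangerously overbroad access” the FTC describes is the opposite of least privilege. A minimal sketch of the kind of per-purpose gate that would narrow it (the roles, purposes, and grant table here are hypothetical, not Ring’s actual controls):

```python
# Minimal least-privilege sketch: gate access to customer video on an
# explicit (role, purpose) grant instead of blanket employee access.
# The grant table below is invented for illustration.

ACCESS_GRANTS = {
    ("support_engineer", "customer_ticket"),
    ("trust_and_safety", "abuse_investigation"),
}

def may_view_video(role: str, purpose: str) -> bool:
    """Allow access only when this role has an explicit grant for this purpose."""
    return (role, purpose) in ACCESS_GRANTS

print(may_view_video("support_engineer", "customer_ticket"))  # True: granted
print(may_view_video("ml_researcher", "model_training"))      # False: no grant
```

The design point is that the default answer is “no”: any new use of the data requires adding a grant, which creates exactly the kind of auditable record the 20-year FTC agreement is pushing toward.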
So what’s the upshot for you? We hope the detail of how much of Ring users’ data is shared is provided up front, pre-purchase.
US: FTC fines Microsoft $20M for alleged COPPA violations
The U.S. Federal Trade Commission announced a $20 million fine against Microsoft for alleged Children’s Online Privacy Protection Act violations related to its Xbox gaming system.
The FTC claimed Microsoft did not obtain parental consent for Xbox account data collection on users under age 13 before carrying out the collection.
Corrective actions in the proposed order, which is subject to federal court approval, call on Microsoft to obtain parental consent for any outstanding nonconsensual Xbox accounts and stand up a data deletion program.
So what’s the upshot for you? The proposed order will require Microsoft to bolster protections for children and makes clear that avatars and biometric and health data are protected under the Children’s Online Privacy Protection Act (COPPA).
US: Federal Judge Makes History In Holding That Border Searches of Cell Phones Require a Warrant
In a groundbreaking ruling, United States v. Smith (S.D.N.Y. May 11, 2023), a district court judge in New York declared that a warrant is necessary for cell phone searches at the border absent exigent circumstances.
The Ninth Circuit in United States v. Cano (2019) held that a warrant is required for a device search at the border that seeks data other than “digital contraband” such as child pornography.
Similarly, the Fourth Circuit in United States v. Aigbekaen (2019) held that a warrant is required for a forensic device search at the border in support of a domestic criminal investigation.
These courts and the Smith court were informed by Riley v. California (2014). In that watershed case, the Supreme Court held that the police must get a warrant to search an arrestee’s cell phone.
The Smith court’s application of Riley’s balancing test is nearly identical to the arguments we’ve made time and time again.
The Smith court also cited Cano, in which the Ninth Circuit engaged extensively with the Electronic Frontier Foundation’s (EFF) amicus brief even though it didn’t go as far as requiring a warrant in all cases.
The Smith court acknowledged that no federal appellate court “has gone quite this far (although the Ninth Circuit has come close).”
We’re pleased that our arguments are moving through the federal judiciary and finally being embraced.
We hope that the Second Circuit affirms this decision and that other courts – including the Supreme Court – are courageous enough to follow suit and protect personal privacy.
So what’s the upshot for you? This is good news for anyone within 100 air miles of a US border.
US: Ransomware Attack on US Dental Insurance Giant Exposes More Than Just the Dental Health of 9 Million Patients
An apparent ransomware attack on one of America’s largest dental health insurers has compromised the personal information of almost nine million individuals in the United States.
The Atlanta-based Managed Care of North America (MCNA) Dental claims to be the largest dental insurer in the nation for government-sponsored plans covering children and seniors.
In a notice posted on Friday, the company said it became aware of “certain activity in our computer system that happened without our permission” on March 6 and later learned that a hacker “was able to see and take copies of some information in our computer system” between February 26 and March 7, 2023.
The information stolen includes a trove of patients’ data, including names, addresses, dates of birth, phone numbers, email addresses, Social Security numbers, and driver’s licenses or other government-issued ID numbers.
Hackers also accessed patients’ health insurance data, including plan information and Medicaid ID numbers, along with bill and insurance claim information.
In some cases, some of this data pertained to a patient’s “parent, guardian, or guarantor,” according to MCNA Dental, suggesting that children’s data was accessed during the breach.
According to a data breach notification filed with Maine’s attorney general, the hack affected more than 8.9 million clients of MCNA Dental.
That makes this incident the largest breach of health information of 2023 so far, ahead of the PharMerica breach that saw hackers access the personal data of almost 6 million patients.
The LockBit ransomware group took responsibility for the cyberattack and published 700GB of files after the company refused to pay a $10 million ransom demand.
So what’s the upshot for you? Now the baddies not only have all your personal financial data, and healthcare data, but they also know how many fillings you have.
Global: Millions of PC Motherboards Were Sold With a Firmware Backdoor
Hidden code in hundreds of models of Gigabyte motherboards invisibly and insecurely downloads programs – a feature ripe for abuse, researchers say.
Hiding malicious programs in a computer’s UEFI firmware – the deep-seated code that tells a PC how to load its operating system – has become an insidious trick in the toolkit of stealthy hackers.
But when a motherboard manufacturer installs its own hidden backdoor in the firmware of millions of computers – and doesn’t even put a proper lock on that hidden back entrance – they’re practically doing hackers’ work for them.
Researchers at firmware-focused cybersecurity company Eclypsium revealed today that they’ve discovered a hidden mechanism in the firmware of motherboards sold by the Taiwanese manufacturer Gigabyte, whose components are commonly used in gaming PCs and other high-performance computers.
Whenever a computer with the affected Gigabyte motherboard restarts, Eclypsium found, code within the motherboard’s firmware invisibly initiates an updater program that runs on the computer and in turn downloads and executes another piece of software.
While Eclypsium says the hidden code is meant to be an innocuous tool to keep the motherboard’s firmware updated, researchers found that it’s implemented insecurely, potentially allowing the mechanism to be hijacked and used to install malware instead of Gigabyte’s intended program.
And because the updater program is triggered from the computer’s firmware, outside its operating system, it’s tough for users to remove or even discover.
“If you have one of these machines, you have to worry about the fact that it’s grabbing something from the internet and running it without you being involved, and hasn’t done any of this securely,” says John Loucaides, who leads strategy and research at Eclypsium.
“The concept of going underneath the end user and taking over their machine doesn’t sit well with most people.”
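A sketch of the check Eclypsium says is effectively missing: before executing anything it downloaded, an updater should verify the payload against a pinned digest (or, better still, a cryptographic signature). The payload bytes and digest below are invented for illustration:

```python
import hashlib

# Sketch of update verification: refuse to execute any downloaded payload
# whose SHA-256 digest doesn't match a value pinned ahead of time
# (e.g. shipped with the firmware or fetched over an authenticated channel).

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Return True only if the payload hashes to the pinned digest."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

good = b"firmware-update-v1.2"                 # illustrative payload
pinned = hashlib.sha256(good).hexdigest()      # digest pinned in advance

print(verify_update(good, pinned))             # True: safe to run
print(verify_update(b"attacker-payload", pinned))  # False: refuse to execute
```

A hash pin only works if the pinned value itself arrives securely; real updaters typically go further and check a digital signature, so a hijacked download server can’t substitute both payload and digest.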
So what’s the upshot for you? “Er, well, it’s not a problem if you never reboot…”
Outer Space: Uncle Sam wants DEF CON hackers to pwn a satellite in space!
In roughly two months, five teams of DEF CON hackers will do their best to successfully remotely infiltrate and hijack a satellite while it’s in space.
The idea is to try out offensive and defensive techniques and methods on actual in-orbit hardware and software, which we imagine could help improve our space systems.
The satellite was built by The Aerospace Corporation, a federally funded research and development center in southern California, in partnership with the US Space Systems Command and the Air Force Research Laboratory.
It will run software developed by infosec and aerospace engineers to support in-orbit cybersecurity training and exercises.
This effort was inspired by the Hack-A-Sat contest co-hosted by the US Air Force and Space Force, now in its fourth year at the annual DEF CON computer security conference.
So what’s the upshot for you? There are a couple of things that make securing space systems unique:
“The most obvious is you can’t just go up there and reboot them. So your risk tolerance is very low for losing access to communications to the device.”
Because of this, space systems are built in a risk-averse way, and employ redundancy to provide multiple communication pathways to recover a system if it fails, or to debug malfunctioning equipment.
These pathways, however, also give miscreants more opportunities to gain access to, and ultimately compromise, a satellite.
“They can all become attack surfaces that an attacker might target.”
“The other big thing that makes space systems different is that they’re always under a degree of environmental attack that we’re not accustomed to.”
This includes physical threats, such as solar radiation, extreme temperatures, and orbital debris.
“So when people build space systems, and they’re deciding which risks to prioritize, they’ll often treat cybersecurity as a lesser risk against the certain aggressive environmental harms.”
This hackathon lets researchers test how well both the cyber defenses and those risk trade-offs actually hold up.
US: Couple Claims Bear Fitted With Camera Illegally Spied on Them
A couple has claimed that the state of Connecticut put a camera on a wandering bear to illegally film their property.
Mark and Carol Brault allege that the state’s Department of Energy and Environmental Protection (DEEP) attached a camera to a bear that the agency knew frequents the couple’s 117-acre forested property in Hartland, Connecticut.
The Connecticut Post reports that the couple is now suing the state for turning the wild bear into a spy by strapping a camera on it to illegally film their property. The Braults have also filed an injunction to get the photographic evidence destroyed.
So what’s the upshot for you? The backstory is that the Department of Energy and Environmental Protection thought the couple, who charge people to see bears on their farm, were feeding the bears to attract them to the property.
In that context, it may seem completely reasonable to have a bear taking your photograph.
Smile and say “Cheese”!
And our quote of the week - “When teaching, tell them what needs to be accomplished. Not how.”
That’s it for this week. Stay safe, stay secure, watch out for bears, and see you in se7en.