The IT Privacy and Security Weekly “Roadtrip” for the week ending April 11th, 2023

Daml’ers,

This update starts in the middle of Interstate 84 and ends with a possible answer as to what’s happening with the ghost orders for bottled water.
click the road sign to hear this as a podcast

We hit the road with some car updates, from what might soon be missing from our cars to… wait, where is the car?

Then we move on to phones, where the Federal Bureau of Investigation has a tip for you the next time you consider topping up at the airport, and Google reveals its plans to make your experience with applications a little bit cleaner going forward.

There’s even a battle brewing at a large US university, where students returning to classes found their building wired for far more than just higher learning.

Road trips are always an adventure, and this update is no exception, so get comfy, buckle up, and let’s put the pedal to the metal.


US: Found yourself overwhelmed during your last road trip? You are not alone.

A bald eagle found itself without privacy or security in the middle of some serious traffic last Saturday afternoon.

The downed bird of prey had landed in the center median of Interstate 84 in the US state of Connecticut.

The responding troopers slowed traffic down and made their way to the median to try to coax the bird to the right shoulder of the interstate.

When the eagle wouldn’t budge, they called in Troop C Dispatcher Gambacorta, who also serves as a local animal control officer, to help rescue the eagle.

So what’s the upshot for you? Apparently, once the police had escorted it to a more private, secure location, the eagle took wing and resumed its travels. Now, we are not suggesting this as a strategy for you, but we think the bird might be on to something.


CN: China Plans To Ban Exports of Rare Earth Magnet Tech

China is considering banning the export of technologies used to produce high-performance rare earth magnets deployed in electric vehicles, wind turbine motors, and other products, citing “national security” as a reason.

With the global trend toward decarbonization driving a shift toward the use of electric motors, China is believed to be seeking to seize control of the magnet supply chain and establish dominance in the burgeoning environment sector.

Beijing is currently in the process of revising its Catalogue of Technologies Prohibited and Restricted from Export – a list of manufacturing and other industrial technologies subject to export controls – and released a draft of the revised catalog for public comment in December.

In the draft, manufacturing technologies for high-performance rare earth magnets, such as neodymium and samarium-cobalt magnets, were added to the export ban.

The public comment period closed in late January, and the revisions are expected to be adopted as early as this year.

So what’s the upshot for you? This could impact everything from recharging and road trips to many of the alternate energy sources being planned around the world.


US/CN: How Much Data Did the Chinese Spy Balloon Collect?

The Chinese spy balloon that flew across the U.S. was able to gather intelligence from several sensitive American military sites, despite the Biden administration’s efforts to block it from doing so, according to two current senior U.S. officials and one former senior administration official.

China was able to control the balloon so it could make multiple passes over some of the sites (at times flying figure-eight formations) and transmit the information it collected back to Beijing in real-time, the three officials said.

The intelligence China collected was mostly from electronic signals, which can be picked up from weapons systems or include communications from base personnel, rather than images, the officials said.

The three officials said China could have gathered much more intelligence from sensitive sites if not for the administration’s efforts to move around potential targets and obscure the balloon’s ability to pick up their electronic signals by stopping them from broadcasting or emitting signals.

America’s Department of Defense “directed NBC News to comments senior officials made in February that the balloon had ‘limited additive value’ for intelligence collection by the Chinese government ‘over and above what [China] is likely able to collect through things like satellites in low earth orbit.’”

So what’s the upshot for you? First came official statements that nothing new was being collected; then it emerged that the US military had been busy shuffling potential targets around in the background while the Chinese read the signals coming off army bases and missile silos.
If reading US military emissions is as easy as hacking a garage door opener, it might be time for the US to consider being a bit more careful.


Global: Now thieves are stealing cars by pulling off smart headlights to get at the Controller Area Network (CAN) bus controller.

A Controller Area Network (CAN) bus is present in nearly all modern cars, and is used by microcontrollers and other devices to talk to each other within the vehicle and carry out the work they are supposed to do.

In a CAN injection attack, thieves access the network and introduce bogus messages as if they were from the car’s smart key receiver.

These messages effectively cause the security system to unlock the vehicle and disable the engine immobilizer, allowing it to be stolen.

To gain this network access, the crooks can, for instance, break open a headlamp and use its connection to the bus to send messages.

From that point, they can simply manipulate other devices to steal the vehicle.

“In most cars on the road today, these internal messages aren’t protected: the receivers simply trust them,” [Ken Tindell, CTO of Canis Automotive Labs] detailed in a technical write-up this week.
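
To make that point concrete, here is a minimal sketch using the python-can library (version 4.x). The arbitration ID and payload bytes are invented placeholders, since real IDs vary by manufacturer and are not given in the write-up; the point is simply that a classic CAN frame carries no proof of origin, so any node wired onto the bus can claim to be the smart key receiver.

```python
# A minimal sketch of why unauthenticated CAN frames are spoofable.
# Requires python-can 4.x (pip install python-can). The arbitration ID
# and payload are hypothetical placeholders, not real Toyota values.
import can

# A thief gets bus access physically, e.g. via the headlamp wiring.
# Here we use python-can's virtual bus so the sketch runs anywhere.
bus = can.Bus(interface="virtual", channel="demo", receive_own_messages=True)

# A classic CAN frame is just an arbitration ID plus up to 8 data bytes.
# Nothing in the frame identifies or authenticates the sender.
spoofed_key_frame = can.Message(
    arbitration_id=0x0AA,             # hypothetical "smart key" message ID
    data=[0x01, 0x00, 0x00, 0x00],    # hypothetical "valid key present" payload
    is_extended_id=False,
)
bus.send(spoofed_key_frame)

# Receivers select frames by ID alone, so a door-lock or immobilizer ECU
# has no way to tell this frame from one sent by the real key receiver.
print(bus.recv(timeout=1.0))
bus.shutdown()
```

Cryptographic message authentication would close this hole, but as the quote above notes, most cars on the road today simply trust the bus.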

The discovery followed an investigation by Ian Tabor, a cybersecurity researcher and automotive engineering consultant working for EDAG Engineering Group.

It was driven by the theft of Tabor’s RAV4.

Leading up to the crime, Tabor noticed the front bumper and arch rim had been pulled off by someone, and the headlight wiring plug removed.

The surrounding area was scuffed with screwdriver markings, which, together with the fact the damage was on the curbside, seemed to rule out damage caused by a passing vehicle.

More vandalism was later done to the car: gashes in the paintwork, molding clips removed, and malfunctioning headlamps.

A few days later, the Toyota was stolen.

Refusing to take the pilfering lying down, Tabor used his experience to try to figure out how the thieves had done the job. The MyT app from Toyota – which among other things allows you to inspect the data logs of your vehicle – helped out.

It provided evidence that Electronic Control Units (ECUs) in the RAV4 had detected malfunctions, logged as Diagnostic Trouble Codes (DTCs), before the theft.

According to Tindell, “Ian’s car dropped a lot of DTCs.” Various systems had seemingly failed or suffered faults, including the front cameras and the hybrid engine control system. With some further analysis, it became clear the ECUs probably hadn’t failed, but communication between them had been lost or disrupted. The common factor was the CAN bus.

So what’s the upshot for you? This is an important lesson as to why the security team should be at the table in the design phases.

As our cars turn into computers, they are going to have to be secured, updated, and, heaven forbid… rebooted!


US: Inside the Bitter Campus Privacy Battle Over Smart Building Sensors

When computer science students and faculty at Carnegie Mellon University’s Institute for Software Research returned to campus in the summer of 2020, there was a lot to adjust to.

Beyond the inevitable strangeness of being around colleagues again after months of social distancing, the department was also moving into a brand-new building: the 90,000-square-foot, state-of-the-art TCS Hall.

The hall’s futuristic features included carbon dioxide sensors that automatically pipe in fresh air, a rain garden, a yard for robots and drones, and experimental super-sensing devices called Mites.

Mounted in more than 300 locations throughout the building, these light-switch-size devices can measure 12 types of data – including motion and sound.

Mites were embedded on the walls and ceilings of hallways, in conference rooms, and in private offices, all as part of a research project on smart buildings led by CMU professor Yuvraj Agarwal.

“The overall goal of this project,” Agarwal explained at an April 2021 town hall meeting for students and faculty, is to “build a safe, secure, and easy-to-use IoT [Internet of Things] infrastructure,” referring to a network of sensor-equipped physical objects like smart light bulbs, thermostats, and TVs that can connect to the internet and share information wirelessly.

Not everyone was pleased to find the building full of Mites.

Some in the department felt that the project violated their privacy rather than protected it.

In particular, students and faculty whose research focused more on the social impacts of technology felt that the device’s microphone, infrared sensor, thermometer, and six other sensors, which together could at least sense when a space was occupied, would subject them to experimental surveillance without their consent.

“It’s not okay to install these by default,” says David Widder, a final-year PhD candidate in software engineering, who became one of the department’s most vocal voices against Mites.

“I don’t want to live in a world where one’s employer installing networked sensors in your office without asking you first is a model for other organizations to follow.”

All technology users face similar questions about how and where to draw a personal line when it comes to privacy.

But outside of our own homes (and sometimes within them), we increasingly lack autonomy over these decisions.

Instead, our privacy is determined by the choices of the people around us. Walking into a friend’s house, a retail store, or just down a public street leaves us open to many different types of surveillance over which we have little control.

Against a backdrop of skyrocketing workplace surveillance, prolific data collection, increasing cybersecurity risks, rising concerns about privacy and smart technologies, and fraught power dynamics around free speech in academic institutions, Mites became a lightning rod within the Institute for Software Research.

Voices on both sides of the issue were aware that the Mites project could have an impact far beyond TCS Hall.

After all, Carnegie Mellon is a top-tier research university in science, technology, and engineering, and how it handles this research may influence how sensors will be deployed elsewhere.

“When we do something, companies [and] other universities listen,” says Widder.

Indeed, the Mites researchers hoped that the process they’d gone through “could actually be a blueprint for smaller universities” looking to do similar research, says Agarwal, an associate professor in computer science who has been developing and testing machine learning for IoT devices for a decade.

But the crucial question is what happens if – or when – the super-sensors graduate from Carnegie Mellon, are commercialized, and make their way into smart buildings the world over.

The conflict is, in essence, an attempt by one of the world’s top computer science departments to litigate thorny questions around privacy, anonymity, and consent.

But it has deteriorated from an academic discussion into a bitter dispute, complete with accusations of bullying, vandalism, misinformation, and workplace retaliation.

So what’s the upshot for you? As in so many conversations about privacy, the two sides have been talking past each other, with seemingly incompatible conceptions of what privacy means and when consent should be required.

Ultimately, if the people whose research sets the agenda for technology choices are unable to come to a consensus on privacy, where does that leave the rest of us?


Global: Open Garage Doors Anywhere In the World By Exploiting This ‘Smart’ Device

A market-leading garage door controller, made by Nexx, is so riddled with severe security and privacy vulnerabilities that the researcher who discovered them, Sam Sabetan, is advising anyone using one to immediately disconnect it until they are fixed.

Each $80 device, used to open and close garage doors and control home security alarms and smart power plugs, employs the same easy-to-find universal password to communicate with Nexx servers.

The controllers also broadcast the unencrypted email address, device ID, first name, and last initial corresponding to each one, along with the message required to open or shut a door or turn on or off a smart plug or schedule such a command for a later time.

The result: Anyone with a moderate technical background can search Nexx servers for a given email address, device ID, or name and then issue commands to the associated controller. (Nexx controllers for home security alarms are susceptible to a similar class of vulnerabilities.)

Commands allow a door to be opened, a device connected to a smart plug to be turned off, or an alarm to be disarmed.
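
The excerpt above doesn’t name the underlying protocol, so treat the following as a hedged illustration: assume the controllers and servers talk over an MQTT-style message broker protected by one credential shared across every unit (a common pattern in low-cost IoT gear, and consistent with the universal password described above). The broker host, topic layout, and password below are hypothetical placeholders; the sketch uses the paho-mqtt library (1.x-style constructor).

```python
# Sketch: why a single password shared by every unit is fatal.
# Uses paho-mqtt (pip install paho-mqtt; 1.x-style constructor).
# Broker host, credentials, and topic layout are hypothetical.
import paho.mqtt.client as mqtt

UNIVERSAL_PASSWORD = "same-password-in-every-unit"  # hypothetical

def on_message(client, userdata, msg):
    # With one shared credential, any subscriber sees every device's
    # traffic: email addresses, device IDs, and door commands alike.
    print(msg.topic, msg.payload)

client = mqtt.Client()
client.username_pw_set("device", UNIVERSAL_PASSWORD)
client.on_message = on_message
client.connect("broker.example.com", 1883)  # hypothetical broker host

# A wildcard subscription covers all devices, not just your own, because
# the broker cannot tell one "device" login from another.
client.subscribe("devices/#")
client.loop_forever()
```

Because the broker cannot distinguish one “device” from another, confidentiality and authorization collapse together: the same shared secret that lets your opener publish a door command lets a stranger read everyone’s traffic and publish commands of their own.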

Worse still, over the past three months, personnel for Texas-based Nexx haven’t responded to multiple private messages warning of the vulnerabilities.

“Nexx has consistently ignored communication attempts from myself, the Department of Homeland Security, and the media,” Sabetan wrote in a post published on Tuesday.

“Device owners should immediately unplug all Nexx devices and create support tickets with the company requesting them to remediate the issue.”

Sabetan estimates that more than 40,000 devices, located in residential and commercial properties, are impacted, and more than 20,000 individuals have active Nexx accounts.

So what’s the upshot for you? There will be a lot of people heaving their garage doors open by hand over the next few weeks.

We are betting that chiropractors do well out of this one.


IT: OpenAI to offer remedies to resolve Italy’s ChatGPT ban

The company behind ChatGPT will propose measures to resolve data privacy concerns that sparked a temporary Italian ban on the artificial intelligence chatbot, Italy’s data protection watchdog said Thursday.

In a video call last Wednesday between the watchdog’s commissioners and OpenAI executives including CEO Sam Altman, the company promised to set out measures to address the concerns.

The Italian watchdog said it didn’t want to hamper AI’s development but stressed to OpenAI the importance of complying with the 27-nation EU’s stringent privacy rules.

The watchdog also questioned whether there’s a legal basis for OpenAI to collect the massive amounts of data used to train ChatGPT’s algorithms and raised concerns that the system could sometimes generate false information about individuals.

Other regulators in Europe and elsewhere have started paying more attention after Italy’s action.

Ireland’s Data Protection Commission said it’s “following up with the Italian regulator to understand the basis for their action and we will coordinate with all EU Data Protection Authorities in relation to this matter.”

France’s data privacy regulator, CNIL, said it’s investigating after receiving two complaints about ChatGPT.

Canada’s privacy commissioner also has opened an investigation into OpenAI after receiving a complaint about the suspected “collection, use, and disclosure of personal information without consent.”

In a blog post this week, the U.K. Information Commissioner’s Office warned that “organizations developing or using generative AI should be considering their data protection obligations from the outset” and design systems with data protection as a default.

“This isn’t optional – if you’re processing personal data, it’s the law,” the office said.

So what’s the upshot for you? In an apparent response to the concerns, OpenAI published a blog post Wednesday outlining its approach to AI safety.

The company said it works to remove personal information from training data where feasible, fine-tunes its models to reject requests for the personal information of private individuals, and acts on requests to delete personal information from its systems.


Global: Google Gives Users More Control Over Account Data

Google wants to make it as easy to scrub an app account as it is to create one.

The company has announced that Android apps on the Play Store will soon have to let you delete an account and its data both inside the app and on the web.

The move is meant to “better educate” users on the control they have over their data and to foster trust in both apps and the Play Store at large.

You can delete certain data (such as your uploaded content) without having to completely erase your account, Google says.

The web requirement also ensures that you won’t have to reinstall an app just to purge your info.

The policy is taking effect in stages.

Developers have until December 7th to answer questions about data deletion in their app’s safety form.

Store listings will start showing the changes in early 2024. Developers can file for an extension until May 31st of next year.

Developers will also have to wipe data for an account when users ask to delete the account entirely.

So what’s the upshot for you? This whole initiative probably raises more questions than it answers. Deleting data? That can only really mean the data on your phone, because everything that has already left your device for the app owner’s servers is out of your control.


US: FBI Warns Against Using Public Phone Charging Stations

The FBI recently warned consumers against using free public charging stations, saying crooks have managed to hijack public chargers and use them to infect devices with malware: software that can give hackers access to your phone, tablet, or computer.

“Avoid using free charging stations in airports, hotels or shopping centers,” a tweet from the FBI’s Denver field office said.

"Bad actors have figured out ways to use public USB ports to introduce malware and monitoring software onto devices.

Carry your own charger and USB cord and use an electrical outlet instead."

So what’s the upshot for you? You can also use a rechargeable power block and refill your phone from that!


US: New Ultrasound Attack Can Secretly Hijack Phones and Smart Speakers

Academics in the US have developed an attack dubbed NUIT, for Near-Ultrasound Inaudible Trojan, that exploits vulnerabilities in smart device microphones and voice assistants to silently and remotely access smart phones and home devices.

The attacks work by modulating voice commands into near-ultrasound inaudible signals so that humans can’t hear them but the voice assistant will still respond to them.

These signals are then embedded into a carrier, such as an app or YouTube video.

When a vulnerable device picks up the carrier, it ends up obeying the hidden embedded commands.
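
NUIT’s exact pipeline isn’t described in this excerpt. One well-documented way inaudible-command attacks work in the literature is amplitude modulation onto a high-frequency carrier, which a microphone’s nonlinearity demodulates back into the audible band. The sketch below illustrates that general idea only; the frequencies, modulation depth, and filename are all hypothetical.

```python
# Conceptual sketch of amplitude-modulating audio onto a near-ultrasound
# carrier. All frequencies, the modulation depth, and the filename are
# hypothetical; this shows the general technique, not NUIT's pipeline.
import numpy as np
from scipy.io import wavfile

SAMPLE_RATE = 96_000   # high rate needed to represent ~20 kHz content
CARRIER_HZ = 20_000    # near-ultrasound: above most adults' hearing

# Synthetic stand-in for a recorded voice command.
t = np.arange(SAMPLE_RATE * 2) / SAMPLE_RATE       # two seconds
command = 0.5 * np.sin(2 * np.pi * 300 * t)        # placeholder "voice"

# Classic AM: (1 + m * baseband) * carrier. A microphone's nonlinearity
# can demodulate this back into the audible band, where the assistant's
# speech pipeline then hears a command no human heard.
carrier = np.sin(2 * np.pi * CARRIER_HZ * t)
modulated = (1 + 0.8 * command) * carrier

# Normalize and write out a 16-bit WAV file.
pcm = (modulated / np.max(np.abs(modulated)) * 32767).astype(np.int16)
wavfile.write("inaudible_command.wav", SAMPLE_RATE, pcm)
```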

Attackers can use social engineering to trick the victim into playing the sound clip, explained Qi Xia, one of the researchers.

“And once the victim plays this clip, voluntarily or involuntarily, the attacker can manipulate your Siri to do something, for example, open your door.”

So what’s the upshot for you? Why is it that Siri plays deaf when you need directions or an answer to some burning question, but is right on the ball when responding to this? Something seems wrong here.


IL: Sounds emitted by plants under stress are airborne and informative

Now it turns out that if you have a plant sitting on the windowsill, you might not have as much privacy as you first thought. Not only might it be listening to you, but it could be talking to you too.

About a decade ago, Lilach Hadany was outside listening to animals chirping, growling, and buzzing when she had a thought: Why don’t plants make noise, too?

Hadany, a biology professor at Tel Aviv University, did some research but didn’t get a satisfactory answer. A few years later, she launched another study to determine whether plants make noise.

After six years of research, Hadany and her colleagues discovered that plants — when distressed — make ultrasonic noises, which are inaudible to humans. According to a study released last week, plants emit popping sounds when they’re cut or when they become dehydrated or infected — noises that researchers say might be their version of a call for help.

“We always thought that plants are silent,” researcher Yossi Yovel said. “And now we realize that they actually make those sounds quite often, and they are meaningful to some extent.”

The researchers placed plants in soundproof boxes in a quiet room and set two ultrasonic microphones nearby.

They studied tomato, tobacco, cactus, corn, wheat, and other plants in varying conditions — some had cut stems, some had not been watered for days and others were untouched.

The result: The microphones picked up sounds at frequencies between 40 and 80 kilohertz — far above what the human ear can detect.

The noises sound similar to popcorn kernels popping, the researchers found.

Distressed plants generated dozens of sounds every hour and sometimes roughly one every minute, the team found.

Undamaged plants made fewer than one sound per hour.

Using artificial intelligence, researchers said, they can identify the type of plant and its condition based on the volume, frequency, and tempo of its sounds.
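
The article doesn’t describe the researchers’ model, so here is a purely hypothetical sketch of the idea it reports: classifying a plant’s condition from volume, frequency, and tempo features. It uses scikit-learn with synthetic data loosely shaped by the click counts reported above.

```python
# Hypothetical sketch: classifying plant condition from three acoustic
# features (volume, peak frequency, clicks per hour). The data is
# synthetic, loosely shaped by the counts reported above; the study's
# real features and model are not described in the article.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def synth(n, volume_db, peak_khz, clicks_per_hour, label):
    # Each row: [volume (dB), peak frequency (kHz), clicks per hour]
    X = np.column_stack([
        rng.normal(volume_db, 2.0, n),
        rng.normal(peak_khz, 5.0, n),
        np.clip(rng.normal(clicks_per_hour, 3.0, n), 0, None),
    ])
    return X, np.full(n, label)

# Label 1: dehydrated plants (louder, faster clicking); label 0: untouched.
X1, y1 = synth(200, volume_db=60, peak_khz=60, clicks_per_hour=35, label=1)
X0, y0 = synth(200, volume_db=40, peak_khz=55, clicks_per_hour=1, label=0)
X, y = np.vstack([X1, X0]), np.concatenate([y1, y0])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```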

Researchers don’t know whether other creatures pay attention to the noises.

Previous studies have found that plants interact with their environments.

In 2019, Hadany and Yovel discovered that flowers produced nectar when they detected bees and other pollinators nearby.

A study in May in the Plant Cell found that plants communicate with electrical signals from their leaves.

So what’s the upshot for you? If you play classical music to your plants, perhaps they’re telling you they’d like something more uptempo. Perhaps this also answers the question of who might have been using Alexa to order bottled water over the long holiday weekend!


And the quote of the week: “The future belongs to the curious. The ones who are not afraid to try it, explore it, poke at it, question it, shake it up, and break it.”



click the road to hear this as a podcast

That’s it for this week. Stay safe, stay secure, drive safely, and see you in se7en.