The IT Privacy and Security Weekly Update from the Passenger Seat on July 12th, 2022



Daml’ers,

We go drifting in the Honda this week with stories ranging from AI writing its own autobiography to why our Anom phone bill was so high.

In between we have lots of car stories: Some of them might have you in tears, so grab a tissue.

We accelerate from beans to Titans and then hit the brakes when we discover another instance of those collecting everything on everyone leaving it all out on the starting line.

We hit a hairpin turn with an up-and-coming U.S. politician who’s proposing to create jobs by wrecking car things.

Then we have a police data raid in India that could put some of us in the pits.

This podcast may not be a Ferrari,
but we’ve still got some very good lines.

Buckle up. Let’s drift.



Global: We Asked GPT-3 AI to Write an Academic Paper about Itself

“On a rainy afternoon earlier this year, I logged in to my OpenAI account and typed a simple instruction for the company’s artificial intelligence algorithm, GPT-3: Write an academic thesis in 500 words about GPT-3 and add scientific references and citations inside the text.

As it started to generate text, I stood in awe.

Here was novel content written in academic language, with well-grounded references cited in the right places and in relation to the right context.

It looked like any other introduction to a fairly good scientific publication.

Given the very vague instruction I provided, I didn’t have any high expectations: I’m a scientist who studies ways to use artificial intelligence to treat mental health concerns, and this wasn’t my first experimentation with AI or GPT-3, a deep-learning algorithm that analyzes a vast stream of information to create text on command.

Yet there I was, staring at the screen in amazement.

The algorithm was writing an academic paper about itself.

My attempts to complete that paper and submit it to a peer-reviewed journal have opened up a series of ethical and legal questions about publishing, as well as philosophical arguments about nonhuman authorship.”
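
For readers curious what that interaction looks like in practice, here is a minimal sketch of the kind of request described above, written against the 2022-era openai Python client. The model name, prompt wording, and parameters are illustrative choices of ours, not the researcher's exact setup.

    import openai  # the pre-1.0 client library that was current in 2022

    openai.api_key = "sk-..."  # placeholder; supply your own API key

    prompt = (
        "Write an academic thesis in 500 words about GPT-3 and add "
        "scientific references and citations inside the text."
    )

    # "text-davinci-002" is an assumed GPT-3 model choice; the article does not say
    # which model or parameters the researcher actually used.
    response = openai.Completion.create(
        model="text-davinci-002",
        prompt=prompt,
        max_tokens=800,      # headroom for roughly 500 words
        temperature=0.7,
    )

    print(response.choices[0].text)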

So what’s the upshot for you? Only time, and peer review, can tell.

Currently, GPT-3’s paper has been assigned an editor at the academic journal to which we submitted it, and it has now been published at the international French-owned pre-print server HAL.

The unusual main author is probably the reason behind the prolonged investigation and assessment. We are eagerly awaiting what the paper’s publication, if it occurs, will mean for academia.


Global: Apple delivers “Lockdown” mode

Apple last week detailed two initiatives to help protect users who may be personally targeted by some of the most sophisticated digital threats, such as those from private companies developing state-sponsored mercenary spyware.

Lockdown Mode (the first major capability of its kind, coming this fall with iOS 16, iPadOS 16, and macOS Ventura) is an extreme, optional protection for the very small number of users who face grave, targeted threats to their digital security.

Apple also shared details about the $10 million cybersecurity grant it announced last November to support civil society organizations that conduct mercenary spyware threat research and advocacy.

“Apple makes the most secure mobile devices on the market. Lockdown Mode is a groundbreaking capability that reflects our unwavering commitment to protecting users from even the rarest, most sophisticated attacks,” said Ivan Krstić, Apple’s head of Security Engineering and Architecture. “While the vast majority of users will never be the victims of highly targeted cyberattacks, we will work tirelessly to protect the small number of users who are.”

So what’s the upshot for you? That’s reassuring.


EU: Europe faces Facebook Wipeout

Europeans risk seeing social media services Facebook and Instagram shut down this summer, as Ireland’s privacy regulator doubled down on its order to stop the firm’s data flows to the United States.

The Irish Data Protection Commission on Thursday informed its counterparts in Europe that it will block Facebook-owner Meta from sending user data from Europe to the U.S. The Irish regulator’s draft decision cracks down on Meta’s last legal resort to transfer large chunks of data to the U.S., after years of fierce court battles between the U.S. tech giant and European privacy activists.

The European Court of Justice in 2020 annulled an EU-U.S. data flows pact called Privacy Shield because of fears over U.S. surveillance practices. In its ruling, it also made it harder to use another legal tool that Meta and many other U.S. firms use to transfer personal data to the U.S., called standard contractual clauses (SCCs). This week’s decision out of Ireland means Facebook is forced to stop relying on SCCs too.

Meta has repeatedly warned that such a decision would shutter many of its services in Europe, including Facebook and Instagram.

“If a new transatlantic data transfer framework is not adopted and we are unable to continue to rely on SCCs or rely upon other alternative means of data transfers from Europe to the United States, we will likely be unable to offer a number of our most significant products and services, including Facebook and Instagram, in Europe,” Meta said in a filing to the U.S. Securities and Exchange Commission in March this year.

The Irish blocking order, if confirmed by the group of European national data protection regulators, is likely to send a chill through the wider business community too, which has been scratching its head about how to continue sending data from Europe to the U.S. following the EU’s top court ruling in 2020.

So what’s the upshot for you? Europe and the US really do need to define data-handling standards that both sides can live with. That will be tough, but the world is getting smaller, and this is a framework that needs to be in place.


US: The Danger of License Plate Readers in Post-Roe America

Automated license plate readers (ALPRs) are employed heavily by police forces across the US, but they’re also used by private actors.

ALPRs are cameras that are mounted on street poles, overpasses, and elsewhere that can identify and capture license plate numbers on passing cars for the purpose of issuing speeding tickets and tolls, locating stolen cars, and more. State and local police maintain databases of captured license plates and frequently use those databases in criminal investigations.

The police have access to not only license plate data collected by their own ALPRs but also data gathered by private companies. Firms like Flock Safety and Motorola Solutions have their own networks of ALPRs that are mounted to the vehicles of private companies and organizations they work with, such as car repossession outfits. Flock, for instance, claims it’s collecting license plate data in roughly 1,500 cities and can capture data from over a billion vehicles every month.

“They have fleets of cars that have ALPRs on them that just suck up data. They sell that to various clients, including repo firms and government agencies. They also sell them to police departments,” says Jay Stanley, a senior policy analyst at the ACLU. “It’s a giant, nationwide mass surveillance system.”

A Flock Safety spokesperson said the company does not provide customer data to third parties. “We will never share or sell customer data to any third parties. While we cannot speak for any other vendors, we have never and will never sell data to repossession companies or third-party organizations, including anti-abortion groups,” the company said.

However, anyone can become a first-party by purchasing the company’s cameras. (Its customers often include neighborhoods and homeowners associations.)

Police departments around the country regularly share ALPR data with each other, and the data is often shared with little oversight.

“It’s a huge problem that people are sharing data without really being deliberate about who they’re sharing it with and why,” says Dave Maass, director of investigations for the Electronic Frontier Foundation (EFF).

Maass notes that police aren’t the only ones who could utilize ALPR data to track people seeking abortion access. Thanks to the passage of Texas Senate Bill 8 (SB 8), he says anti-abortion groups could use license plate data in litigation against whole swaths of people.

That law allows anyone in the US to sue abortion providers, anyone who “aids or abets” someone seeking an abortion after a fetal heartbeat is detected (typically around six weeks), or anyone who intends to help someone receive an illegal abortion in the state.

So what’s the upshot for you? “The more densely situated ALPR scanners are, the more they come to resemble GPS tracking.”


Global: Got a Honda? Bad news.

https://www.thedrive.com/news/i-tried-the-honda-keyfob-hack-on-my-own-car-it-totally-worked

Hackers have uncovered ways to unlock and start nearly all modern Honda-branded vehicles by wirelessly stealing codes from an owner’s key fob.

Dubbed “Rolling Pwn,” the attack allows any individual to eavesdrop on a remote key fob from nearly 100 feet away, capture the codes it transmits, and reuse them later to unlock or start the vehicle without the owner’s knowledge.

Although Honda claims the technology in its key fobs “would not allow the vulnerability,” The Drive has independently confirmed the validity of the attack with its own demonstration.

Older vehicles used static codes for keyless entry. These static codes are inherently vulnerable, as any individual can capture and replay them at will to lock and unlock a vehicle. Manufacturers later introduced rolling codes to improve vehicle security.

Rolling codes work by using a Pseudorandom Number Generator (PRNG). When a lock or unlock button is pressed on a paired key fob, the fob wirelessly sends a unique, newly generated code to the vehicle.

The vehicle then checks the code sent to it against its internal database of valid PRNG-generated codes, and if the code is valid, the car grants the request to lock, unlock, or start the vehicle.

The database contains several allowed codes, as a key fob may not be in the range of a vehicle when a button is pressed and may transmit a different code than what the vehicle is expecting to be next chronologically.

This series of codes is also known as a “window.” When a vehicle receives a newer code, it typically invalidates all previous codes to protect against replay attacks. This attack works by eavesdropping on a paired key fob and capturing several codes sent by the fob.

The attacker can later replay a sequence of valid codes and re-sync the Pseudorandom Number Generator (PRNG). This allows the attacker to re-use older codes that would normally be invalid, even months after the codes have been captured.
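
To make the window mechanic concrete, here is a purely illustrative Python sketch of rolling-code validation. The shared secret, code format, and window size are invented for the example; Honda's actual scheme has not been published. The sketch shows why a single replayed old code is normally rejected, and notes in a comment where the reported resynchronization flaw would come in.

    import hmac
    import hashlib

    SECRET = b"shared-fob-secret"  # invented shared secret, not Honda's real scheme
    WINDOW = 256                   # how far ahead of its counter the car will accept codes

    def code_for(counter: int) -> str:
        # Fob and car both derive the next code from a counter and the shared secret.
        return hmac.new(SECRET, counter.to_bytes(4, "big"), hashlib.sha256).hexdigest()[:8]

    class Car:
        def __init__(self) -> None:
            self.counter = 0  # highest counter value accepted so far

        def try_unlock(self, code: str) -> bool:
            # Accept any code inside the look-ahead window, then roll the counter forward,
            # which is what normally invalidates every earlier code.
            for c in range(self.counter + 1, self.counter + 1 + WINDOW):
                if code_for(c) == code:
                    self.counter = c
                    return True
            return False

    car = Car()
    captured = [code_for(i) for i in range(1, 6)]  # attacker eavesdrops on five presses

    print(car.try_unlock(captured[4]))  # True:  newest captured code, counter moves to 5
    print(car.try_unlock(captured[1]))  # False: a single stale code is rejected, as designed
    # The Rolling Pwn report claims that replaying a *sequence* of captured codes can
    # resynchronize the counter backwards so stale codes work again; that flawed resync
    # logic is deliberately not modeled here.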

“Contrary to Honda’s claim, I independently confirmed the vulnerability by capturing and replaying a sequence of lock and unlock requests with my 2021 Honda Accord and a Software-Defined Radio.”

Although an attacker can unlock and start the car, the vulnerability doesn’t let them drive off with the vehicle, because the key fob still has to be in close proximity. However, the fact that a bad actor can get this far is already a bad sign.

At this time, the following vehicles may be affected by the vulnerability: 2012 Honda Civic, 2018 Honda XR-V, 2020 Honda CR-V, 2020 Honda Accord, 2021 Honda Accord, 2020 Honda Odyssey, 2021 Honda Inspire, 2022 Honda Fit, 2022 Honda Civic, 2022 Honda VE-1, and 2022 Honda Breeze.

“We’ve looked into past similar allegations and found them to lack substance,” said a Honda spokesperson in a statement to The Drive.

“While we don’t yet have enough information to determine if this report is credible, the key fobs in the referenced vehicles are equipped with rolling code technology that would not allow the vulnerability as represented in the report. In addition, the videos offered as evidence of the absence of rolling code do not include sufficient evidence to support the claims.”

So what’s the upshot for you? Time to start locking the glovebox. Too many people are talking about this one.


CN/US: Is Your New Car a Threat to National Security?

Starting this week, Teslas won’t be welcome in the Chinese resort town of Beidaihe. The electric cars are strictly banned on the streets of the coastal city for the next two months, as senior Communist leadership descends on the city for a secret conclave.

It’s not the first time, either. The city of Chengdu barred Teslas in advance of a June visit from President Xi Jinping, Reuters reported, while some military sites have similarly forbidden Elon Musk’s flagship product. While no official reason was released, the bans seem to be out of concern that the vehicles’ impressive array of sensors and cameras may offer a line of sight into meetings of Beijing’s senior leadership.

China has learned that diverse data, taking into account a wide difference in weather, people, and technology, improves algorithms.

If China gets better at exploiting that data, it could need less of it. So even anonymized, general data being relayed from a fleet of Chinese-made cars in North America could not only reveal individual patterns and habits but also paint a complex picture of an entire neighborhood or city, be it the daily routine of an urban military base or the schedule of a powerful cabinet minister.

In banning Teslas from certain areas, China is seemingly already preparing for that threat domestically.

Security researcher David Colombo’s white-hat hacking exposes how targeting just one car could lead to a security nightmare. “What if a threat actor such as an international terrorist organization gains the capability to hack the vehicles in a government motorcade?” Colombo wrote on Medium.

It has already happened. The German government believes Russia was behind a 2020 hack of its military transportation authority, which manages logistics for various government officials. The amount of information available from such hacks is only going to grow. “The worst-case scenario?” Le says. “The electric vehicle becomes a missile.”

It’s perhaps China’s clear focus on the automotive industry’s future that has led it to clamp down now.

So what’s the upshot for you? We never had these worries with our single-speed pushbikes.


Global: MITRE advisory spills the beans

A vulnerability advisory published by MITRE for a high-severity information disclosure vulnerability in April ironically disclosed links to over a dozen live IoT devices vulnerable to the flaw.

It isn’t unusual for security advisories to include a “reference” section with several links that validate the existence of a vulnerability. But, any such links typically lead to a proof of concept (PoC) demonstration or writeups explaining the vulnerability rather than to vulnerable systems themselves.

After vulnerabilities are made public, attackers use public IoT search engines like Shodan or Censys to hunt for and target vulnerable devices.

All of which makes this a particularly uncanny case: a public security bulletin listing the locations of not one but several vulnerable devices that are still connected to the internet.

BleepingComputer notified MITRE of this issue and why this could be a security concern.

Surprisingly, we were asked by MITRE, why we “think these sites should not be included in the advisory,” and were further told that MITRE had, in the past, “often listed URLs or other points that may be vulnerable” in similar CVE entries.

MITRE’s response prompted BleepingComputer to further contact security experts.

Will Dormann, a vulnerability analyst at the CERT Coordination Center (CERT/CC) called this “both not normal and a very BAD thing” to do.

“The parties involved in the creation of CVE entries should know better. Somewhat surprisingly, according to the GitHub repo for CVE-2022-…, the author was MITRE themselves.”

So what’s the upshot for you? Within a few hours of BleepingComputer’s email to MITRE, the CVE advisory was swiftly updated to remove all “reference” links pointing to vulnerable IoT devices, from both MITRE’s CVEProject GitHub repo and the CVE database.

But this update may not remove this information from third-party sources that have already retrieved and published an earlier copy of the entry.


Global: PyPI is rolling out 2FA for critical projects and giving away 4,000 security keys

PyPI, which is managed by the Python Software Foundation, is the main repository where Python developers can get third-party-developed open-source packages for their projects.

One way developers can protect themselves from stolen credentials is by using two-factor authentication and the PSF is now making it mandatory for developers behind “critical projects” to use 2FA in the coming months.

PyPI hasn’t declared a specific date for the requirement. “We’ve begun rolling out a 2FA requirement: soon, maintainers of critical projects must have 2FA enabled to publish, update, or modify them,” the PSF said on its PyPI Twitter account.

As part of the security drive, it is giving away 4,000 Google Titan hardware security keys, donated by Google’s open-source security team, to project maintainers. “In order to improve the general security of the Python ecosystem, PyPI has begun implementing a two-factor authentication (2FA) requirement for critical projects. This requirement will go into effect in the coming months,” PSF said in a statement.

“To ensure that maintainers of critical projects have the ability to implement strong 2FA with security keys, the Google Open Source Security Team, a sponsor of the Python Software Foundation, has provided a limited number of security keys to distribute to critical project maintainers.”

PSF says it deems any project in the top 1% of downloads over the prior six months as critical.

Presently, there are more than 350,000 projects on PyPI, meaning that more than 3,500 projects are rated as critical. PyPI calculates this on a daily basis.
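
For a sense of how that cutoff works, here is a small illustrative Python sketch. The package names and download figures are invented; PyPI computes the real ranking daily from its own download statistics.

    # Invented package names and download figures; PyPI computes the real ranking
    # daily from its own download statistics.
    download_counts = {
        "requests": 1_200_000_000,
        "numpy": 950_000_000,
        "tiny-hobby-project": 4_200,  # hypothetical small package
        # ...roughly 350,000 more projects in reality
    }

    def critical_projects(counts: dict[str, int], top_fraction: float = 0.01) -> set[str]:
        ranked = sorted(counts, key=counts.get, reverse=True)
        cutoff = max(1, int(len(ranked) * top_fraction))  # top 1% by download count
        return set(ranked[:cutoff])

    print(critical_projects(download_counts))
    # With ~350,000 real projects, a 1% cutoff yields roughly 3,500 "critical" projects.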

So what’s the upshot for you? The Titan giveaway should go a long way toward covering a good chunk of key maintainers, but not all of them.


MM: We were afraid this would happen

Myanmar’s junta government is installing Chinese-built cameras with facial recognition capabilities in more cities across the country, three people with direct knowledge of the matter said.

In tenders to procure and install the security cameras and facial recognition technology, the plans are described as safe city projects aimed at maintaining security and, in some cases, preserving civil peace, said the people who are or have been involved in the projects.

Since the February 2021 coup, local authorities have started new camera surveillance projects for at least five cities, including Mawlamyine, the country’s fourth-largest city, according to information from the three people, who asked not to be identified for fear of reprisals by the junta.

“Surveillance cameras pose a serious risk to (Myanmar’s) democracy activists because the military and police can use them to track their movements, figure out connections between activists, identify safe houses and other gathering spots, and recognize and intercept cars and motorcycles used by activists,” Human Rights Watch Deputy Asia Director Phil Robertson said in a statement to Reuters.

So what’s the upshot for you? Myanmar’s junta is engaged in widespread surveillance. It has installed intercept spyware at telecom and internet providers to eavesdrop on communications of its citizens and deployed “information combat” units to monitor and attack dissenters online.


Global: Part 2: China Police Database Was Left Open Online for Over a Year.

Last week we reported the largest data heist ever. The Shanghai police records – containing the names, government ID numbers, phone numbers, and incident reports of nearly 1 billion Chinese citizens – were stored securely, according to cybersecurity experts.

But a dashboard for managing and accessing the data was set up on a public web address and left open without a password, which allowed anyone with relatively basic technical knowledge to waltz in and copy or steal the trove of information.

The database stayed exposed for more than a year, from April 2021 through the middle of last month, when its data was suddenly wiped clean and replaced with a ransom note for the Shanghai police to discover: “your_data_is_safe, contact_for_your_data…recovery10btc.”

So what’s the upshot for you? Last week we had the Chinese gov’t pulling down all references to this. This week, however, it has all been confirmed through credible sources.


US: From the “You could not make this up” section: North Carolina Looks to Remove Public EV Chargers

U.S. Politicians have to run on some kind of platform, and Republican Ben Moss – incoming state House representative in North Carolina’s District 52 – decided that his animating principle is Being Mad at Electricity.

To prove his animosity toward this invisible menace, he’s sponsoring House Bill 1049, which would allocate $50,000 to destroy free public car chargers. His reasoning: there are no free gas/petrol/Benzin stations, so why should there be free electric chargers?

Moss said that taxpayers, especially those who don’t drive an electric vehicle, shouldn’t subsidize free chargers for those who do.

Critics of this bill might point out that increasing the number of electric cars could actually benefit owners of internal combustion vehicles, thanks to reduced demand for petroleum products.

Others point out that electricity is generated domestically, so your transportation dollars are staying in the U.S. rather than going to, say, Saudi Arabia.

But the point here is that you have to be mad at something, and concealed-carry gun rights, limiting what is taught in schools, and banning abortion were already taken.

To his credit, at least from a privacy perspective, he supported a bill that would keep booking photographs taken at the time of arrest from becoming public record.

So what’s the upshot for you? “US politicians could create jobs getting rid of things that aren’t equitable…in fact, we could create even more jobs if we extended this philosophy to other public facilities that not everybody uses.

Why should there be a library when I don’t like books?

Why are there schools? I’m not a child.

What about all those roads that go places I’ve never been, and all those town fire trucks that haven’t come to my house except for that one time?”


US: FTC warns tech companies against misusing health data

In particular, the US Federal Trade Commission (FTC) said regulators will closely scrutinize corporate claims that Americans’ data has been or will be “anonymized,” in light of substantial research showing that it can be trivial to reverse-engineer a person’s identity from anonymized datasets.
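
A toy example of why that research is so persuasive: stripping names from a dataset does little if quasi-identifiers such as ZIP code, birth year, and sex survive and can be joined against another dataset that still carries names. All records below are invented.

    # All records below are invented.
    anonymized_health = [
        {"zip": "27510", "birth_year": 1984, "sex": "F", "diagnosis": "condition A"},
        {"zip": "94105", "birth_year": 1990, "sex": "M", "diagnosis": "condition B"},
    ]

    public_records = [
        {"name": "Jane Doe", "zip": "27510", "birth_year": 1984, "sex": "F"},
        {"name": "John Roe", "zip": "94105", "birth_year": 1990, "sex": "M"},
    ]

    quasi_identifiers = ("zip", "birth_year", "sex")

    # Joining on quasi-identifiers re-attaches names to the "anonymized" rows.
    for row in anonymized_health:
        for person in public_records:
            if all(row[k] == person[k] for k in quasi_identifiers):
                print(f'{person["name"]} -> {row["diagnosis"]}')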

The FTC said it has also sued companies in the past for collecting more data than consumers consented to provide or for retaining user data indefinitely.

So what’s the upshot for you? This is not new behavior for the US FTC, but the FTC statement takes on greater significance as policymakers and digital rights groups have sounded the alarm over the novel risks that geolocation, search, and other data may pose to abortion-seekers in the wake of the recent Supreme Court ruling.


IN: A Privacy Panic Flares Up in India After Police Pull Payment Data

Prasanto K. Roy, a public policy consultant from New Delhi, is worried.

In 2017 he began sending regular donations to the Indian fact-checking organization Alt News to support its work countering online misinformation.

But on July 5 the nonprofit said that Indian payments gateway Razorpay, which it used to receive donations, had shared its donors’ data with New Delhi police following the arrest of Alt News cofounder Mohammed Zubair last month.

“When a payment gateway gives out donor databases on a police demand that is excessive, that information can be misused by the police or others it could reach,” he said. “India doesn’t even have privacy laws in place yet.”

The full extent of the data that Razorpay shared with police remains unclear, but Alt News said that the data it collects from donors includes phone numbers, email addresses, and tax IDs.

So what’s the upshot for you? “The state of privacy in digital payments in India is nonexistent.

The ease with which digital data can be shared and leaked makes privacy challenging around the world, but India’s centralized biometric identity system, Aadhaar, can add extra vulnerabilities.”


US/AU: This Is the Code the FBI Used to Wiretap the World

The FBI operation in which the agency intercepted messages from thousands of encrypted phones around the world was powered by cobbled-together code.

Motherboard has obtained that code and is now publishing sections of it that show how the FBI was able to create its honeypot.

The code shows that the messages were secretly duplicated and sent to a “ghost” contact that was hidden from the users’ contact lists.

This ghost user, in a way, was the FBI and its law enforcement partners, reading over the shoulder of organized criminals as they talked to each other.

The app uses XMPP, a long-established protocol for sending instant messages, to communicate. On top of that, Anom wrapped messages in an additional layer of encryption.

XMPP works by having each contact use a handle that in some way looks like an email address. For Anom, these included an XMPP account for the customer support channel that Anom users could contact. Another of these was a bot.

Unlike the support channel, the bot hid from Anom users’ contact lists and operated in the background, according to the code and to photos of active Anom devices obtained by Motherboard. In practice, the app scrolled through the user’s list of contacts, and when it came across the bot account, the app filtered that out and removed it from view.

That finding is corroborated by law enforcement files Motherboard obtained, which say that the bot was a hidden or “ghost” contact that made copies of Anom users’ messages.
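
To illustrate the two behaviors described (the hidden roster entry and the silent duplication of messages), here is a minimal Python sketch. The contact addresses, function names, and transport are hypothetical; this is not the published Anom code.

    GHOST_JID = "bot@anom.example"  # hypothetical address for the hidden "ghost" contact

    def visible_roster(roster: list[str]) -> list[str]:
        # The app walked the user's contact list and filtered the bot account out of view.
        return [jid for jid in roster if jid != GHOST_JID]

    def send_message(to_jid: str, body: str, transport) -> None:
        # Each message is delivered normally and also silently duplicated to the ghost.
        transport.send(to_jid, body)
        transport.send(GHOST_JID, f"copy of message to {to_jid}: {body}")

    class PrintTransport:
        # Stand-in for the real XMPP connection, just for demonstration.
        def send(self, jid: str, body: str) -> None:
            print(f"-> {jid}: {body}")

    send_message("alice@anom.example", "meet at 9", PrintTransport())
    print(visible_roster(["alice@anom.example", GHOST_JID]))  # the ghost never appears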

So what’s the upshot for you? Operation Trojan Shield has been wildly successful.

On top of the wave of arrests, authorities were also able to intervene using the messages and stop multiple planned murders.

In June, to mark the one-year anniversary of the operation’s announcement, the Australian Federal Police revealed that it has shifted some of its focus to investigating thousands of people suspected of being linked to Italian organized crime in Australia, and that it is working with international partners.



And our quote of the week: “If you can park it and not turn around to look at it, you have not bought the right car… or it was a Honda…”


That’s it for this week. Stay safe, stay secure, watch the donuts in the car park, and see you in se7en.