Follow the flags with the IT Privacy and Security Weekly Update for the week ending April 25th, 2023



Daml’ers,

This week starts with frustrated police in Sweden and finishes in the back pocket of a legal team.

[image: world flags]
-click on any flag to link to the podcast-

We move on to the troubling story of the US National Guardsman who, it now transpires, leaked way more than all the toddlers at your child’s preschool, a beer story that is sure to have many of our readers in tears, and a drafty new naming scheme based on weather events.

We get great updates from WhatsApp and Google Authenticator, and more AI news than you can shake an API at!

Finally, we flag something rumbling in California related to privacy that could make large ripples in the data lakes of collected user information.

They’re global, they’re fresh and they’re flying, so let’s follow those flags!


Se: Mullvad VPN maker says police tried to raid its offices but couldn’t find any user data

Mullvad, the Swedish company behind Mullvad VPN (virtual private network), says police walked away with nothing after attempting to seize computers from its office.

[image: Sweden flag]

According to an update on Mullvad’s site, the authorities left and didn’t take anything after it informed them that the company doesn’t store customer data.

“We argued they had no reason to expect to find what they were looking for and any seizures would therefore be illegal under Swedish law,” Mullvad writes.

“After demonstrating that this is indeed how our service works and them consulting the prosecutor they left without taking anything and without any customer information.” […]

Mullvad says this is the first time in its 14 years of operating a VPN that police have issued a search warrant, and company CEO Jan Jonsson tells The Verge he doesn’t “know exactly what they were looking for.”

Even if the authorities had seized its servers, Jonsson says that police wouldn’t have found anything due to its strict policies against keeping data.

So what’s the upshot for you? We think this is the most glowing endorsement a VPN service has received in some time, and at €5 a month – whether you sign up for one month, one year, or one decade – it seems pretty reasonable.


US: Leaker of US Documents Shared More Secrets Earlier in a Discord Group with 600 Members

Remember that U.S. Air National Guardsman who was suspected of leaking classified documents?

The New York Times has discovered “a previously undisclosed chat group on Discord” where the same airman apparently also posted “sensitive information” including “secret intelligence on the Russian war effort,” this time to a group with 600 members — and “months earlier than previously known,” in February of 2022.

[image: National Guard]

The case against Airman Teixeira, 21, who was arrested on April 13, pertains to the leaking of classified documents on another Discord group of about 50 members, called Thug Shaker Central.

There, he began posting sensitive information in October 2022, members of the group told The Times.

His job as an information technology specialist at an Air Force base in Massachusetts gave him top-secret clearance…

The user claimed to be posting information from the National Security Agency, the Central Intelligence Agency, and other intelligence agencies.

The additional information raises questions about why authorities did not discover the leaks sooner, particularly since hundreds more people would have been able to see the posts…

Unlike Thug Shaker Central, the second chat room was publicly listed on a YouTube channel and was easily accessed in seconds…

Apparently eager to impress others in the group who questioned his analysis, he said: “I have a little more than open source info. Perks of being in a USAF intel unit,” referring to the United States Air Force… At times, he appeared to be posting from the military base where he was stationed…

Airman Teixeira also claimed that he was actively combing classified computer networks for material on the Ukraine war.

When one of the Discord users urged him not to abuse his access to classified intelligence, Teixeira replied: “Too late…”

The Times says they learned about the larger chat room “from another Discord user.”

So what’s the upshot for you? So embarrassing is this for all involved that, rumor has it, even his mother won’t talk to him anymore.


EU: Miller High Life Cans Destroyed in Europe Over ‘Champagne of Beers’ Logo

Note: some readers may find the following story distressing.

Miller High Life proudly calls itself “the Champagne of Beers” on its cans, but the French aren’t having it: 2.3K cans of the lager were dumped and crushed in Europe after the Champagne regulator requested “the destruction of these illicit goods.”

[image: Miller High Life]

US beer importers had a brewed awakening after Belgian customs officials destroyed a massive cache of Miller High Life over its use of “Champagne” on the packaging.

Agents reportedly seized 2,352 cans of the American discount beverage in February after it arrived in Antwerp en route to Germany, CBS reported.

According to European law, goods can’t be imported with the word “Champagne” on their packaging unless they hail from that specific region in northeastern France.

Naturally, they didn’t feel that the Milwaukee, Wisconsin-brewed bargain brew — which has carried the moniker since 1906 — fit the bill.

Belgian customs boss Kristian Vanderwaeren told reporters that the motto went against “protected designation of origin ‘Champagne,’ and this goes against European regulations.”

The case is peculiar, given that Molson Coors Beverage Co. — Miller High Life’s parent company — does not currently export it to the EU, AP reported.

So what’s the upshot for you? If pouring beer down the drain has made you cry, you are not alone.


UK: Surrey and Sussex police unlawfully recorded phone calls via app, watchdog finds

Two police forces have been reprimanded by Britain’s data watchdog after officers unlawfully recorded more than 200,000 phone conversations using an app originally intended for hostage negotiators.

[image: Sussex Police]

The automatic recordings, made over several years, included ‘highly sensitive’ conversations with victims, witnesses, and perpetrators of suspected crimes, according to the Information Commissioner’s Office (ICO).

The app, called Another Call Recorder (ACR), recorded all incoming and outgoing calls and was originally intended for use by a small number of officers at the Surrey and Sussex forces. However, it was downloaded onto the work phones of more than 1,000 staff members.

It has now been withdrawn from use and the recordings, other than those considered to be evidential material, have been destroyed, according to the ICO.

The watchdog said it considered issuing a £1m fine to each force but opted for reprimands to reduce the impact on public services.

Police officers who downloaded the app were unaware all calls would be recorded, the watchdog said, and people were not informed their conversations were being taped.

So what’s the upshot for you? The only details left out of this story are what the reprimands were. When it comes to the privacy of individuals, no one should be above the law.


Global: Microsoft Will Name Threat Actors After Weather Events

In a move designed to simplify the way advanced persistent threat (APT) actors are publicly documented, Redmond said it will name threat actors after weather events like Typhoon, Blizzard, and Sleet to add better context to public APT disclosures.

Microsoft previously used an all-caps naming scheme based on chemical elements like ACTINIUM and IRIDIUM to track nation-state and other advanced threat activity, but the company now says the complexity, scale, and volume of threats demand a new naming taxonomy.

“With the new taxonomy, we intend to bring better context to customers and security researchers that are already confronted with an overwhelming amount of threat intelligence data,” said Microsoft’s John Lambert.

[image: weather flag]

China = Typhoon
Iran = Sandstorm
Lebanon = Rain
North Korea = Sleet
Russia = Blizzard
South Korea = Hail
Turkey = Dust
Vietnam = Cyclone
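
If you want to play along at home, the scheme is easy to model: a weather “family” keyed to the actor’s origin, plus a distinguishing adjective per group. Here’s a minimal Python sketch (the mapping above is Microsoft’s; the helper function is our own illustration):

```python
# Weather "families" from Microsoft's new taxonomy (origin -> family).
FAMILIES = {
    "China": "Typhoon", "Iran": "Sandstorm", "Lebanon": "Rain",
    "North Korea": "Sleet", "Russia": "Blizzard", "South Korea": "Hail",
    "Turkey": "Dust", "Vietnam": "Cyclone",
}

def actor_name(origin: str, adjective: str) -> str:
    """Compose a taxonomy name such as 'Onyx Sleet' from origin + adjective."""
    return f"{adjective} {FAMILIES[origin]}"

print(actor_name("North Korea", "Onyx"))   # Onyx Sleet (formerly PLUTONIUM)
print(actor_name("China", "Raspberry"))    # Raspberry Typhoon (formerly RADIUM)
```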

So what’s the upshot for you? This move comes because Microsoft actually ran out of elements in the periodic table to assign threat actor names to.

We loved their mapping of old names to new ones.

An example is North Korea, which moves from PLUTONIUM to Onyx Sleet.

Some of the names are already starting to become slightly bizarre…

Like China going from RADIUM to Raspberry Typhoon.

Out of control? Does anyone remember the failed random naming strategy for CVEs?

Names like “Fluffy Doorstick” for a 10/10 criticality vulnerability kind of doomed the proposal from the start.


Global: WhatsApp adds the option to use the same account on multiple phones

WhatsApp users are no longer restricted to using their account on just a single phone.

Today, the Meta-owned messaging service is announcing that its multi-device feature – which previously allowed you to access and send messages from additional Android tablets, browsers, or computers alongside your primary phone – is expanding to support additional smartphones.

“One WhatsApp account, now across multiple phones” is how the service describes the feature, which it says is rolling out to everyone in the coming weeks.

Setting up a secondary phone to use with your WhatsApp account starts with a fresh install of the app.

Then, rather than entering your phone number during setup and logging in as usual, you tap the new “link to existing account” option.

This will generate a QR code to be scanned by your primary WhatsApp phone via the “link a device” option in settings.

The new feature works across both iOS and Android devices.
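
WhatsApp hasn’t published the wire-level details of that linking flow, but QR-based device pairing generally follows a familiar recipe: the new device shows a public key as a QR code, and the primary phone encrypts the account credential to it. Here’s a minimal, generic sketch of that pattern in Python using the cryptography package – an illustration of the general technique, not WhatsApp’s actual protocol:

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_key(shared_secret: bytes) -> bytes:
    # Both sides stretch the Diffie-Hellman secret into an AES-256 key.
    return HKDF(hashes.SHA256(), 32, salt=None, info=b"device-link").derive(shared_secret)

# Companion (new) phone: ephemeral keypair; the public half is what the
# on-screen QR code would encode.
companion = X25519PrivateKey.generate()
qr_payload = companion.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Primary phone: scans the QR, derives a shared key, and encrypts the
# account credential to the companion device.
primary = X25519PrivateKey.generate()
key = derive_key(primary.exchange(X25519PublicKey.from_public_bytes(qr_payload)))
nonce = os.urandom(12)
blob = AESGCM(key).encrypt(nonce, b"account-credential", None)

# Companion phone: the same exchange computed from its side recovers the key.
key2 = derive_key(companion.exchange(primary.public_key()))
assert AESGCM(key2).decrypt(nonce, blob, None) == b"account-credential"
```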

WhatsApp is pitching the feature as a useful tool for small businesses that might want multiple employees to be able to send and receive messages from the same business number via different phones.

So what’s the upshot for you? As fans of Signal (which does not start the relationship by downloading your entire contact list), we hope to see this option supported there soon, too.


Global: Google Authenticator Can Now Sync 2FA Codes To the Cloud

Google Authenticator just got an update that should make it more useful for people who frequently use the service to sign in to apps and websites.

As of today, Google Authenticator will now sync any one-time two-factor authentication (2FA) codes that it generates to users’ Google Accounts.

Previously, one-time Authenticator codes were stored locally, on a single device, meaning losing that device often meant losing the ability to sign in to any service set up with Authenticator’s 2FA.
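
For context, the codes themselves are standard TOTP (RFC 6238): an HMAC over the current 30-second time step, keyed by the shared secret you enrolled – which is why losing the device used to mean losing everything. A minimal sketch, assuming a base32 secret like the ones Authenticator stores:

```python
import base64, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step, truncated to N digits."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = struct.pack(">Q", int(time.time()) // period)   # time step number
    digest = hmac.new(key, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                                # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # e.g. '492039' -- changes every 30 seconds
```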

To take advantage of the new sync feature, simply update the Authenticator app.

If you’re signed in to a Google Account within Google Authenticator, your codes will automatically be backed up and restored on any new device you use.

You can also manually transfer your codes to another device even if you’re not signed in to a Google Account by following the steps on this support page.

Some users might be wary of syncing their sensitive codes with Google’s cloud – even if they did originate from a Google product.

But Christiaan Brand, a group product manager at Google, asserts it’s in the pursuit of convenience without sacrificing security.

“We released Google Authenticator in 2010 as a free and easy way for sites to add ‘something you have’ 2FA that bolsters user security when signing in,” Brand wrote in the blog post announcing today’s change.

“With this update we’re rolling out a solution to this problem, making one-time codes more durable by storing them safely in users’ Google Account.”

So what’s the upshot for you? We don’t want to rain on the parade, but Authy’s been doing this for years.


Global: Googlers say Bard AI is “worse than useless,” ethics concerns were ignored

https://www.bloomberg.com/news/features/2023-04-19/google-bard-ai-chatbot-raises-ethical-concerns-from-employees

A new report from Bloomberg interviews 18 current and former workers and comes away with a pile of damning commentary and concerns about AI ethics teams that were “disempowered and demoralized” so Google could get Bard out the door.

According to the report, Google employees tested Bard pre-release and then were asked for their feedback, which was mostly ignored so Bard could launch quicker. Internal discussions viewed by Bloomberg called Bard “cringe-worthy” and “a pathological liar.”

When asked how to land a plane, it gave incorrect instructions that would lead to a crash.

One employee asked for scuba instructions and got an answer that, they said, “would likely result in serious injury or death.”

One employee wrapped up Bard’s problems in a February post titled, “Bard is worse than useless: please do not launch.”

Bard launched in March.

Google finds itself in a tough situation. If the company’s only concern is placating the stock market and catching up to ChatGPT, it probably isn’t going to be able to do that if it slows down to consider ethics issues.

Meredith Whittaker, a former Google manager and president of the Signal Foundation, told Bloomberg that “AI ethics has taken a back seat” at Google and says that “if ethics aren’t positioned to take precedence over profit and growth, they will not ultimately work.”

Several of Google’s AI ethics leaders have been fired or left the company in recent years. Bloomberg says that today, AI ethics reviews are “almost entirely voluntary” at Google.

So what’s the upshot for you? You could try to slow down a release at Google over ethics issues, but it probably won’t be great for your career.

Management reportedly raised the issue of too much concern about AI ethics in performance reviews!


Global: ChatGPT Creates Mostly Insecure Code, But Won’t Tell You Unless You Ask

ChatGPT, OpenAI’s large language model for chatbots, not only produces mostly insecure code but also fails to alert users to its inadequacies despite being capable of pointing out its shortcomings.

Amid the frenzy of academic interest in the possibilities and limitations of large language models, four researchers affiliated with the Université du Québec, in Canada, have delved into the security of code generated by ChatGPT, the non-intelligent, text-regurgitating bot from OpenAI.

In a pre-press paper titled, “How Secure is Code Generated by ChatGPT?” computer scientists answer the question with research that can be summarized as “not very.”

“The results were worrisome,” the authors state in their paper. "We found that, in several cases, the code generated by ChatGPT fell well below minimal security standards applicable in most contexts.

In fact, when prodded to whether or not the produced code was secure, ChatGPT was able to recognize that it was not." […]

In all, ChatGPT managed to generate just five secure programs out of 21 on its first attempt.

After further prompting to correct its missteps, the large language model managed to produce seven more secure apps – though that’s “secure” only as it pertains to the specific vulnerability being evaluated.

[image: AI flag]

It’s not an assertion that the final code is free of any other exploitable condition. […]

The academics observe in their paper that part of the problem appears to arise from ChatGPT not assuming an adversarial model of code execution.

The model, they say, “repeatedly informed us that security problems can be circumvented simply by ‘not feeding an invalid input’ to the vulnerable program it has created.”

Yet, they say, “ChatGPT seems aware of – and indeed readily admits – the presence of critical vulnerabilities in the code it suggests.” It just doesn’t say anything unless asked to evaluate the security of its own code suggestions.

Initially, ChatGPT’s response to security concerns was to recommend only using valid inputs – something of a non-starter in the real world.

It was only afterward, when prompted to remediate problems, that the AI model provided useful guidance.
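
To make that class of flaw concrete, here’s a hypothetical Python example of ours (not one of the paper’s 21 programs): the first function “works” as long as nobody feeds it malicious input, which is exactly the adversarial thinking the researchers say ChatGPT failed to apply.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, name: str):
    # Vulnerable: attacker-controlled `name` is spliced into the SQL string,
    # so input like "x' OR '1'='1" dumps every row (classic SQL injection).
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_secure(conn: sqlite3.Connection, name: str):
    # Fixed: a parameterized query treats `name` strictly as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_insecure(conn, "x' OR '1'='1"))  # returns every row
print(find_user_secure(conn, "x' OR '1'='1"))    # returns nothing
```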

So what’s the upshot for you? That’s not ideal because knowing which questions to ask presupposes familiarity with specific vulnerabilities and coding techniques.

The authors also point out that there’s ethical inconsistency in the fact that ChatGPT will refuse to create attack code but will create vulnerable code.


Global: ProfileGPT

What does ChatGPT know about you?

ProfileGPT is an app that analyzes a user’s profile and personality as seen by ChatGPT.

The goal of this tool is to raise awareness about personal data usage, and the importance of responsible AI.

Examples of information that can be extracted with ProfileGPT (a sketch of how such profiling might work follows the list):

Personal Information
Life Summary: a summary of the user’s education, work, family, and personal history.
Hobbies/Interests: a list of hobbies and interests.
Personality Assessment: an assessment of the user’s personality, offering a deep understanding of their psychological profile.
Political/Religious Views: a guess on the user’s political or religious views, if available from their messages.
Mental Health Evaluation: ProfileGPT evaluates the user’s mental health.
Predictions on Future Aspects: ProfileGPT offers predictions on the user’s future.
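
We haven’t reproduced ProfileGPT’s code here, but conceptually a profiling pass is just a prompt over your exported chat history. A hypothetical sketch using the pre-1.0 OpenAI Python client – the prompt, model choice, and truncation are our assumptions, not ProfileGPT’s implementation:

```python
import openai  # pip install openai; reads OPENAI_API_KEY from the environment

history = open("my_chatgpt_export.txt").read()  # your exported conversations

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Summarize what this chat history reveals about its "
                    "author: life summary, hobbies/interests, personality, "
                    "and any political or religious views you can infer."},
        {"role": "user", "content": history[:12000]},  # stay inside the context window
    ],
)
print(response.choices[0].message.content)
```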

So what’s the upshot for you? A great AI privacy exercise for some rainy Sunday afternoon.


Global: OpenAI Offers New Privacy Options for ChatGPT

https://www.bloomberg.com/news/articles/2023-04-25/openai-offers-new-privacy-options-for-chatgpt

OpenAI is letting people opt to withhold their ChatGPT conversations from use in training the artificial intelligence company’s models.

The move could be a privacy safeguard for people who sometimes share sensitive information with the popular AI chatbot.

The startup said Tuesday that ChatGPT users can now turn off their chat histories by clicking a toggle switch in their account settings.

When people do this, their conversations will no longer be saved in ChatGPT’s history sidebar (located on the left side of the webpage), and OpenAI’s models won’t use that data to improve over time.

OpenAI is aiming to make people feel more comfortable using the chatbot for all kinds of applications.

For example, during a demo of the feature on Monday, the company used the example of planning a surprise birthday party.

“We want to move more in this direction where people who are using our products can decide how their data is being used – if it’s being used for training or not,” OpenAI Chief Technology Officer Mira Murati said.

So what’s the upshot for you? Yes, the toggle is there in your settings. If you are testing ProfileGPT, turn this off after you run your tests.


US: ‘Delete Act’ Seeks to Give Californians More Power to Block Data Tracking

Today, Tuesday, the Senate Judiciary Committee in Sacramento is expected to consider a new bill called “The Delete Act,” or SB 362, which aims to give Californians the power to block data tracking.

[image: California flag]

“The onus is on individuals to try to protect their data from an estimated 2,000-4,000 data brokers worldwide – many of which have no other relationship with consumers beyond the trade in their data,” reports KQED.

“This lucrative trade is also known as surveillance advertising, or the ‘ad tech’ industry.”

EFF supports The Delete Act, or SB 362, by state Sen. Josh Becker, who represents the Peninsula. “I want to be able to hit that delete button and delete my personal information, delete the ability of these data brokers to collect and track me,” said Becker, of his second attempt to pass such a bill.

"These data brokers are out there analyzing, and selling personal information.

You know, this is a way to put a stop to it."

Tracy Rosenberg, a data privacy advocate with Media Alliance and Oakland Privacy, said she anticipates a lot of pushback from tech companies, because "making [the Delete Act] workable probably destroys their businesses as most of us, by now, don’t really see the value in the aggregating and sale of our data on the open market by third parties…

“It is a pretty basic-level philosophical battle about whether your personal information is, in fact, yours to share as you see appropriate and when it is personally beneficial to you, or whether it is property to be bought and sold,” Rosenberg said.

So what’s the upshot for you? We hope to have an update on this next week.


US: Now go get some dosh!

Facebook users have until August to claim their share of a $725 million class-action settlement of a lawsuit alleging privacy violations by the social media company, a new website reveals.

The lawsuit was prompted in 2018 after Facebook disclosed that the information of 87 million users was improperly shared with Cambridge Analytica.

People who had an active U.S. Facebook account between May 2007 and December 2022 have until Aug. 25 to enter a claim.

Individual settlement payments haven’t yet been established because payouts depend on how many users submit claims and how long each user maintained a Facebook account.

U.S. Facebook users can make a claim by visiting Facebookuserprivacysettlement.com and entering their name, address, email address, and confirming they lived in the U.S. and were active on Facebook between the aforementioned dates.

So what’s the upshot for you? How much will you get? Probably not very much at all, but you will have the satisfaction of knowing that you are adding depth and substance to the wallets and purses of all the lawyers (solicitors) involved in this lawsuit.


And our quote of the week - “It used to be expensive to make things public and cheap to make them private. Now it’s expensive to make things private and cheap to make them public.” -Clay Shirky, Vice Provost of Educational Technologies, New York University


[image: wind flag]

That’s it for this week. Stay safe, stay secure, keep the wind at your back, and see you in se7en.