Psyched about the IT Privacy and Security Weekly Update for the week ending February 7th, 2023


Lie down and relax. This week you might need the couch as our readings take you from therapy notes to privacy policies.
[Image: old red couch]

We have Microsoft burrowing into your computer to find out what version of its software you might be running, and Google serving up addresses you want to avoid.

We get a curious question about the Kremlin’s knowledge of what is passing through a “secure messaging system” and a bit of research that might have you reconsider the cheap Chinese phone with the great camera you had your eye on… and why it may not be such a great deal after all.

There’s an attack on a Swiss University, a breach of US police information (again), and a company that lied and is now making a full confession.

You get it all on this therapist’s couch, so put your feet up, make yourself comfortable, and let’s get this week’s session started!

FR/FI: Hacker who stole a psychotherapist’s notes and leaked them online was caught on the couch

Finland’s most wanted hacker: According to the French news site, Kivimäki was arrested around 7 a.m. on Feb. 3, after authorities in Courbevoie responded to a domestic violence report.

Kivimäki had been out earlier with a woman at a local nightclub, and later the two returned to her home but reportedly got into a heated argument and ended up on the couch.

Police responding to the scene were admitted by another woman — possibly a roommate — and found the man inside still sleeping off a long night.

When they roused him and asked for identification, the 6’ 3” blonde, green-eyed man presented an ID that stated he was of Romanian nationality.

The French police were doubtful.

After consulting records on most-wanted criminals, they quickly identified the man as Kivimäki and took him into custody.

Kivimäki was busted as a minor for handling code that had been used to crack over 60,000 servers.

He had been involved in bomb threats and multiple “swatting” incidents and was ultimately convicted of orchestrating more than 50,000 cybercrimes, but got off relatively lightly, as he was only 17 at the time.

So what’s the upshot for you? Something tells us Kivimäki won’t get off so easily this time, assuming he is successfully extradited back to Finland; a statement from the Finnish police says they expect that process to go smoothly.

Global: Microsoft to scan endpoints for versions of Office

Microsoft wants everyone to know that it isn’t looking to invade their privacy while looking through their Windows PCs to find out-of-date versions of Office software.

In its KB5021751 update last month, Microsoft included a plan to scan Windows systems to smoke out those Office versions that are no longer supported or nearing the end of support.

Those include Office 2007 (which saw support end in 2017) and Office 2010 (in 2020) and the 2013 build (this coming April).
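The logic behind such a scan is simple: map each detected Office build to its end-of-support date and classify it. Here is a minimal sketch of that classification, using the end-of-support dates mentioned above (the exact dates, from Microsoft’s public lifecycle pages, are our addition; the function names are hypothetical, not Microsoft’s):

```python
from datetime import date

# End-of-support dates for the Office builds named above
# (2007 ended in 2017, 2010 in 2020, 2013 in April 2023).
OFFICE_EOL = {
    "2007": date(2017, 10, 10),
    "2010": date(2020, 10, 13),
    "2013": date(2023, 4, 11),
}

def support_status(version: str, today: date) -> str:
    """Classify an Office version as supported, nearing EOL, or unsupported."""
    eol = OFFICE_EOL.get(version)
    if eol is None:
        return "unknown"
    if today > eol:
        return "unsupported"
    # Flag builds within roughly six months of their end-of-support date.
    return "nearing end of support" if (eol - today).days <= 180 else "supported"
```

Run against the week of this update, Office 2007 and 2010 come back "unsupported" and Office 2013 lands in the "nearing end of support" bucket.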

The company stressed that the scan would run only once and would not install anything on the user’s Windows system. It added that the update file itself is scanned to ensure it’s not infected by malware and is stored on highly secure servers to prevent unauthorized changes.

“This data is gathered from registry entries and APIs,” it wrote. “The update does not gather licensing details, customer content, or data about non-Microsoft products. Microsoft values, protects, and defends privacy.”

Microsoft then gives a link to the company’s privacy page for further reading.

So what’s the upshot for you? Users who are squeamish about this scan can download the Show or Hide Updates troubleshooter for Windows 10 and 11, which will stop this search along with disabling updates that repeatedly fail to install or are causing other problems.

US: US proposes new guidelines for the deployment of AI in its own weapons systems.

The US Department of Defense has laid out new rules on how to deal with autonomous systems known as “killer robots.”

Under the new “Autonomy in Weapon Systems” directive, the US military will be required to minimize the “probability and consequences of failures” in autonomous and semi-autonomous weapon systems to avoid unintended engagements.

Systems incorporating AI capabilities will still be allowed, provided they abide by the DoD’s AI Ethical Principles and the Responsible AI Strategy and Implementation Pathway.

The state-of-the-art technology must be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

So what’s the upshot for you? The reality is that AI was always going to make its way into armaments. Putting it on the rails still means it has the potential to go “off the rails” and we are not entirely sure this new directive will make any difference to that, longer term.

Global: Yesterday Sundar Pichai announced “Bard”

Quoting Google: “Two years ago we unveiled next-generation language and conversation capabilities powered by our Language Model for Dialogue Applications, or LaMDA for short.”

We’ve been working on an experimental conversational AI service, powered by LaMDA, that we’re calling Bard. And today, we’re taking another step forward by opening it up to trusted testers ahead of making it more widely available to the public in the coming weeks.

And in what sounds like it might just be a little dig at ChatGPT, he says, “It’s critical that we bring experiences rooted in these models to the world in a bold and responsible way. That’s why we’re committed to developing AI responsibly: In 2018, Google was one of the first companies to publish a set of AI Principles.”

So what’s the upshot for you? Well, Sundar pipped Microsoft to the post with this announcement, but there are already rumors that people are seeing ChatGPT Bing integration.

Global: Artifact: Instagram Co-founders’ take on AI-driven news feed

The two co-founders of Instagram unveiled a new app they have been developing since leaving the social media behemoth four years ago.

The Artifact app, developed by Kevin Systrom and Mike Krieger, is described as “a personalized news feed using the latest AI tech.”

On a “For You” page, Artifact provides news articles that users may find interesting.
The launch of Artifact demonstrates how AI is continuing to have a bigger impact on how people use social media platforms to consume content, including news.

Users of Artifact get a feed of selected news articles from publishers.

The app will display related articles after a user clicks on a story as it gets to know the user’s reading interests.

In addition to fierce competition, using AI to recommend content raises issues with mental health.

Last year, The Wall Street Journal looked into TikTok’s algorithm and discovered that the app was oversaturating teens with material related to eating disorders.

In a practice known as “doom-scrolling,” people have also become addicted to scrolling through their social media feeds looking for bad news.

So what’s the upshot for you? Startups are embracing AI despite the possible risks it may carry. Systrom stated that he believes algorithmic predictions will dominate social media in the future.

Let’s hope Artifact can help us stay informed without getting overwhelmed. <— AI wrote that!

Global: AI Models Spit Out Photos of Real People and Copyrighted Images

According to new research, popular image generation models can be prompted to produce identifiable photos of real people, potentially threatening their privacy.

The work also shows that these AI systems can be made to regurgitate exact copies of medical images and copyrighted work by artists.

It’s a finding that could strengthen the case for artists who are currently suing AI companies for copyright violations.

The researchers, from Google, DeepMind, UC Berkeley, ETH Zürich, and Princeton, got their results by prompting Stable Diffusion and Google’s Imagen with captions for images, such as a person’s name, many times.

Then they analyzed whether any of the images they generated matched the original images in the model’s database.
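The paper uses its own formal definition of what counts as a memorized near-copy, but the general idea of comparing a generated image against training images can be illustrated with a classic perceptual-hash technique. This is a generic sketch, not the researchers’ actual method; the function names and thresholds are our own:

```python
def average_hash(pixels):
    """Simple average hash over a small grayscale pixel grid.

    `pixels` is a 2-D list of 0-255 grayscale values (e.g. an 8x8
    downsample of an image). Each bit records whether a pixel is
    brighter than the grid's mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_like_copy(pixels_a, pixels_b, threshold=5):
    """Treat two images as near-duplicates if their hashes differ in few bits."""
    return hamming(average_hash(pixels_a), average_hash(pixels_b)) <= threshold
```

Because the hash captures coarse structure rather than exact pixels, a generated image can match a training image even after small perturbations, which is exactly why this family of techniques is used for duplicate detection.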

The group managed to extract over 100 replicas of images in the AI’s training set…

However, while the results are impressive, they come with some caveats. The images the researchers managed to extract appeared multiple times in the training data or were highly unusual relative to other images in the data set, says Florian Tramèr, an assistant professor of computer science at ETH Zürich, who was part of the group.

People who look unusual or have unusual names are at higher risk of being memorized, says Tramèr.

The researchers were only able to extract relatively few exact copies of individuals’ photos from the AI model: just one in a million images were copies, according to Webster.

But that’s still worrying, Tramèr says: “I really hope that no one’s going to look at these results and say ‘Oh, actually, these numbers aren’t that bad if it’s just one in a million.’”

“The fact that they’re bigger than zero is what matters,” he adds.

This could have implications for startups wanting to use generative AI models in healthcare because it shows that these systems risk leaking sensitive private information.

OpenAI, Google, and Stability.AI did not respond to requests for comment.

So what’s the upshot for you? Imagine if one of those images was of you. Just like ChatGPT writing a nonsensical book summary that sounds realistic, will authorities question a one-in-a-million generated image when your photo is added to Interpol’s top-10 most-wanted list? Hmmnnn…

US: A hack at ODIN Intelligence exposes a huge trove of police raid files

Leaked files reveal tactical plans for US police raids, surveillance, and facial recognition.

A huge cache of data was taken from the internal servers of ODIN Intelligence, a tech company that provides apps and services to police departments, following a hack and defacement of its website in mid-January.

The hackers published the company’s Amazon Web Services private keys for accessing its cloud-stored data and claimed to have “shredded” the company’s data and backups, but not before exfiltrating gigabytes of data from ODIN’s systems.

The breach not only exposes vast amounts of ODIN’s own internal data but also gigabytes of confidential law enforcement data uploaded by ODIN’s police department customers.

The breach raises questions about ODIN’s cybersecurity but also the security and privacy of the thousands of people — including victims of crime and suspects not charged with any offense — whose personal information was exposed.

None of the data appears encrypted.

The data included dozens of folders with full tactical plans of upcoming raids, alongside suspect mugshots, their fingerprints and biometric descriptions, and other personal information, including intelligence on individuals who might be present at the time of the raid, like children, cohabitants, and roommates, some of whom are described as having “no crim[inal] history.”

The data also contains a large amount of personal information about individuals, including the surveillance techniques that police use to identify or track them.

Other files show police using automatic number plate recognition (ANPR) readers, which can identify where a suspect drove in recent days.

Another document contained the full contents — including text messages and photos — of a convicted offender’s phone, whose contents were extracted by a forensic extraction tool during a compliance check while the offender was on probation.

So what’s the upshot for you? We are left without words on this one. The breach of ODIN Intelligence’s data serves as a reminder of the importance of cybersecurity and the need for organizations to protect confidential data.

Global: Anker Finally Comes Clean About Its Eufy Security Cameras

From The Verge: First, Anker told us it was impossible.

Then, it covered its tracks. It repeatedly deflected while utterly ignoring our emails.

So shortly before Christmas, we gave the company an ultimatum: if Anker wouldn’t answer why its supposedly always-encrypted Eufy cameras were producing unencrypted streams – among other questions – we would publish a story about the company’s lack of answers. It worked.

In a series of emails to The Verge, Anker has finally admitted its Eufy security cameras are not natively end-to-end encrypted – they can and did produce unencrypted video streams for Eufy’s web portal.

The company has apologized for the lack of communication and promised to do better, confirming it’s bringing in outside security and penetration testing companies to audit Eufy’s practices, is in talks with a “leading and well-known security expert” to produce an independent report, is promising to create an official bug bounty program, and will launch a microsite in February to explain how its security works in more detail.

Those independent audits and reports may be critical for Eufy to regain trust because of how the company has handled the findings of security researchers and journalists.

It’s a little hard to take the company at its word!

So what’s the upshot for you? Fool us once, shame on you; fool us twice, shame on us.

CH: Switzerland’s largest university confirms ‘serious cyberattack’

The University of Zurich, Switzerland’s largest university, announced on Friday it was the target of a “serious cyberattack,” which comes amid a wave of hacks targeting German-speaking institutions.

The university’s website is currently inaccessible, but the phone line to the press office is working. In a statement sent to The Record, a spokesperson described the incident as “part of a current accumulation of attacks on education and health institutions.”

Explaining this accumulation, they cited “several attacks” that “have been carried out on universities in German-speaking countries in recent weeks, resulting in suspension of their IT services for extended periods of time. The attacks are usually carried out by compromising several individual accounts and systems.”

The identity of the attackers and the nature of the attack were not disclosed. The university said it was conducted by perpetrators “acting in a very professional manner.”

The University of Zurich, which has more than 25,000 students and 3,700 academic staff, has a number of campuses across the city of Zurich. It is not yet clear what impact the attack has had on academic work.

In its statement, the university said it “immediately stepped up measures and countered the attacks with internal resources as well as external support. Accounts and systems identified as compromised have been isolated and access, in general, has been made more difficult.”

According to the statement, the university is not aware of any data being encrypted or extracted. “Nevertheless, the relevant organizations (i.e. data protection offices, cantonal police, other universities, and partner organizations) have been informed and involved,” it adds.

“As there are no indications of an intrusion into more protected zones and systems, thanks to the measures taken, IT services can continue to be used by university members for the time being. However, individual or comprehensive restrictions of services for security reasons are to be expected at any time and possibly for extended periods.”

So what’s the upshot for you? Prepare for cyberattacks by taking proactive steps to protect your institution’s data and systems.

Make sure to have a plan in place to respond quickly and effectively to any attack, and involve the relevant organizations to ensure the best possible outcome.

Global: Think Twice Before Using Google To Download Software, Researchers Warn

Searching Google for downloads of popular software has always come with risks, but over the past few months, it has been downright dangerous, according to researchers and a pseudorandom collection of queries.

“Threat researchers are used to seeing a moderate flow of malvertising via Google Ads,” volunteers at Spamhaus wrote on Thursday. “However, over the past few days, researchers have witnessed a massive spike affecting numerous famous brands, with multiple malware being utilized. This is not ‘the norm.’”

The surge is coming from numerous malware families, including AuroraStealer, IcedID, Meta Stealer, RedLine Stealer, Vidar, Formbook, and XLoader.

In the past, these families typically relied on phishing and malicious spam that attached Microsoft Word documents with booby-trapped macros.

Over the past month, Google Ads has become the go-to place for criminals to spread their malicious wares that are disguised as legitimate downloads by impersonating brands such as Adobe Reader, Gimp, Microsoft Teams, OBS, Slack, Tor, and Thunderbird.

On the same day that Spamhaus published its report, researchers from security firm SentinelOne documented an advanced Google malvertising campaign pushing multiple malicious loaders implemented in .NET. SentinelOne has dubbed these loaders MalVirt.

At the moment, the MalVirt loaders are being used to distribute malware most commonly known as XLoader, available for both Windows and macOS.

The MalVirt loaders use obfuscated virtualization to evade end-point protection and analysis.

To disguise real command and control traffic and evade network detections, MalVirt beacons to decoy command and control servers hosted at providers including Azure, Tucows, Choopa, and Namecheap.

"Until Google devises new defenses, the decoy domains and other obfuscation techniques remain an effective way to conceal the true control servers used in the rampant MalVirt and other malvertising campaigns.

“It’s clear that malvertisers have gained the upper hand over Google’s considerable might.”

So what’s the upshot for you? OK, so that certainly was not comforting. For now, you might want to use Google only to find the source URL for a download (check that what you’re given makes sense), and then, instead of following the link, type it in yourself… correctly.
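Once you have the installer, a second line of defense is to compare its cryptographic checksum against the one the vendor publishes on its own site (not one shown in an ad). A minimal sketch using Python’s standard library, with hypothetical function names of our own:

```python
import hashlib
import hmac

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of the downloaded bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_download(data: bytes, expected_hex: str) -> bool:
    """Compare against the checksum published on the vendor's own site.

    hmac.compare_digest does a constant-time comparison instead of
    short-circuiting on the first mismatched character.
    """
    return hmac.compare_digest(sha256_of(data), expected_hex.lower())
```

A trojanized installer that impersonates a legitimate brand will still produce the wrong digest, so a mismatch here is a strong signal to delete the file.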

RU: Kremlin’s Tracking of Russian Dissidents Through Telegram Suggests App’s Encryption Has Been Compromised

Russian antiwar activists placed their faith in Telegram, a supposedly secure messaging app. How does Putin’s regime seem to know their every move?

Matsapulina’s case [anecdote in the story] is hardly an isolated one, though it is especially unsettling.

Over the past year, numerous dissidents across Russia have found their Telegram accounts seemingly monitored or compromised.

Hundreds have had their Telegram activity wielded against them in criminal cases.

Perhaps most disturbingly, some activists have found their “secret chats” – Telegram’s purportedly ironclad, end-to-end encrypted feature – behaving strangely, in ways that suggest an unwelcome third party might be eavesdropping.

These cases have set off a swirl of conspiracy theories, paranoia, and speculation among dissidents, whose trust in Telegram has plummeted.

In many cases, it’s impossible to tell what’s really happening to people’s accounts – whether spyware or Kremlin informants have been used to break in, through no particular fault of the company; whether Telegram really is cooperating with Moscow; or whether it’s such an inherently unsafe platform that the latter is merely what appears to be going on.

So what’s the upshot for you? We always wondered why the folks at Telegram decided to reinvent the wheel.

We have a two-word comment on selecting a secure messaging application: “Try Signal.”

CN: Android phones from China collect way more data about you

Think you’re getting a better deal on an Android phone because you have ordered it from Wish or Baidu? Think again…

In a paper titled “Android OS Privacy Under the Loupe – A Tale from the East,” a trio of university researchers analyzed the Android system apps installed on the mobile handsets of three popular smartphone vendors in China: OnePlus, Xiaomi, and Oppo Realme.

They looked specifically at the information transmitted by the operating system and system apps. To exclude user-installed software, they assumed users had opted out of analytics and personalization, did not use any cloud storage or optional third-party services, and had not created an account on any platform run by the developer of the Android distribution.

Within this limited scope, the researchers found that Android handsets from the three named vendors “send a worrying amount of Personally Identifiable Information (PII) not only to the device vendor but also to service providers like Baidu and to Chinese mobile network operators.”

The tested phones did so even when these network operators were not providing service – no SIM card was present or the SIM card was associated with a different network operator.

“The data we observe being transmitted includes persistent device identifiers (IMEI, MAC address, etc.), location identifiers (GPS coordinates, mobile network cell ID, etc.), user profiles (phone number, app usage patterns, app telemetry), and social connections (call/SMS history/time, contact phone numbers, etc.),” the researchers state in their paper.

“Combined, this information poses serious risks of user deanonymization and extensive tracking, particularly since in China every phone number is registered under a citizen ID.”
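That deanonymization risk is easy to picture as a simple join: telemetry keyed by a phone number can be linked straight to a person once that number is registered to a citizen ID. The records and function below are entirely invented for illustration:

```python
# Toy data: every value here is hypothetical.
sim_registry = {                 # phone number -> citizen ID
    "+86-555-0100": "ID-0001",
    "+86-555-0101": "ID-0002",
}

telemetry = [                    # device telemetry keyed by phone number
    {"phone": "+86-555-0100", "cell_id": 4711, "app": "Recorder"},
    {"phone": "+86-555-0199", "cell_id": 4711, "app": "Camera"},
]

def deanonymize(records, registry):
    """Attach a citizen ID to each record whose phone number is registered."""
    return [
        {**rec, "citizen_id": registry.get(rec["phone"], "unregistered")}
        for rec in records
    ]
```

The point is that no single field needs to be sensitive on its own; the mandatory number-to-ID registration is what turns routine telemetry into a tracking record.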

As an example, the researchers claim that the Redmi phone sends POST requests to the URL “” whenever the preinstalled Settings, Note, Recorder, Phone, Message, and Camera apps are opened and used. Data is sent even if users opt out of “Send Usage and Diagnostic Data” during device startup.

So what’s the upshot for you? Even if you opt out of certain data collection, it is possible that your device is still sending data to third-party services. Be sure to research the vendor and the device before making a purchase.

US: Americans don’t understand what companies can do with their personal data—and that’s a problem

Researchers from the Annenberg School for Communication asked a nationally representative group of more than 2,000 Americans to answer a set of questions about digital marketing policies and how companies can and should use their personal data.

Their aim was to determine whether current “informed consent” practices work online.

They found that the great majority of Americans don’t understand the fundamentals of internet marketing practices and policies and that many feel incapable of consenting to how companies use their data.

As a result, the researchers say, Americans can’t truly give informed consent to digital data collection.

The survey revealed that 56% of American adults don’t understand the term “privacy policy,” often believing it means that a company won’t share their data with third parties without permission.

In actual fact, many of these policies state that a company can share or sell any data it gathers about site visitors with other websites or companies.

The survey provided many insights into Americans’ digital knowledge:

  • Only around 1 in 3 Americans knows it is legal for an online store to charge people different prices depending on where they are located.
  • More than 8 in 10 Americans believe, incorrectly, that the federal Health Insurance Portability and Accountability Act (HIPAA) stops apps from selling data collected about app users’ health to marketers.
  • Fewer than 1 in 3 Americans knows that price-comparison travel sites such as Expedia or Orbitz are not obligated to display the lowest airline prices.
  • Fewer than half of Americans know that Facebook’s user privacy settings allow users to limit some of the information about them shared with advertisers.

To date, privacy laws have been focused on individual consent, favoring companies over individuals, putting the onus on internet users to make sense of whether—and how—to opt in or out. “We have data now that shows very strongly that the individual consent model isn’t working.”

So what’s the upshot for you? It is pretty difficult to make an informed decision about something that you don’t understand. Keep reading this blog and listening to the podcast and we’ll bring you to that understanding.

Our Quote of the week: “I’d rather see artificial intelligence than no intelligence.” - Michael Crichton

[Image: modern red couch]

That’s it for this week. Stay safe, stay secure, don’t forget to book your next session, and see you in se7en.