Smart Start, Dumb Ending with the IT Privacy and Security Update for the week ending May 9th, 2023


Daml’ers,

This week we start smart and end dumb, somewhat contrary to our usual updates.
Smart
–click on the smart … or dumb to hear the podcast–

In between, we have a dumb way to protect a car, a dumb executive who got lucky, and a healthcare company that almost seems to be competing with T-Mobile for most breaches in 2023.

There’s a dumb plan by the EU to scan all your messages, and a smart move from OpenAI to give you back some privacy.

We have the FBI going “gangsta” on DDoS as a service and a brilliant writeup of Russian malware that shines a light on the current world of espionage.

Then we share the brilliant announcement of a free alternative to Copilot, while DEF CON announces it is taking aim at LLMs.

Whatever the caliber of the players, we’ve got the stories. You can make the call on the grades they get.

Come on, grab your marking pens, and let’s go!


US: Apple loses smartphone copyright battle against security start-up Corellium

Apple failed to revive a long-running copyright lawsuit against cybersecurity firm Corellium over its software that simulates the iPhone’s iOS operating system, letting security researchers identify flaws in the software.

The US Court of Appeals for the Eleventh Circuit on Monday ruled that Corellium’s CORSEC simulator is protected by copyright law’s fair use doctrine, which allows the duplication of copyrighted work under certain circumstances.

Apple argued that Corellium’s software was “wholesale copying and reproduction” of iOS and served as a market substitute for its own security research products.

Corellium countered that its copying of Apple’s computer code and app icons was only for the purposes of security research and was sufficiently “transformative” under the fair use standard.

The three-judge panel largely agreed with Corellium, finding that CORSEC “furthers scientific progress by allowing security research into important operating systems” and that iOS “is functional operating software that falls outside copyright’s core.”

So what’s the upshot for you? This smartphone loss could leave a bruised Apple.


US: NYPD urges citizens to buy AirTags to fight surge in car thefts

The New York Police Department (NYPD) and New York City’s self-proclaimed computer geek of a mayor are urging resident car owners to equip their vehicles with an Apple AirTag.

During a press conference on Sunday, Mayor Eric Adams announced the distribution of 500 free AirTags to New Yorkers, saying the technology would aid in reducing the city’s surging car theft numbers.

Adams held the press conference at the 43rd precinct in the Bronx, where he said there had been 200 instances of grand larceny of autos.

An NYPD official said that 966 Hyundais and Kias have been stolen in New York City so far this year, already surpassing 2022’s total of 819.

The NYPD’s public crime statistics tracker says there have been 4,492 vehicle thefts this year, a 13.3 percent increase compared to the same period last year and the largest increase among NYC’s seven major crime categories.

Hyundais and Kias were the subjects of the Kia Challenge TikTok trend, which encouraged people to steal said vehicles with a mere USB-A cable. The topic has graduated well beyond a social media fad into a serious concern.

So what’s the upshot for you? Hey Hyundai and Kia, come on, you’re making the Big Apple look bad. And with Mayor Eric Adams telling people to affix AirTags to cars, we suspect we might see a few more stalking episodes shortly.


US: Ex-Uber Security Chief Walks

A judge sentenced Joe Sullivan, the former chief security officer at Uber, to three years of probation and 200 hours of community service on Thursday for concealing a 2016 cyberattack from authorities and obstructing a federal investigation.

Sullivan’s case is likely the first time a security executive has faced criminal charges for mishandling a data breach, and the response to Sullivan’s case has split the cybersecurity community.

In October, a jury found Sullivan guilty of obstructing an active FTC investigation into Uber’s security practices and concealing a 2016 data breach that affected 57 million riders and drivers.

Uber paid the hackers $100,000 not to release any stolen data and to keep the attack quiet. Sullivan and his team routed the payment through the company’s bug bounty program, which good-faith security researchers normally use to report flaws.

The hack wasn’t publicly disclosed until 2017, shortly after Dara Khosrowshahi stepped into the CEO role.

Khosrowshahi fired Sullivan in 2017, telling the jury last fall that he thought the decision to conceal the breach was “the wrong decision.”

Sullivan then joined Cloudflare as its chief security officer in 2018, and he stayed there until July 2022 when he stepped down to prepare for his trial.

“If I have a similar case tomorrow, even if the defendant had the character of Pope Francis, they would be going to prison,” Judge William Orrick said during the sentencing on Thursday.

“When you go out and talk to your friends, to your CISOs, you tell them that you got a break not because of what you did, not even because of who you are, but because this was just such an unusual one-off,” Orrick added.

So what’s the upshot for you? CISOs (chief information security officers) can now all exhale in unison.


EU: EU Lawyers Say Plan To Scan Private Messages For Child Abuse May Be Unlawful

An EU plan under which all WhatsApp, iMessage, and Snapchat accounts could be screened for child abuse content has hit a significant obstacle after internal legal advice said the courts would probably annul it for breaching users’ rights.

Under the proposed “chat controls” regulation, any encrypted service provider could be forced to survey billions of messages, videos, and photos for “identifiers” of certain types of content where it was suspected a service was being used to disseminate harmful material.

The providers issued with a so-called “detection order” by national bodies would have to alert police if they found evidence of suspected harmful content being shared or the grooming of children.
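For the technically curious, the “identifiers” here generally mean content hashes compared against a database of known abuse material. Below is a minimal Python sketch of just that matching step. This is our illustration, not anything specified in the draft regulation; real systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, rather than the exact digests shown here.

```python
import hashlib

# Hypothetical blocklist of digests of known-bad files (illustrative only;
# the entry below is simply the SHA-256 of the empty file).
KNOWN_BAD_DIGESTS = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_identifier(attachment: bytes) -> bool:
    """Return True if this attachment's digest appears on the blocklist."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_DIGESTS

print(matches_identifier(b""))         # True  - digest is on the list
print(matches_identifier(b"holiday"))  # False - unknown content
```

The crux of the privacy objection is that, for an end-to-end encrypted service, a check like this would have to run on your device before encryption, which amounts to scanning everyone, all the time.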

Privacy campaigners and service providers have already warned that the proposed EU regulation and a similar online safety bill in the UK risk end-to-end encryption services such as WhatsApp disappearing from Europe.

Now leaked internal EU legal advice, presented to diplomats from the bloc’s member states on 27 April and seen by the Guardian, raises significant doubts about the lawfulness of the regulation unveiled by the European Commission in May last year.

The legal service of the Council of the EU, the decision-making body led by national ministers, has advised that the proposed regulation poses a “particularly serious limitation to the rights to privacy and personal data” and that there is a “serious risk” of it falling foul of a judicial review on multiple grounds.

The EU lawyers write that the draft regulation “would require the general and indiscriminate screening of the data processed by a specific service provider, and apply without distinction to all the persons using that specific service, without those persons being, even indirectly, in a situation liable to give rise to criminal prosecution.”

The legal service goes on to warn that the European Court of Justice has previously judged the screening of communications metadata is “proportionate only for the purpose of safeguarding national security” and therefore “it is rather unlikely that similar screening of content of communications for the purpose of combating the crime of child sexual abuse would be found proportionate, let alone with regard to the conduct not constituting criminal offenses.”

The lawyers conclude the proposed regulation is at “serious risk of exceeding the limits of what is appropriate and necessary in order to meet the legitimate objectives pursued, and therefore of failing to comply with the principle of proportionality”.

The legal service is also concerned about the introduction of age verification technology and processes to popular encrypted services.

The lawyers write that this would necessarily involve either the mass profiling of users, the biometric analysis of the user’s face or voice, or a digital certification system that, they note, “would necessarily add another layer of interference with the rights and freedoms of the users,” the Guardian reports.

“Despite the advice, it is understood that 10 EU member states – Belgium, Bulgaria, Cyprus, Hungary, Ireland, Italy, Latvia, Lithuania, Romania, and Spain – continue with the regulation without amendment.”

So what’s the upshot for you? “Laws are for other people.”


Global: OpenAI No Longer Relies On API Customer Data To Train ChatGPT

OpenAI CEO Sam Altman told CNBC that the company no longer trains its AI large-language models such as GPT with paying customer data.

“Customers clearly want us not to train on their data, so we’ve changed our plans: We will not do that,” Altman told CNBC’s Andrew Ross Sorkin.

OpenAI’s terms of service were quietly updated on March 1, records from the Internet Archive’s Wayback Machine show.

“We don’t train on any API data at all, we haven’t for a while,” Altman told CNBC. APIs, or application programming interfaces, let customers plug their own software directly into OpenAI’s models.

OpenAI’s business customers, including Microsoft, Salesforce, and Snapchat, are more likely to use OpenAI’s API capabilities.
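For the non-developers, “using the API” looks roughly like the sketch below, written against OpenAI’s Python library as it stood in early 2023 (the key, model, and prompt are placeholders):

```python
# pip install openai   (the pre-1.0 client library current in early 2023)
import openai

openai.api_key = "sk-..."  # placeholder; your secret API key

# Data sent via this API route is what OpenAI says it no longer trains on.
response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize this incident report..."}],
)
print(response["choices"][0]["message"]["content"])
```

Text typed into the consumer ChatGPT web app takes a different route, which is where the caveat below comes in.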

But OpenAI’s new privacy and data protection extends only to customers who use the company’s API services.

“We may use Content from Services other than our API,” the company’s updated Terms of Use note.

That could include, for example, text that employees enter into the wildly popular chatbot ChatGPT.

So what’s the upshot for you? Even Amazon has recently had to warn employees not to share confidential information with ChatGPT for fear that it might show up in other people’s answers.


US: Feds Seize 13 More DDoS-For-Hire Platforms In Ongoing International Crackdown

The US Justice Department has seized the domains of 13 DDoS-for-hire services as part of an ongoing initiative to combat the Internet menace.

OK, first, what does DDoS-for-hire mean? A distributed denial-of-service attack typically throws huge amounts of traffic at a URL or web server until it fails under the load.

Many of the participants are unwitting compromised computers or routers, so DDoS-for-hire is a service that provides a stream of traffic from those compromised endpoints for a fee.

The providers of these illicit platforms describe them as “booter” or “stresser” services that let site admins test the robustness and stability of their own infrastructure (a legitimate use case).

But almost all are patronized by people out to exact revenge on sites they don’t like, or to further extortion, bribery, or other forms of graft.
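As an aside, the classic first line of defense on the receiving end is per-client rate limiting, sketched below in Python with made-up thresholds. It also shows why attackers bother renting a distributed botnet: thousands of compromised endpoints can each stay politely under any single-IP limit.

```python
import time

class TokenBucket:
    """Allow a client at most `rate` requests/second, with bursts up to `burst`."""

    def __init__(self, rate: float, burst: int):
        self.rate, self.capacity = rate, float(burst)
        self.tokens, self.last = float(burst), time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False  # caller should drop the request or return HTTP 429

# One bucket per client IP: 5 requests/second steady state, bursts of 10.
buckets: dict[str, TokenBucket] = {}

def should_serve(client_ip: str) -> bool:
    return buckets.setdefault(client_ip, TokenBucket(5.0, 10)).allow()
```

In practice a real DDoS saturates your bandwidth before your code ever runs, which is why serious mitigation happens upstream at CDNs and traffic-scrubbing services.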

The international law enforcement initiative to address DDoS-for-hire is known as Operation PowerOFF.

In December, federal authorities seized another 48 domains.

Ten of the services then returned under new domains, many closely resembling their previous names.

"For example, one of the domains seized this week – cyberstress.org – appears to be the same service operated under the domain cyberstress.us, which was seized in December.

According to a seizure warrant filed in federal court, the FBI opened live accounts on each service and used them to launch attacks against high-capacity test sites under the FBI’s own control.

“The FBI tested each of the services associated with the SUBJECT DOMAINS, meaning that agents or other personnel visited each of the websites and either used previous login information or registered a new account on the service to conduct attacks,” FBI Special Agent Elliott Peterson wrote in the affidavit.

“I believe that each of the SUBJECT DOMAINS is being used to facilitate the commission of attacks against unwitting victims to prevent the victims from accessing the Internet, to disconnect the victim from or degrade communication with established Internet connections, or to cause other similar damage.”

So what’s the upshot for you? This is good work by the FBI, well, unless you were a DDoS’er!


US: NextGen Healthcare Says Hackers Accessed Personal Data of More Than 1 Million Patients

https://apps.web.maine.gov/online/aeviewer/ME/40/cb1d4654-0ce0-4e59-9eec-24391249e2a8.shtml

NextGen Healthcare, a U.S.-based provider of electronic health record software, admitted that hackers breached its systems and stole the personal data of more than 1 million patients.

In a data breach notification filed with the Maine attorney general’s office, NextGen Healthcare confirmed that hackers accessed the personal data of 1.05 million patients, including approximately 4,000 Maine residents.

In a letter sent to those affected, NextGen Healthcare said that hackers stole patients’ names, dates of birth, addresses, and Social Security numbers.

“Importantly, our investigation has revealed no evidence of any access or impact to any of your health or medical records or any health or medical data,” the company added.

TechCrunch asked NextGen Healthcare whether it has the means, such as logs, to determine what data was exfiltrated, but company spokesperson Tami Andrade declined to answer.

In its filing with Maine’s AG, NextGen Healthcare said it was alerted to suspicious activity on March 30 and later determined that hackers had access to its systems between March 29 and April 14, 2023.

The notification says that the attackers gained access to its NextGen Office system – a cloud-based EHR and practice management solution – using client credentials that “appear to have been stolen from other sources or incidents unrelated to NextGen.”

“When we learned of the incident, we took steps to investigate and remediate, including working together with leading outside cybersecurity experts and notifying law enforcement,” Andrade told TechCrunch in a statement.

“The individuals known to be impacted by this incident were notified on April 28, 2023, and we have offered them 24 months of free fraud detection and identity theft protection.”

So what’s the upshot for you? This is becoming a pattern for NextGen, which was also the victim of a ransomware attack in January this year.

The stolen data from that earlier breach included employee names, addresses, phone numbers, and passport scans, and appears to be available on the dark web.


US/CA: US and Canadian cybersecurity agencies publish the book on Russia’s “Snake” malware

go.dhs.gov/4mc

The Snake implant is considered the most sophisticated cyber espionage tool designed and used by Center 16 of Russia’s Federal Security Service (FSB) for long-term intelligence collection on sensitive targets.

To conduct operations using this tool, the FSB created a covert peer-to-peer (P2P) network of numerous Snake-infected computers worldwide.

Many systems in this P2P network serve as relay nodes that route disguised operational traffic to and from Snake implants on the FSB’s ultimate targets.

Snake’s custom communications protocols employ encryption and fragmentation for confidentiality and are designed to hamper detection and collection efforts.
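To make “encryption and fragmentation” concrete, here is a toy Python sketch of the general technique. This is entirely our illustration, not Snake’s actual protocol, framing, or key handling, which the advisory documents in far more detail.

```python
# pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

KEY = AESGCM.generate_key(bit_length=256)  # toy key; real implants manage keys covertly

def fragment_and_encrypt(payload: bytes, chunk_size: int = 256) -> list[bytes]:
    """Split a payload into chunks and encrypt each one independently,
    so no single intercepted fragment reveals the whole message."""
    aead = AESGCM(KEY)
    frames = []
    for i in range(0, len(payload), chunk_size):
        nonce = os.urandom(12)  # fresh nonce per fragment
        frames.append(nonce + aead.encrypt(nonce, payload[i:i + chunk_size], None))
    return frames

def decrypt_and_reassemble(frames: list[bytes]) -> bytes:
    aead = AESGCM(KEY)
    return b"".join(aead.decrypt(f[:12], f[12:], None) for f in frames)

assert decrypt_and_reassemble(fragment_and_encrypt(b"collected intel" * 40)) == b"collected intel" * 40
```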

American and Canadian cybersecurity agencies have identified Snake infrastructure in over 50 countries across North America, South America, Europe, Africa, Asia, and Australia, including the United States and Russia itself.

Although Snake uses infrastructure across all industries, its targeting is purposeful and tactical in nature.

Globally, the FSB has used Snake to collect sensitive intelligence from high-priority targets, such as government networks, research facilities, and journalists.

Within the United States, the FSB has victimized industries including education, small businesses, and media organizations, as well as critical infrastructure sectors including government facilities, financial services, critical manufacturing, and communications.

So what’s the upshot for you? If spy thrillers are your thing, you might love these 50 pages.


Global: StarCoder, a Free Alternative to GitHub’s Copilot

AI startup Hugging Face and ServiceNow Research, ServiceNow’s R&D division, have released StarCoder, a free alternative to code-generating AI systems along the lines of GitHub’s Copilot.

Code-generating systems like DeepMind’s AlphaCode, Amazon’s CodeWhisperer, and OpenAI’s Codex, which powers Copilot, provide a tantalizing glimpse at what’s possible with AI within the realm of computer programming.

Assuming the ethical, technical, and legal issues are someday ironed out (and AI-powered coding tools don’t cause more bugs and security exploits than they solve), they could cut development costs substantially while allowing coders to focus on more creative tasks.

According to a study from the University of Cambridge, at least half of developers’ efforts are spent debugging and not actively programming, which costs the software industry an estimated $312 billion per year.

But so far, only a handful of code-generating AI systems have been made freely available to the public – reflecting the commercial incentives of the organizations building them (see: Replit).

StarCoder, which by contrast is licensed to allow for royalty-free use by anyone, including corporations, was trained on over 80 programming languages as well as text from GitHub repositories, including documentation and programming notebooks.

StarCoder integrates with Microsoft’s Visual Studio Code code editor and, like OpenAI’s ChatGPT, can follow basic instructions (e.g., “create an app UI”) and answer questions about code.
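If you want to kick the tires, the weights are published on Hugging Face. Here is a minimal sketch using the transformers library; you will need to accept the model’s license on huggingface.co first, and the full 15B-parameter model wants serious GPU memory (quantized community variants are lighter).

```python
# pip install transformers torch accelerate
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "bigcode/starcoder"  # published checkpoint; gated behind a license click-through

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto")

# Code completion: give it the start of a function and let it continue.
prompt = "def fibonacci(n: int) -> int:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(outputs[0]))
```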

Have a coffee and read the whitepaper: https://drive.google.com/file/d/1cN-b9GnWtHzQRoE7M7gAEyivY0kl4BYs/view

So what’s the upshot for you? This one looks interesting. It’s still being refined, but you can start experimenting with it now.


Global: DEF CON To Take Aim at LLMs

This year’s DEF CON AI Village has invited hackers to show up, dive in, and find bugs and biases in large language models (LLMs) built by OpenAI, Google, Anthropic, and others.

The collaborative event, which AI Village organizers describe as “the largest red teaming exercise ever for any group of AI models,” will host “thousands” of people, including “hundreds of students from overlooked institutions and communities,” all of whom will be tasked with finding flaws in LLMs that power today’s chatbots and generative AI.

Think traditional bugs in code, but also problems more specific to machine learning, such as bias, hallucinations, and jailbreaks – all of which ethical and security professionals are now having to grapple with as these technologies scale.

DEF CON is set to run from August 10 to 13 this year in Las Vegas, USA.

For those participating in the red teaming this summer, the AI Village will provide laptops and timed access to LLMs from various vendors.

Currently, this includes models from Anthropic, Google, Hugging Face, Nvidia, OpenAI, and Stability.

The village people’s announcement also mentions this is “with participation from Microsoft,” so perhaps hackers will get a go at Bing.

Red teams will also have access to an evaluation platform developed by Scale AI.

There will be a capture-the-flag-style point system to promote the testing of “a wide range of harms,” according to the AI Village.

Whoever gets the most points wins a high-end Nvidia GPU.

So what’s the upshot for you? Of interest is who is sponsoring the initiative: the event is supported by the (U.S.) White House Office of Science and Technology Policy; America’s National Science Foundation’s Computer and Information Science and Engineering (CISE) Directorate; and the Congressional AI Caucus.


Global: Gen Z is buying dumbphones to reduce distractions

Gen Z is broadly defined as those born from the mid-to-late ’90s through the early 2010s – the generation after millennials, currently in the 10-28 age bracket.

Nokia is selling tens of thousands of its dumbphones each month – with Gen Z leading the charge toward a simpler, less distracting device.

dumb
–click on the smart … or dumb to hear the podcast–

The Wall Street Journal reports on the trend, which is led by Gen Z but not exclusive to them.

While some Gen Zers might be buying smartphones that flip and fold, like the $1,000 Samsung Galaxy Z Flip 4, the chatter online centers on “dumb” models with few capabilities.

These devices are experiencing a renaissance as budget second phones—allowing you to detach from constant notifications and the lure of infinite scroll, without losing the ability to send texts and make calls in an emergency.

Young people aren’t the only fans. Nokia sells tens of thousands of its flip phones each month in the U.S., according to Lars Silberbauer, chief marketing officer of HMD Global, the Finnish manufacturer of Nokia phones.

Sales are growing across demographics, he said. “It’s not a small trend.”

So what’s the upshot for you? We love that 9to5Mac was the source for this story about going back to texting with 12 keys. It seems so incongruous.


And our quote of the week - “A person who asks a question is a fool for 5 minutes…but a person who doesn’t is a fool forever.”


That’s it for this week. Stay safe, stay secure, don’t drop your phone in the, and see you in se7en.



The proliferation of community-based or commercially derived open-source tools in the ‘Copilot’ and ‘OpenAI’ sectors is only going to increase. People like myself are happy to pay USD $20+ per month for a service, but we expect the full service: no filtering, no ethical constraints, no ‘niceness’ applied.

Ask a question, get an answer … but what you do with that answer is 100% on you. If we totally remove people’s agency to make Bad or Good decisions through the Platform, we will lose the ability to deal with adversity.

It’s important to become familiar with AI variants and the surrounding toolsets. The issue for most businesses is the handling of sensitive data around these toolsets while they come to understand and leverage the benefits on offer.

It’s like learning to ride a bicycle: falling over and skinning your elbow may be OK, while getting hit by a car certainly isn’t.
