This week we have a refreshing break for anyone in the US who has been bludgeoned with non-stop political ads for the last two weeks. (These ads are everywhere: TV, the sides of buses, the Internet, radio, and even people’s front lawns.)
We move on to a story about plagiarism by AI, scanning of devices, scanning of faces, and a new audience for New York neighborhood cams.
In the name of efficiency, we have smartwatches in meat factories, and a new app that you can load on your computer to index what you did and said for weeks at a time.
Finally, we end with what may be the most practical use of AI of all time: writing wedding gift thank you letters.
By the time you get to the end of this update, you’ll be in the know, refreshed, and maybe even ready for the next 4 months of U.S. election recounts!
There’s no longer a question about whether TikTok staff in China can access Europeans’ data.
TikTok’s policy update comes amid a yearlong investigation by Ireland’s Data Protection Commission, which is looking into its data-transfer policies under the EU’s General Data Protection Regulation.
The inquiry is part of Western governments’ increased scrutiny of the video-sharing platform, which some US officials have characterized as a national security threat due to frequently close relationships between Chinese companies and the government in Beijing.
So what’s the upshot for you? The new policy goes into effect on December 2. And really, we couldn’t make this up.
Global: Microsoft’s GitHub Copilot Sued Over ‘Software Piracy on an Unprecedented Scale’
First, what is GitHub Copilot?
Microsoft describes it as: “…an AI programmer that helps developers to write better code. GitHub Copilot helps a programmer discover alternative ways to solve problems, write tests and explore new API tools very quickly without the need to search for answers on the internet.”
The lawsuit, filed against GitHub, its owner Microsoft, and OpenAI, challenges the legality of both GitHub Copilot and OpenAI Codex, the model that powers the tool. It also accuses GitHub of monetizing code from open-source programmers, “despite GitHub’s pledge never to do so.”
“By training their AI systems on public GitHub repositories (though based on their public statements, possibly much more), we contend that the defendants have violated the legal rights of a vast number of creators who posted code or other work under certain open-source licenses on GitHub,” said Matthew Butterick, the developer bringing the case.
These licenses include a set of 11 popular open-source licenses that all require attribution of the author’s name and copyright.
This includes the MIT License, the GNU General Public License, and the Apache License.
The case claims that Copilot strips these licenses, offered by thousands, possibly millions, of software developers, and is therefore committing software piracy on an unprecedented scale.
Copilot, which is entirely run on Microsoft Azure, often simply reproduces code that can be traced back to open-source repositories or licensees, according to the lawsuit.
The code never contains attributions to the underlying authors, which is in violation of the licenses. “It is not fair, permitted, or justified.
On the contrary, Copilot’s goal is to replace a huge swath of open source by taking it and keeping it inside a GitHub-controlled paywall…”
Moreover, the case stated that the defendants have also violated GitHub’s own terms of service and privacy policies, the DMCA code 1202 which forbids the removal of copyright-management information, and the California Consumer Privacy Act.
Matthew Butterick says, “AI can only elevate humanity if it’s fair and ethical for everyone. If it’s not… it will just become another way for the privileged few to profit from the work of the many.”
So what’s the upshot for you? We think he has a point. To train the AI on GitHub code, have it dole out swathes of your code, and then charge someone for it as an Azure service feels wrong, and GitHub has already admitted that in some cases Copilot does produce copied code…
UK: British Government Is Scanning All Internet Devices Hosted In the UK
The United Kingdom’s National Cyber Security Centre (NCSC), the government agency that leads the country’s cyber security mission, is now scanning all Internet-exposed devices hosted in the UK for vulnerabilities.
The goal is to assess the UK’s vulnerability to cyber-attacks and to help the owners of Internet-connected systems understand their security posture.
“These activities cover any internet-accessible system that is hosted within the UK and vulnerabilities that are common or particularly important due to their high impact,” the agency said.
“The NCSC uses the data we have collected to create an overview of the UK’s exposure to vulnerabilities following their disclosure and track their remediation over time.”
NCSC’s scans are performed using tools hosted in a dedicated cloud-hosted environment from scanner.scanning.service.ncsc.gov.uk and two IP addresses (220.127.116.11 and 18.104.22.168). The agency says that all vulnerability probes are tested within its own environment to detect any issues before scanning the UK Internet.
“We’re not trying to find vulnerabilities in the UK for some other, nefarious purpose,” NCSC technical director Ian Levy explained.
“We’re beginning with simple scans, and will slowly increase the complexity of the scans, explaining what we’re doing (and why we’re doing it).”
The NCSC says it will “take steps to remove [any sensitive or personal data] and prevent it from being captured again in the future.”
So what’s the upshot for you? British organizations can opt out of having their servers scanned by emailing a list of the IP addresses they want excluded to firstname.lastname@example.org.
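If you run UK-hosted servers and want to spot these probes in your access logs rather than opt out, a minimal sketch follows. It assumes you already have client IPs extracted from your logs; the only fact taken from the article is the scanner’s hostname under scanning.service.ncsc.gov.uk, and everything else (function names, structure) is illustrative.

```python
import socket

# Reverse-DNS suffix the NCSC says its scanner uses.
NCSC_SUFFIX = "scanning.service.ncsc.gov.uk"

def is_ncsc_scanner(ptr_name: str) -> bool:
    """Check whether a reverse-DNS (PTR) name belongs to the NCSC scanner."""
    name = ptr_name.rstrip(".").lower()
    return name == NCSC_SUFFIX or name.endswith("." + NCSC_SUFFIX)

def classify_log_ip(ip: str) -> bool:
    """Reverse-resolve an IP from your access logs and check its PTR name.

    This performs a live DNS lookup; IPs with no PTR record simply
    return False.
    """
    try:
        name, _aliases, _addrs = socket.gethostbyaddr(ip)
    except (socket.herror, socket.gaierror):
        return False
    return is_ncsc_scanner(name)
```

Note that a PTR record alone is spoofable; for anything beyond curiosity you would also forward-resolve the returned name and confirm it points back at the same IP.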
Global: Soccer (Football) Fans, You’re Being Watched
This fall, more than 15,000 cameras will monitor soccer fans across eight stadiums and on the streets of Doha during the 2022 World Cup, an event expected to attract more than 1 million football fans from around the globe.
“What you see here is the future of stadium operations,” said the organizers’ chief technology officer in August.
“A new standard, a new trend in venue operations, this is our contribution from Qatar to the world of sport.”
Qatar’s World Cup organizers are not alone in deploying biometric technology to monitor soccer fan activity.
In recent years, soccer clubs and stadiums across Europe have been introducing these security and surveillance technologies.
In Denmark, Brondby Stadium has been using facial recognition for ticketing verification since 2019.
In the Netherlands, NEC Nijmegen has used biometric technology to grant access to Goffert Stadium.
France’s FC Metz briefly experimented with a facial recognition device to identify fans banned from Saint-Symphorien Stadium.
And the UK’s Manchester City reportedly hired Texas-based firm Blink Identity in 2019 to deploy facial recognition systems at Etihad Stadium.
In Spain, Atletico Osasuna uses facial recognition to monitor and control access to El Sadar Stadium, while Valencia CF signed a deal in June 2021 with biometrics company FacePhi to design and deploy facial-recognition technology at Mestalla Stadium in the upcoming season.
The sports club then became a global ambassador for the company’s technology.
FacePhi’s biometric onboarding technology was already used for a pilot project to enroll Valencia CF fans in an automated access control system that allowed them to get into the stadium using a QR code via the football club’s mobile app.
(A FacePhi spokesperson declined to provide details about the project, saying only that the company is “not yet in the implementation phase with Valencia CF.”)
So what’s the upshot for you? You might have to wear a mask to your next footie match.
US: The NYPD Joins Amazon’s Ring Neighbors Surveillance Network
The New York Police Department has joined Ring Neighbors, the neighborhood surveillance network built around Amazon’s Ring security cameras.
The partnership, announced yesterday, means the NYPD will view people’s posts on Neighbors and be able to post directly to it, including requests for public help on “active police matters.”
Neighbors is a Nextdoor-like extension of Ring’s security camera business, allowing residents of a neighborhood to discuss crime and safety as well as post footage from their cameras.
While many law enforcement departments have joined Neighbors in recent years, this marks its adoption by America’s largest police force. (Police could separately request Ring footage for criminal investigations without the app.)
It’s part of an increasingly tight integration between Amazon and police – one that’s raised both concerns about privacy and questions about its crime-solving value.
So what’s the upshot for you? Well, let’s see. The NYPD could already request Ring camera footage, so no change there. This just removes even more friction.
We think there should be some friction.
Global: New LinkedIn profile features help verify identity, detect and remove fake accounts, and boost authenticity
Following the creation, and subsequent removal, of thousands of fake profiles on LinkedIn, Vice President of Product Management Oscar Rodriguez updates us on LinkedIn’s progress in a blog post:
We’re adding a new “About this profile” feature that will show you when a profile was created and last updated, along with whether the member has verified a phone number and/or work email associated with their account. We hope that viewing this information will help you make informed decisions, such as when you are deciding whether to accept a connection request or reply to a message.
Detecting fake accounts that use AI-generated profile photos: we’re seeing rapid advances in AI-based synthetic image generation technology, and we’ve created a deep learning model to better catch profiles made with it.
AI-based image generators can create an unlimited number of unique, high-quality profile photos that do not correspond to real people.
Fake accounts sometimes use these convincing, AI-generated profile photos to make their fake LinkedIn profile appear more authentic.
Our new deep-learning-based model proactively checks profile photo uploads to determine if the image is AI-generated using cutting-edge technology designed to detect subtle image artifacts associated with the AI-based synthetic image generation process without performing facial recognition or biometric analyses.
This model helps increase the effectiveness of our automated anti-abuse defenses to help detect and remove fake accounts before they have a chance to reach our members.
Helping stop suspicious messages:
We’re adding a warning to some LinkedIn messages that include high-risk content that could impact your security.
We may warn you about messages that ask you to take the conversation to another platform because that can be a sign of a scam.
These warnings will also give you the choice to report the content without letting the sender know.
So what’s the upshot for you? Generally we applaud these updates, but we can already see the downside of that last feature: anyone wanting to create mayhem for real members could simply report them. And because reports are where a real person gets involved, the recipient’s profile could be tied up for months.
US: Does Facebook Have Your Phone Number? Here’s How To Find Out
Facebook has quietly rolled out a tool that allows you to check if the social network has your phone number and if so, you can delete it.
The new Facebook tool has rolled out “quietly” as it’s been available since May this year and Meta hasn’t announced it publicly, according to Business Insider, which first reported the story.
So now for the important part—how do you use the new Facebook contacts removal tool?
First, you need to click through to the correct page. https://www.facebook.com/contacts/removal
There you can confirm which contact you want it to look for, such as email address or phone number.
You’ll need to enter your details including your area code, and then you can specify where you want the search to take place, for example, Facebook and Messenger, and Instagram. Facebook will send you a one-time SMS code to enter to confirm it’s you.
If Facebook finds the number it will ask: “Should we delete and block it?”
If you confirm, Facebook will also block it from being uploaded again. Handy.
So what’s the upshot for you? We tried this half a dozen times and it either didn’t send the confirmation text or it errored out. Perhaps those who are U.S.-based will have better luck.
Global: Signal rolls out Snapchat-like “stories” feature
Encrypted messaging app Signal will soon have an ephemeral “stories” feature, with video, pictures, or text that disappear after 24 hours.
Signal, often used by journalists, activists, and privacy-minded individuals, planned to roll out the feature yesterday, the nonprofit’s president Meredith Whittaker said at the Web Summit in Lisbon, Portugal last Thursday.
User updates that last on profiles for 24 hours, often called “stories,” are something popularized by Snapchat and Instagram, both companies with targeted advertising-based business models that also monetize the feature, something Signal is vehemently opposed to.
“The short answer is that people want [stories],” Whittaker replied when asked why the privacy-focused app is rolling out such a feature.
So what’s the upshot for you? Yes, we too got the update at 7:14 am this morning.
US: Big Meat Companies Want To Use Smartwatches To Track Workers’ Every Move
Two of the largest meat companies in the U.S. have invested in a smartwatch app that allows managers to track and monitor workers’ movements.
According to a report by Investigate Midwest, a non-profit newsroom covering the agri-business industry, JBS and Tyson Foods have backed Mentore, a start-up that claims it uses surveillance data and AI to improve worker productivity and reduce workplace injuries.
Once paired with a compatible smartwatch, Mentore’s application uses sensors to collect data on the force, rotation, speed, and directional movement of a worker’s arm as they repeatedly complete the same task.
The company’s algorithm then analyzes that data to determine if those movements are safe and alerts the individual if they are found to be using too much speed or force.
According to the report and Mentore’s co-founder, Apoorva Kiran, the watch can also detect dehydration.
This raw watch data is then converted to real-time metrics that are made visible to supervisors on a dashboard.
At the moment, Mentore plans to address transparency concerns about the app by allowing workers to access their current and historical “injury risk” scores, but it’s unclear whether they can do anything to challenge the real-time metrics on the watch itself.
The app can also differentiate between “intense active motion” and “mild active motion.” According to Mentore’s site, this kind of data can “improve productivity, turnover, and safety at scale in real-time.” […]
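Mentore hasn’t published its algorithm, but the pipeline described above (sensor readings in, a motion class and an alert out) can be sketched roughly as follows. The thresholds, field names, and classification rules here are all hypothetical, chosen only to illustrate the shape of such a system.

```python
from dataclasses import dataclass

@dataclass
class MotionSample:
    angular_speed: float  # wrist rotation, degrees/second (gyroscope)
    peak_force_g: float   # peak acceleration of the movement, in g

# Hypothetical limits -- Mentore has not published its actual thresholds.
SPEED_LIMIT = 300.0
FORCE_LIMIT = 2.5

def assess(sample: MotionSample) -> dict:
    """Classify one movement and decide whether to alert the wearer."""
    intense = sample.angular_speed > 150.0 or sample.peak_force_g > 1.5
    alert = sample.angular_speed > SPEED_LIMIT or sample.peak_force_g > FORCE_LIMIT
    return {
        "motion": "intense active motion" if intense else "mild active motion",
        "alert": alert,  # wearer warned; supervisors see it on the dashboard
    }
```

Even this toy version makes the workers’ concern concrete: every value it computes is visible to a supervisor in real time, while the wearer only sees the alert.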
According to Investigate Midwest, the system has already been installed on about 10,000 devices across five industries in four different countries: the U.S., Canada, Chile, and Japan.
The move mirrors similar controversial tracking practices that many other companies, including Amazon, have tried to implement over the years in a bid to increase worker productivity.
“Besides the tracking and the invasion of somebody’s privacy, there is this real safety and health issue,” Mark Lauritsen, an international vice president of the United Food and Commercial Workers Union (UFCW) and head of the union’s meatpacking division, told Motherboard.
He says that requiring workers to wear a watch or any other jewelry would be in violation of health and safety policies, opening them up to workplace injury and potentially leading to contamination of the product.
So what’s the upshot for you? “We’re not going to allow their need to have more money and more productivity endanger people’s lives and limbs just so they can make an extra dollar,” Lauritsen said. “It’s just not gonna happen.”
Global: A Blisteringly Bad Idea? ‘Rewind AI’ records everything you do on your Mac so you can “refresh your memory”
As the name suggests, Rewind AI records absolutely everything you have seen, said, or heard while using your Mac. With these recordings, users can easily go back to a specific time of day to re-watch it. But the app goes way beyond just saving a long screen recording.
There are not many details about the technologies behind the app, but the company’s website says that it uses “mind-boggling compression” that can record a huge amount of data without significant loss of quality. The developers claim that a 10.5GB recording becomes a 2.8MB file. This allows Rewind AI to “store years of recordings.”
The app also uses OCR to identify text content combined with speech recognition to provide powerful search capabilities. In a brief demo video shared by the company, we can see that the app’s interface has a search bar where users can go back to a specific time of day simply by typing in a word or phrase.
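The search side of this, stripped of the OCR and compression, is essentially an inverted index from words seen on screen to timestamps. A toy sketch, assuming OCR and speech recognition have already produced plain text (the class and method names are ours, not Rewind’s):

```python
from collections import defaultdict

class ScreenTextIndex:
    """Toy version of the search layer: map words to the times they appeared.

    A real system would feed OCR and speech-recognition output into this;
    here we just index plain strings keyed by a timestamp label.
    """

    def __init__(self):
        self._index = defaultdict(set)

    def add(self, timestamp: str, text: str) -> None:
        """Index every word captured at this moment."""
        for word in text.lower().split():
            self._index[word].add(timestamp)

    def search(self, word: str) -> list:
        """Return the timestamps you'd 'rewind' to for this word."""
        return sorted(self._index[word.lower()])
```

The privacy problem is visible right in the data structure: one searchable table of everything you have seen or said, sitting on disk.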
So what’s the upshot for you? This product has not had a release date announced yet. In some respects, we hope it never does.
Global: Red Cross Seeks ‘Digital Emblem’ To Protect Against Hacking
The International Committee of the Red Cross said Thursday it is seeking support to create a “digital red cross/red crescent emblem” that would make clear to military and other hackers that they have entered the computer systems of medical facilities or Red Cross offices.
The Geneva-based humanitarian organization said it was calling on governments, Red Cross and Red Crescent societies, and IT experts to join forces in developing “concrete ways to protect medical and humanitarian services from digital harm during armed conflict.”
For over 150 years, symbols such as the red cross have been used to make clear that “in times of armed conflict, those who wear the red cross or facilities and objects marked with them must be protected from harm,” the ICRC said.
That same obligation should apply online, the organization said, noting that hacking operations in conflicts were likely to increase as more militaries develop cyber capabilities.
The organization said that for the proposed “digital emblem” to become reality, nations worldwide would have to agree on its use and make it part of international humanitarian law alongside existing humanitarian insignia.
It hopes the emblem would identify the computer systems of protected facilities much as a red cross or crescent on a hospital roof does in the real world.
The International Committee of the Red Cross said that it has identified three technical possibilities: a DNS-based emblem that would use a special label to link it to a domain name; an IP-based emblem; and an ADEM, or authenticated digital emblem, a system that would use certificate chains to signal protection.
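To make the DNS-based idea concrete: a protected system would carry an agreed label in its domain name, and an attacker’s tooling could check for it before proceeding. Nothing has been standardized, so the label below is entirely hypothetical; it only illustrates the mechanism.

```python
# Hypothetical protective label -- no actual label has been agreed or
# standardized; this is purely illustrative.
EMBLEM_LABEL = "emblem"

def carries_dns_emblem(hostname: str) -> bool:
    """DNS-based variant: look for the protective label among the name's labels.

    A real scheme would pair this with cryptographic verification (the ADEM
    variant uses certificate chains) so the label can't simply be spoofed
    by anyone seeking protection they aren't entitled to.
    """
    labels = hostname.rstrip(".").lower().split(".")
    return EMBLEM_LABEL in labels
```

The sketch also shows the scheme’s core weakness: a bare label is trivially readable by anyone, including those looking for targets rather than things to spare.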
So what’s the upshot for you? We don’t want to rain on the parade, but if you look at some of the atrocities committed by nation-states as of late, that digital cross might just serve as a target to aim at.
Global: Google ad for GIMP.org served info-stealing malware via lookalike site
Searching for ‘GIMP’ on Google as recently as last week would show visitors an ad for ‘GIMP.org,’ the official website of the well-known graphics editor, GNU Image Manipulation Program.
This ad would appear to be legitimate as it would state ‘GIMP.org’ as the destination domain.
But clicking on it drove visitors to a lookalike phishing website that provided them with a 700 MB executable disguised as GIMP which, in reality, was malware.
Reddit user ZachIngram04 earlier shared the development stating that the ad previously took users to a Dropbox URL to serve malware, but was soon “replaced with an even more malicious one” which employed a fake replica website ‘gilimp.org’ to serve malware.
BleepingComputer observed another domain, ‘gimp.monster,’ related to this campaign.
To pass off the trojanized executable as GIMP in a believable manner to the user, the threat actor artificially inflated the malware, which is otherwise under 5 MB in size, to 700 MB by a simple technique known as binary padding.
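Binary padding is as crude as it sounds: the real payload is followed by an enormous run of filler bytes, which inflates the file past the size limits of many sandboxes and scanners. A simple heuristic sketch for spotting it (ours, not any vendor’s detection logic):

```python
def trailing_filler_ratio(data: bytes) -> float:
    """Fraction of a file occupied by a trailing run of one repeated byte.

    Padded binaries like the fake GIMP installer carry hundreds of MB of
    filler (often null bytes) after the real code, so a ratio near 1.0 on
    a very large file is a red flag. This is a heuristic only, not a
    malware detector.
    """
    if not data:
        return 0.0
    filler = data[-1]
    i = len(data)
    while i > 0 and data[i - 1] == filler:
        i -= 1
    return (len(data) - i) / len(data)
```

In the GIMP case the ratio would be extreme: under 5 MB of payload inside a 700 MB file means well over 99% filler.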
So what’s the upshot for you? It still isn’t clear if this instance was a slip-up caused by a potential bug in Google Ad Manager that allowed malvertising.
US: Hundreds of U.S. news sites push malware in supply-chain attack
“The media company in question is a firm that provides both video content and advertising to major news outlets. [It] serves many different companies in different markets across the United States.”
In total, the malware has been installed on sites belonging to more than 250 U.S. news outlets, some of them being major news organizations, according to security researchers at enterprise security firm Proofpoint.
We track this actor as #TA569. TA569 historically removed and reinstated these malicious JS injects on a rotating basis. Therefore the presence of the payload and malicious content can vary from hour to hour and shouldn’t be considered a false positive.
So what’s the upshot for you? Frustratingly, neither the source media company nor the chained news sites are named, so for the moment pause your downloads of .zip files from any news sites.
US: The Cost of Ransomware Payments Tops $1 Billion a Year
The US Treasury Department this week said US financial institutions facilitated ransomware payments totaling nearly $1.2 billion in 2021—a 200 percent increase since 2020.
The report landed amid an international White House summit aiming to combat the rise of ransomware, a type of malware that allows attackers to encrypt a target’s files and hold them for ransom until the victim pays.
Himamauli Das, acting director of the Treasury Department’s Financial Crimes Enforcement Network, said in a statement that “ransomware—including attacks perpetrated by Russian-linked actors—remain a serious threat to our national and economic security.”
While $1.2 billion in payments is already painful enough, the number does not take into account the costs and other financial consequences that come with a ransomware attack outside of the payment itself.
So what’s the upshot for you? We hope that this precipitates a little more action related to these types of events. You get some focus if you are a large corporation, but smaller firms really are on their own.
Global: Adding AI to your spreadsheets
Shubhro Saha figured out how to run GPT-3 prompts in Google Sheets, allowing you to automatically sanitize data, categorize feedback, etc.
On Twitter, Shubhro shares a spreadsheet mockup where he’s got a wedding gift list in a spreadsheet.
It lists who gave what and contains notes about their interactions at the wedding.
He shows the AI component creating the “Thank you” letters for each guest.
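The core of the trick is just building one GPT-3 prompt per spreadsheet row from the guest, gift, and notes columns, then sending each prompt to the API. A sketch of that prompt-building step, with wording and function names of our own invention (Shubhro’s actual sheet formula may differ):

```python
def thank_you_prompt(guest: str, gift: str, note: str) -> str:
    """Build the per-row prompt a =GPT3(...)-style sheet function might send."""
    return (
        f"Write a warm, three-sentence wedding thank-you note to {guest} "
        f"for their gift of {gift}. Work in this detail from the day: {note}."
    )

def prompts_for_sheet(rows: list) -> list:
    """Map each (guest, gift, note) spreadsheet row to a prompt string."""
    return [thank_you_prompt(*row) for row in rows]
```

Each returned string would then go to the completion API, with the model’s reply written back into a “Thank you letter” column.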
So what’s the upshot for you? It’s a great demo. We joined the waitlist to test and may have more detail in future updates.
And our quote of the week: “People never lie so much as after a hunt, during a war, or before an election.” Otto von Bismarck
That’s it for this week. Stay safe, stay secure, and see you in se7en.