Cyber Security Headlines Week in Review: FAA system failure, LastPass lawsuit, ChatGPT writing malware  

This week’s Cyber Security Headlines – Week in Review, January 9-13, is hosted by Rich Stroffolino with our guest, Shaun Marion, CISO, McDonald’s

Cyber Security Headlines – Week in Review is live every Friday at 12:30pm PT/3:30pm ET. Join us each week by registering for the open discussion at CISOSeries.com

FAA system failure delays flights

The US Federal Aviation Administration posted an advisory warning that its United States Notice to Air Missions system “failed,” resulting in estimated flight delays and cancellations impacting hundreds of flights. The NOTAM sends notices of essential information for personnel handling flight operations that isn’t known far enough in advance to be sent by other methods.  As a result, the advisory said it ordered airlines to pause all domestic departures until 9am ET on January 11th in order to “validate the integrity of flight and safety information.” The FAA said its investigating the cause of the issue, the White House press secretary said “there is no evidence of a cyberattack at this point.” The FAA subsequently confirmed it lifted the ground band just before 9am, with flights gradually resuming. Mandiant VP John Hultquist said a cyberattack on the system seemed unlikely, saying the failure likely came from cascading failures across increasingly complex interdependent systems. 

(Reuters)

LastPass hit with lawsuit over August breach

The August data disaster at LastPass just keeps getting worse for the company is now going to the courts. A lawsuit has been filed by an unnamed individual who said LastPass’ failures led to the theft of an unspecified amount of Bitcoin private keys stored in the wallet, which the suit said contained roughly $53,000 in the cryptocurrency. The suit is seeking a jury trial to squeeze damages and restitution out of LastPass for a nationwide class that includes any LastPass users who had data stolen in the breach. In December, LastPass admitted that the attack was more serious than had first been suspected, with attackers gaining access to a cloud storage system to steal user password vaults. 

(The Register)

Trying to write malware with ChatGPT

We already know that OpenAI’s ChatGPT text engine can theoretically write malware. Last month a security researcher successfully got it to describe a basic buffer overflow vulnerability, admittedly with critical syntax errors. Now security researchers at Check Point report dark web hacking forums are experimenting using ChatGPT to help facilitate and support malicious attacks. The researchers say this could open the door for actors with very low levels of technical knowledge to launch attacks, or make sophisticated cyber operations much more efficient and easier. OpenAI’s terms of service ban malware generation and it attempts to block requests to create spam. One poster on the forum said they were able to use ChatGPT to create a Python-based information stealer malware, while another showed how they created Java-based malware to exploit PowerShell. 

(ZDNet)

Russian Turla hackers hijack decade-old malware infrastructure to deploy new backdoors

The Russian cyberespionage group known as Turla has been observed piggybacking on attack infrastructure used by a decade-old malware to deliver its own reconnaissance and backdoor tools to targets in Ukraine. Google-owned Mandiant, said the hijacked servers correspond to a variant of a commodity malware called ANDROMEDA (aka Gamarue) that was uploaded to VirusTotal in 2013. Since the onset of Russia’s military invasion of Ukraine in February 2022, the group has been linked to a string of credential phishing and reconnaissance efforts aimed at entities located in the country, as well as Solar Winds.

(The Hacker News)

Thanks to today’s episode sponsor, AppOmni

Can you name all the third party apps connected to your major SaaS platforms like Salesforce and Microsoft? What about the data these apps can access? After all, one compromised 3rd party app could put your entire SaaS ecosystem at risk. With AppOmni, you get visibility to all third party apps and SaaS-to-SaaS connections — including which end users have enabled them, and the level of data access they’ve been granted. Visit AppOmni.com to request a free risk assessment.

Amazon S3 will now encrypt all new data with AES-256 by default

Amazon Simple Storage Service (S3) will now automatically encrypt all new objects added on buckets on the server side, using AES-256 by default. While the server-side encryption system has been available on AWS for over a decade, the tech giant has enabled it by default to bolster security. Administrators will not have to take any actions for the new encryption system to affect their buckets, and Amazon promises it won’t have any negative performance impact. Two notable examples concerning Amazon S3 storage buckets are the leak of data from 123 million households in December 2017 and the leak of 540 million records of Facebook users in April 2019 in which the data had not been encrypted.

(Bleeping Computer)

Government watchdog cracks federal agency’s passwords

The Office of the Inspector General (OIG) has published a scathing rebuke of security practices employed by the Department of the Interior, which manages the country’s federal land, national parks and multi-billion dollar budget. After the department claimed that it would require more than 100 years to recover its passwords using off-the-shelf password cracking software, the OIG used a rig costing just under $15,000 to crack nearly 14,000 employee passwords (16%) within 90 minutes. The OIG also found that some critical systems and user accounts failed to comply with the government’s own two-factor authentication mandate. The report concluded that poor password practices put the department at risk of a breach that could pose a “high probability” of massive disruption to its operations. The Department of the Interior agreed with most of the OIG’s findings and said it’s “committed” to improving its cybersecurity defenses.

(TechCrunch)

‘Trojan Puzzle’ trains AI assistants to suggest malicious code

Researchers have devised a new poisoning attack, dubbed ‘Trojan Puzzle,’ that trains AI models to learn how to reproduce dangerous payloads. The researchers poisoned an AI training data set of nearly 6 GB of Python code which mimics datasets that AI models pull right from the Internet. Trojan Puzzle avoids detection by actively hiding part of its malicious payload during the training process. The attack relies on the ML to substitute random words until it finally suggests the entire attacker-chosen payload code. After running three training epochs on the AI model, researchers were able to obtain a 21% insecure suggestion rate. Given the rise of coding assistants like GitHub’s Copilot and OpenAI’s ChatGPT, AI training exploits could potentially lead to large-scale supply-chain attacks.

(Bleeping Computer)

Lawsuit claims student loan site inflated membership to entice acquisition

JPMorgan Chase is suing the 30-year-old founder of Frank, a fintech startup it acquired for $175 million, for allegedly lying about its scale and success by creating an enormous list of fake users to entice the financial giant to buy it. Frank offers software aimed at improving the student loan application process for young Americans seeking financial aid. The lawsuit, filed late last year claims that CEO Charlie Javice allegedly created a roster of “fake customers – a list of names, addresses, dates of birth, and other personal information for 4.265 million ‘students’ who did not actually exist,” when in reality, according to the suit, Frank had fewer than 300,000 customer accounts at that time.

(Forbes)