
AI Insights

Ninety laptops, millions of dollars: US woman jailed over North Korea remote-work scam

In March 2020, about the time the Covid pandemic started, Christina Chapman, a woman who lived in Arizona and Minnesota, received a message on LinkedIn asking her to “be the US face” of a company and help overseas IT workers gain remote employment.

As working from home became the norm for many people, Chapman was able to find jobs for the foreign workers at hundreds of US companies, including some in the Fortune 500, such as Nike; “a premier Silicon Valley technology company”; and one of the “most recognizable media and entertainment companies in the world”.

The employers thought they were hiring US citizens. They were actually people in North Korea.

Chapman was participating in the North Korean government’s scheme to deploy thousands of “highly skilled IT workers” by stealing identities to make it look like they were in the US or other countries. The workers have collected millions of dollars to fund the government’s nuclear weapons development, according to the US justice department and court records.

Chapman’s bizarre story – which culminated in an eight-year prison sentence – is a curious mix of geopolitics, international crime and one woman’s tragic tale of isolation and working from home in a gig-dominated economy where increasingly everything happens through a computer screen and it is harder to tell fact from fiction.

The secret North Korean workers, according to the federal government and cybersecurity experts, not only help the US’s adversary – a dictatorship which has been hobbled by international sanctions over its weapons program – but also harm US citizens by stealing their identities and potentially hurt domestic companies by “enabling malicious cyber intrusions” into their networks.

“Once Covid hit and everybody really went virtual, a lot of the tech jobs never went back to the office,” said Benjamin Racenberg, a senior intelligence manager at Nisos, a cybersecurity firm.

“Companies quickly realized: I can get good talent from anywhere. North Koreans and other employment fraudsters have realized that they can trick hiring systems to get jobs. I don’t think that we have done enough as a community to prevent this.”

To run the schemes, the North Koreans need facilitators in the United States, because the companies “aren’t going to willingly send laptops to North Korea or even China”, said Adam Meyers, head of counter-adversary operations for CrowdStrike, a cybersecurity firm.

“They find somebody that is also looking for a gig-economy job, and they say, ‘Hey, we are happy to get you $200 per laptop that you manage,’” said Meyers, whose team has published reports on the North Korean operation.

Chapman grew up in an abusive home and drifted “between low-paying jobs and unstable housing”, according to documents submitted by her attorneys. In 2020, she was also taking care of her mother, who had been diagnosed with renal cancer.

About six months after the LinkedIn message, Chapman started running what law enforcement officials describe as “laptop farms”.

In addition to hosting computers, she helped the North Koreans pose as US citizens by validating stolen identity information; sent some laptops abroad; logged into the computers so that the foreign workers could connect remotely; and received paychecks and transferred the money to the workers, according to court documents.

Meanwhile, the North Koreans created fictitious personas and online profiles to match the job requirements for remote IT worker positions. They often got the jobs through staffing agencies.

In one case, a “top-five national television network and media company” headquartered in New York hired one of the North Koreans as a video-streaming engineer.

The person posing as “Daniel B” asked Chapman to join a Microsoft Teams meeting with the employer so that the co-conspirator could also join. The indictment does not list victims’ full names.

“I just typed in the name Daniel,” Chapman told the person in North Korea, according to court records of an online conversation. “If they ask WHY you are using two devices, just say the microphone on your laptop doesn’t work right.”

“OK,” the foreign actor responded.

“Most IT people are fine with that explanation,” Chapman replied.

Chapman was aware that her actions were illegal.

“I hope you guys can find other people to do your physical I-9s. These are federal documents. I will SEND them for you, but have someone else do the paperwork. I can go to FEDERAL PRISON for falsifying federal documents,” Chapman wrote to a group of her co-conspirators.

Chapman was also active on social media. In a video posted in June 2023, she talked about having breakfast on the go because she was so busy, and her clients were “going crazy!”, Wired reported.

Behind Chapman were racks with at least a dozen open laptops with sticky notes. In October 2023, federal investigators raided her home and found 90 laptops. In February this year, she pleaded guilty to conspiracy to commit wire fraud, aggravated identity theft and conspiracy to launder monetary instruments.

Over the three years that Chapman worked with the North Koreans, some of the employees received hundreds of thousands of dollars from a single company. In total, the scheme generated $17m for Chapman and the North Korean government.

The fraudsters also stole the identities of 68 people, leaving those victims with false tax liabilities, according to the justice department.

In a letter to the court before her sentencing, Chapman thanked the FBI for arresting her because she had been “trying to get away from the guys that I was working with for awhile [sic] and I wasn’t really sure how to do it”.

“The area where we lived didn’t provide for a lot of job opportunities that fit what I needed,” Chapman wrote. “To the people who were harmed, I send my sincerest apologies. I am not someone who seeks to harm anyone, so knowing that I was a part of a company that set out to harm people is devastating to me.”

Last week, US district court judge Randolph Moss sentenced Chapman to more than eight years in prison, ordered her to forfeit $284,000 that was to be paid to the North Koreans, and fined her $176,000.

Chapman and her co-conspirators were not the only ones conducting such fraud. In January, the federal government also charged two people in North Korea, a Mexican citizen and two US citizens for a scheme that helped North Korean IT workers land jobs with at least 64 US companies and generated at least $866,000 in revenue, according to the justice department.

Racenberg, of Nisos, said he expected cybercriminals to use artificial intelligence to “get better and better” at performing such schemes.

Companies should conduct “open-source research” on applicants because oftentimes the fraudsters reuse résumé content, Racenberg said.

“If you put the first few lines of the résumé in, you might find two, three other résumés online that are exactly the same with these very similar companies or similar dates,” Racenberg added. “That should raise some flags.”

During an interview, if there is background noise that sounds like a call center or if the applicant refuses to remove a fake or blurred background, that could also be cause for concern, Meyers, of CrowdStrike, said.

And companies should ask new hires to visit the office to pick up their laptop rather than mail it to them because that allows the company to see if the person who shows up is the same one you interviewed, Racenberg said.

Five years after the pandemic, more companies have also started to require employees to return to the office at least part time. If all corporations did that, would it eliminate the threat?

“It’s going to prevent all of this from happening, yes,” Racenberg said. “But are we going to go back to that? Probably not.”



Children are asking AI for advice on sex and mental health, new report finds

With AI regulation still up in the air, a new report reveals concerning trends in how children interact with artificial intelligence — and they’re not just using it for homework help.

The study shows some children and teens are turning to AI chatbots for conversations about sensitive topics like sex. The report also finds they spend more time chatting online with AI than texting their friends.

Experts warn that some kids may be confusing chatbots with actual human relationships.

Children having longer conversations with AI than friends

As the use of artificial intelligence continues to spread, a growing number of children are turning to it for companionship.

Those are the findings in the new report from the company Aura, which provides digital protection services. They found some children are having conversations with AI chatbots that are 10 times longer than the texts they send their friends.

Aura found that messages to GenAI companion apps averaged 163 words each; the typical iMessage is just 12 words.

“We have kids eight, 10 years old that we’re seeing in our data that are using these platforms,” said Aura’s chief medical officer Dr. Scott Kollins.

In analyzing how kids are using the tech, Aura found AI interactions ranging from homework and mental health themes to shared personal information and even sexual and romantic roleplaying.

“The concern that raises for me as a psychologist, but also as a parent, is that it’s clearly serving some purpose for the kids from a social interaction perspective,” Kollins said. “But if that becomes a substitute for learning how to interact and engage in real life, that presents some big unknowns and potential problems for kids’ development.”

Experts warn of developmental risks

Experts say those potential problems can arise because children lack the emotional maturity to understand interactions with AI.


“The thing about children is they have more magical thinking than adults, so they can really attach to an AI chatbot and think that it’s human,” Dr. Joanna Parga-Belinkie said.

Parga-Belinkie is a pediatrician and neonatologist. She’s not involved in Aura’s research but says chatbots can be risky for young users.

“AI will feed a user information it thinks that user wants to hear,” she explained, “and there are just not a lot of safeguards in place to stop AI from telling children false, harmful, over-sexualized, or even violent things.”

Parents urged to set boundaries

Experts say it’s important for parents to take steps to talk to their children about safe and appropriate uses for AI.

Kollins points out that while many people are familiar with ChatGPT and a few other popular AI chatbots, in reality there are hundreds of AI tools out there. He says parents need to make sure they know which apps their child is downloading so they can set appropriate boundaries.


Uncertainty over AI policy

Organizations such as the nonprofit Common Sense Media are pushing for a ban on Meta’s AI chatbot for kids under the age of 18.

This month, the first lady, Melania Trump, called on the private and public sectors to prepare children for the growth of AI.

For now, uncertainty remains for AI policies geared toward children. Experts advise parents to monitor their children’s phones, ask questions, and talk about the dangers of sharing personal information.

This story was reported on-air by a journalist and has been converted to this platform with the assistance of AI. Our editorial team verifies all reporting on all platforms for fairness and accuracy.


Sevierville uses new artificial intelligence system to fix potholes, rate roadways – WBIR
