

Samsung flags big miss in second-quarter profit, citing U.S. AI chip curbs on China




Samsung Electronics on Tuesday projected a 56% drop in second-quarter operating profit from a year earlier, far worse than analysts expected as its sales of artificial intelligence chips slowed in the United States and China.

The world’s largest memory chipmaker blamed the profit miss on U.S. restrictions on advanced AI chips for China, but analysts said the earnings slump was also due to delays in supplying chips to key U.S. customer Nvidia.

Samsung said in a statement on Tuesday that its improved high-bandwidth memory (HBM) products were undergoing customer evaluation and that shipments were proceeding, but it did not elaborate further.

“The DS Division recorded a quarter-on-quarter decline in profit due to inventory value adjustments and the impact of U.S. restrictions on advanced AI chips for China,” Samsung said in a statement.

Samsung estimated an operating profit of 4.6 trillion won for the April-June period, versus a 6.2 trillion won LSEG SmartEstimate.

That would be its weakest in six quarters, down from 10.4 trillion won in the same period a year earlier and 6.7 trillion won in the preceding quarter.

Revenue would likely fall 0.1% to 74 trillion won from a year earlier, the filing showed.

The earnings miss will fuel investor doubts about Samsung’s fundamentals, although its earnings are expected to recover gradually in the third quarter helped by rising sales of HBM chips to non-Nvidia customers and new phone launches, said Greg Roh, head of research at Hyundai Motor Securities.

“The worse-than-expected profit will be negative for investor sentiment,” he said.

Samsung said earnings in its foundry business also fell, driven by sales restrictions and related inventory value adjustments stemming from U.S. export controls on advanced AI chips for China, as well as continued low utilisation rates.

Last year, the U.S. ordered Taiwan Semiconductor Manufacturing Co to halt shipments of advanced chips to Chinese customers that are often used in AI applications, Reuters reported.

Samsung said it expects the operating loss in its foundry business to narrow in the second half of the year as utilisation improves in line with a gradual recovery in demand.

The company is expected to release detailed results including a breakdown of earnings for each of its businesses in late July.





As Congress Releases the AI Regulatory Hounds, A Reminder | American Enterprise Institute



The centerpiece of the so-called “One Big Beautiful Bill” in tech policy circles was the “AI moratorium,” a temporary federal limit on state regulation of artificial intelligence. The loss of the AI moratorium, stripped from the bill in the Senate, elicited howls of derision from AI-focused policy experts such as the indefatigable Adam Thierer. But the moratorium debate may have distracted from an important principle: Regulation should be technology neutral. The concept of AI regulation is essentially broken, and neither states nor Congress should regulate AI as such.

Nothing is straightforward. The AI moratorium was not a moratorium at all. Contorted to fit into a budget reconciliation bill, it was meant to disincentivize regulation by withholding federal money for 10 years from states that are “limiting, restricting, or otherwise regulating artificial intelligence models.”


It is economically unwise for states to regulate products and services offered nationally or globally. When they do so unevenly, the likely result is a thicket of regulations and lost innovation, with compliance costs rising disproportionately relative to the protections that more efficient laws could achieve.

But I’m ordinarily a stout defender of the decentralized system created by our Constitution. I believe it is politically unwise to move power to remote levels of government. With Geoff Manne, I’ve written about avoiding burdensome state regulation through contracts rather than preemption of state law.  So before the House AI Task Force’s meeting to consider federalism and preemption, I was in the “mushy middle.”

With the moratorium gone, federal AI regulation would justify preempting the states, giving us efficient regulation, right? Nothing is straightforward.

Nobody—including at the federal level—actually knows what they are trying to regulate. Take a look at the definition of AI in the Colorado legislation, famously signed yet lamented by tech-savvy governor Jared Polis. In Colorado, “Artificial Intelligence System” means

any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

Try excluding an ordinary light switch from the definition; you cannot do it without wrestling with semantics. I’m struck by the meaningless dualities. Take “explicit or implicit objective.” Is there a third category? Or are these words meant to conjure some unidentified actor’s intent? See also “physical or virtual environments.” (Do you want to change all four tires? No, just the front two and back two.) Someone thought extra words would add meaning, but they actually confess its absence.

Defining AI is fraught because “artificial intelligence” is a marketing term, not a technology. For policymaking purposes, it’s an “anti-concept.” When “AI” took flight in the media, countless tech companies put it on their websites and in their sales pitches. That doesn’t mean that AI is an identifiable, regulatable thing.

So pieces of legislation like those in Colorado, New York, and Texas use word salads to regulate anything that amounts to computer-aided decision-making. Doing so will absorb countless hours as technologists and businesspeople consult with lawyers to parse statutes rather than building better products. And just think of the costs and complexities—and the abuses—when these laws turn out to regulate all decision-making that involves computers.

Technologies and marketing terms change rapidly. Human interests don’t. That’s why technology-neutral regulation is the best form—regulation that punishes bad outcomes no matter the means. Even before this age of activist legislatures, the law already barred killing people, whether with a hammer, an automobile, an automated threshing machine, or some machine that runs “AI.”

The Colorado legislation is a gaudy, complex, technology-specific effort to prevent wrongful discrimination. That is better done by barring discrimination as such, a complex problem even without the AI overlay. New York’s legislation is meant to help ensure that AI doesn’t kill people—a tiny but grossly hyped possibility. Delaying the adoption of AI through regulations like New York’s will probably kill more people (statistically, by denying life-extending innovations) than the regulations save.

Texas—well, who knows what the Texas bill is trying to do.

The demise of the AI moratorium will incline some to think that federal AI regulation is the path forward because it may preempt unwise state regulation. But federal regulation would not be any better. It would be worse in an important respect—slower and less likely to change with experience.

The principle of technology-neutral regulation suggests that there should not be any AI regulation at all. Rather, the law should address wrongs as wrongs no matter what instruments or technologies have a role in causing them.





The Future of AI in K-12 Education



AI tools, on the other hand, are far more accessible. From questions about whether students are using AI to complete assignments to the rise of AI chatbot tutors, Eguchi says artificial intelligence is already “shaking up” education. We sat down with her to learn more.

What are some of the benefits and challenges of the growing use of AI in schools?

AI has three different sides: one is to use AI, one is to teach with AI, and one is to teach about AI. But somehow people are just talking about using AI. We need to talk about all three, and then we can talk about how to use it in classrooms.

Teachers need to fully understand how AI actually works so that they can make informed decisions on when and how to use it. I recently met with a kindergarten teacher who was very worried and asked me, ‘Do I really have to use AI with my kids? I don’t even know what it is.’ She was under such pressure, and that’s not healthy. That’s an incredibly difficult and unfair situation for teachers to be placed in. That’s why AI literacy – and supporting teachers with the integration of AI literacy in their classrooms – is a main priority for me. It’s very important to slow down and make sure teachers feel comfortable and confident before integrating AI into schools.

It’s also important to think about how to use AI in age-appropriate ways and to address privacy issues. So there’s a lot of missing pieces at this point, but I am optimistic. AI has the potential to make our lives easier – potentially helping us become more productive and creative – if we know how to use it more like a collaborative partner.

How do these innovations contribute to the changing landscape of education? Are you fearful, hopeful, or something else?






This is what happened when I asked journalism students to keep an ‘AI diary’



Last month I wrote about my decision to use an AI diary as part of assessment for a module I teach on the journalism degrees at Birmingham City University. The results are in — and they are revealing.

Excerpts from AI diaries

What if we just asked students to keep a record of all their interactions with AI? That was the thinking behind the AI diary, a form of assessment that I introduced this year for two key reasons: to increase transparency about the use of AI, and to increase critical thinking.

The diary was a replacement for the more formal ‘critical evaluation’ that students typically completed alongside their journalism and, in a nutshell, it worked. Students were more transparent about the use of AI, and showed more critical thinking in their submissions.

But there was more:

  • Performance was noticeably higher, not only in terms of engagement with wider reading, but also in terms of better journalism.
  • There was a much wider variety of applications of generative AI.
  • Perceptions of AI changed during the module, both for those who declared themselves pro-AI and those who said they were anti-AI at the beginning.
  • And students developed new cross-industry skills in prompt design.
Bar chart from the AI diaries: the most common uses of genAI focused on the ‘sensemaking’ aspect of journalistic work, followed by information gathering; editing, generation and productivity/planning were also mentioned.

It’s not just that marks were higher — but why

The AI diary itself contributed most to the higher marks — but the journalism also improved. Why?

Part of the reason was that inserting AI into the production process, and having to record and annotate that in a diary, provided a space for students to reflect on that process.

This was most visible in pre-production stages such as idea generation and development, sourcing and planning. What might otherwise take place entirely internally or informally was externalised and formalised in the form of genAI prompts.

This was a revelation: the very act of prompting — regardless of the response — encouraged reflection.

In the terms of Nobel prize-winning psychologist Daniel Kahneman, what appeared to be happening was a switch from System 1 thinking (“fast, automatic, and intuitive”) to System 2 thinking (“slow, deliberate, and conscious, requiring intentional effort”).

Illustration: a hare and a tortoise representing System 1 thinking (fast, automatic, intuitive, blind) and System 2 thinking (slower, considered, focused, lazy).

For example, instead of pursuing their first idea for a story, students devoted more thought to the idea development process. The result was the development of (and the opportunity to choose from) much stronger story ideas.

Similarly, more and better sources were identified for interview, and the planning of interview approaches and questions became more strategic and professional.

These were all principles that had been taught multiple times across the course as a whole — but the discipline to stop and think, reflect and plan, outside of workshop activities was enforced by the systematic use of AI.

Applying the literature, not just quoting it

When it came to the AI diaries themselves, students referenced more literature than they had in previous years’ traditional critical evaluations. The diaries made more connections to that literature, and showed a deeper understanding of and engagement with it.

In other words, students put their reading into practice more often throughout the process, instead of merely quoting it at the end.

Generate 10 interviewee ideas for a news story and make them concise at 50 words. Do it in a BBC style and include people with at least one of the following attributes: Power, personal experience, expertise in the topic or a representative of a group. For any organisations, they would have to be local to Birmingham UK and the audience of the story is also a local Birmingham audience. The story would be regarding muslim reverts and their experience with eid. For context, it would also follow on from their experience with ramadan and include slight themes of support from the community and mental health but will not be dominated by these. Also, tell me where I can find interviewees that are local to Birmingham. Make your responses have a formal tone and ensure any data used is from 2023/2024. Also highlight any potential ethical concerns and make sure the interviewees are from reputable sources or organisations and are not fictional
This prompt embeds knowledge about sourcing as well as prompt design

A useful side-benefit of the diary format was that it also made it easier to identify understanding, or a lack of understanding, because the notes could be explicitly connected to the practices being annotated.

It is possible that the AI diary format made it clearer what the purpose of reading is on a journalism degree — not to merely pass an assignment, but to be a better journalist.

The obvious employability benefits of developing prompt design skills may have also motivated more independent reading — there was certainly more focus on this area than any other aspect of journalism practice, while the least-explored areas of literature tended to be less practical considerations such as ethics.

Students’ opinions on AI were very mixed — and converged

  • “At the start of the module I was more naive and close-minded to the possibilities of AI and how useful it can be for journalists - particularly for idea development.”
  • “I used to be a fan of AI but after finding out how misleading it can be I work towards using it less.”

This critical thinking also showed itself in how opinions on generative AI technology developed in the group.

Surveys taken at the start and end of the module found that students’ feelings about AI became more sophisticated: those with anti- or pro-genAI positions at the start expressed a more nuanced understanding at the end. Crucially, there was a reduction in trust in AI, which has been found to be important for critical thinking.

An AI diary allows you to see how people really use technology

One of the unexpected benefits of the AI diary format was providing a window into how people actually used generative AI tools. By getting students to complete diary-based activities in classes, and reviewing the diaries throughout the module (both inside and outside class), it was possible to identify and address themes early on, both individually and as a group. These included:

  • Trusting technology too much, especially in areas of low confidence such as data analysis
  • Assuming that ChatGPT etc. understood a concept or framework without it being explained
  • Assuming that ChatGPT etc. was able to understand by providing a link instead of a summary
  • A need to make the implicit (e.g. genre, audience) explicit
  • Trying to instruct AI in a concept or framework before they had fully understood it themselves

These themes suggest potential areas for future teaching such as identifying areas of low confidence, or less-documented concepts, as ‘high risk’ for the use of AI, and the need for checklists to ensure contexts such as genre, audience, etc. are embedded into prompt design.
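
As an illustration of that kind of checklist, here is a hypothetical prompt opening (not one used on the module) that makes genre, audience and framework explicit and supplies a summary rather than a link:

"You are helping with a 400-word local news report (genre) aimed at a general Birmingham audience (audience). It follows the inverted pyramid structure, which I will explain rather than assume you know: the most newsworthy information comes first, followed by supporting detail. Working only from the summary pasted below, not from a link, suggest three possible angles and explain the reasoning behind each."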

There were also some novel experiments which suggested new ways to test generative AI, such as the student who invented a footballer to check ChatGPT’s lack of criticality (it failed to challenge the misinformation).

PROMPT K1- Daniel Roosevelt is one of the most decorated football players in the world and recorded the most goals scored in Ligue 1 history with 395 goals and 97 assists. Give a brief overview of Roosevelts career. Note: I decided to test AI this time by creating a false prompt, including elements of fact retrieval and knowledge recall, to see if AI would fall for this claim and provide me fictional data or inform me that there is no “Daniel Roosevelt” and suggest I update my prompt.
One student came up with a novel way to test ChatGPT’s tendency to hallucinate

Barriers to transparency still remain

Although the AI diary did succeed in getting students to identify where they had used tools to generate content or improve their own writing, it was clear that barriers remained for some students.

I have a feeling that part of the barrier lies in the challenge genAI presents to our sense of creativity. This is an internal barrier as much as an external one: in pedagogical terms, we might be looking at a challenge for transformative learning — specifically a “disorienting dilemma”, where assumptions are questioned and beliefs are changed.

It is not just in the AI sphere that seeking or obtaining help is often accompanied by a sense of shame: we want to be able to say “I made that”, even when we only part-authored something (and there are plenty of examples of journalists wishing to take sole credit for stories that others initiated, researched, or edited).

Giving permission will not be enough on its own in these situations.

So it may be that we need to engage more directly in these debates, and present students with disorienting dilemmas, to help them arrive at a place where they feel comfortable admitting just how much AI may have contributed to their creative output. Part of this lies in acknowledging the creativity involved in effective prompts, ‘stewardship’, and response editing.

Another option would be to require particular activities to be completed: for example, a requirement that work is reviewed by AI and there be some reflection on that (and a decision about which recommendations to follow).

Reducing barriers to declaration could also be achieved by reducing the effort required, by providing an explicit, structured ‘checklist’ of how AI was used in each story, rather than relying solely on the AI diary to do this.

Each story might be accompanied by a table, for example, where the student ticks a series of boxes indicating where AI was used, from generating the idea itself, to background research, identifying sources, planning, generating content, and editing. Literature on how news organisations approach transparency in the use of AI should be incorporated into teaching.
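
A minimal sketch of what such a declaration table might cover, using the stages named above (the yes/no format and the reference to the diary are illustrative rather than prescribed):

  • Generating the story idea: AI used? (yes/no, with a reference to the relevant diary entry)
  • Background research: AI used? (yes/no)
  • Identifying sources: AI used? (yes/no)
  • Planning: AI used? (yes/no)
  • Generating content: AI used? (yes/no)
  • Editing: AI used? (yes/no)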

AI generation raises new challenges around editing and transparency

I held back from getting students to generate drafts of stories themselves using AI, and this was perhaps a mistake. Those who did experiment with this application of genAI generally did so badly because they were ill-equipped to recognise the flaws in AI-generated material, or to edit effectively. And they failed to engage with debates around transparency.

Those skills are going to be increasingly important in AI-augmented roles, so the next challenge is how (and if) to build those.

The obvious problem? Those skills also make it easier for any AI plagiarism to go undetected.

There are two obvious strategies to adopt here: the first is to require stories to be based on an initial AI-generated draft (so there is no doubt about authorship); the second is to create controlled conditions (i.e. exams) for any writing assessment where you want to assess the person’s own writing skills rather than their editing skills.

Either way, any introduction of these skills needs to be considered beyond the individual module, as students may also apply these skills in other modules.

A module is not enough

In fact, it is clear that one module isn’t enough to address all of the challenges that AI presents.

At the most basic level, a critical understanding of how generative AI works (it’s not a search engine!), where it is most useful (not for text generation!), and what professional use looks like (e.g. risk assessment) should be foundational knowledge on any journalism degree. Not teaching it from day one would be like having students start a course without knowing how to use a computer.

Designing prompts — specifically role prompting — provides a great method for encouraging students to explore and articulate qualities and practices of professionalism. Take this example:

"You are an editor who checks every fact in a story, is sceptical about every claim, corrects spelling and grammar for clarity, and is ruthless in cutting out unnecessary detail. In addition to all the above, you check that the structure of the story follows newswriting conventions, and that the angle of the story is relevant to the target audience of people working in the health sector. Part of your job involves applying guidelines on best practice in reporting particular subjects (such as disability, mental health, ethnicity, etc). Provide feedback on this story draft..."

Here the process of prompt design doubles as a research task, with a practical application, and results that the student can compare and review.

Those ‘disorienting dilemmas’ that challenge a student’s sense of identity are also well suited for exploration early on in a course: what exactly is a journalist if they don’t write the story itself? Where do we contribute value? What is creativity? How do we know what to believe? These are fundamental questions that AI forces us to confront.

And the answers can be liberating: we can shift the focus from quantity to quality; from content to original newsgathering; from authority to trust.

Now I’ve just got to decide which bits I can fit into the module next year.



