
Task

Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 31 to 40.

How AI News Summarisation Undermines Information Integrity

The growth of artificial intelligence-powered news summarisation services has brought a new level of convenience, but this progress also has a worrying downside: tools that aim to shorten complex information into easy summaries can create distortions that weaken public trust in facts people can check. [I] Recent BBC research was prompted by a major error from Apple Intelligence, which misread a headline about Luigi Mangione and falsely suggested the suspect had shot himself rather than being arrested for murder. The incident highlights how easily leading AI assistants can add false details, link quotes to the wrong people, and get the original article wrong. In its investigation, the BBC was given temporary access to ChatGPT, Copilot, Gemini, and Perplexity and asked them to summarise 100 news prompts. The results were troubling: 51% of responses showed significant issues, including 19% with factual errors and 13% that changed or fully invented quotations attributed to BBC reporting.

[II] Gemini, for example, wrongly claimed that the NHS discourages vaping, despite the NHS promoting its “swap to stop” initiative. Perplexity produced a false timeline that placed TV doctor Michael Mosley’s disappearance in October and his discovery in November, even though he died in June 2024. Taken together, these are not minor slips but a pattern of confident guessing that threatens the relationship between citizens and credible journalism. The risk is amplified because AI summaries often sound authoritative and certain, without the caution that human journalists typically use when details are unclear. BBC News CEO Deborah Turness also warned about a broader system effect: as generative AI becomes more common, information can enter a feedback loop in which people use AI to write messages and others use AI to process them, gradually thinning meaning until the signal becomes mostly noise.

[III] Corporate responses to these findings have often avoided the central issue. An OpenAI spokesperson highlighted the platform’s reach and said the company is committed to improving in-line citation accuracy, but that promise feels weak when set against the failures documented in the BBC’s test. [IV] Regulation remains limited, with few strong mechanisms requiring clear disclosure about model limits or how much outputs rely on pattern-based guessing. At root, these systems are designed to produce plausible text, not guaranteed truth, which makes them risky for tasks where exact factual accuracy matters.

[Adapted from https://www.theregister.com/]

Question 31: Where in the passage does the following sentence best fit?

The impact of this unreliability goes beyond isolated cases of misinformation.

A. [II]        B. [IV]        C. [I]        D. [III]
