
Questions

Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 33 to 40.

        “Deepfakes” describe synthetic images or videos produced by deep learning, in which algorithms ingest vast numbers of examples and generate outputs with unsettling verisimilitude. As infants learn by trial and error, so do models that, once trained, can mimic patterns without “seeing” reality. The process is often concealed from laypeople, and yet the consequences – if such media were trusted uncritically – could be profound. Although the technology is frequently showcased as innovation, it has also been framed in the passive voice: harms are incurred, norms are unsettled, and trust is diluted.

        Unlike playful face-swap filters or clumsy Photoshop hoaxes – typically benign, self-evident, and intended for amusement – high-grade deepfakes are dangerous precisely because they are hard to spot. The casual edits that once circulated as jokes could be laughed off; deepfakes, by contrast, may pass as authentic even to trained eyes. If the public confuses fabrication with documentary record, deliberation is corrupted, reputations are damaged, and accountability is displaced, whereas the tool that enables the fakery remains invisible to most observers.

        The stakes range from petty fraud to geopolitical chaos. Personalized clips can depict a relative begging for money, while counterfeit speeches by leaders might inflame unrest or catalyze war. When they proliferate across feeds, the velocity of misinformation outpaces the capacity for verification; by the time a correction is issued, the lie has already traveled. Should emergency systems be spoofed, officials could be forced into reactive postures, and citizens – misled by plausible footage – might act on fabricated cues.

        Vigilance is teachable, and detection is becoming algorithmic. AI systems can be trained to notice artifacts that humans typically miss, which could expose forgeries. Yet media literacy still matters: users should interrogate extraordinary claims, verify sources, and pause before sharing. If safeguards are adopted early, damage may be contained; if not, the asymmetry between forgers and fact-checkers will widen. Although deepfakes are not yet ubiquitous, their prevalence and polish are likely to increase, making prudent skepticism indispensable.

(Adapted from University of Virginia Information Security: “What the heck is a deepfake? Can you really believe what you see?”)

Question 33. Which of the following is NOT mentioned in paragraph 2 as a reason ordinary edits seem harmless?

A. They are designed mainly for amusement.

B. They are easy for viewers to spot as fake.

C. They typically appear as obvious, joking alterations.

D. They require expert authorization from platforms.
