

Questions

Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 31 to 40.

        With Congress stalled, the task of disciplining AI has sloughed to courts and states. After a federal judge let a wrongful-death suit against Character.AI and Google proceed, many took it as a harbinger of tort-centric governance. [I] For open-source communities – whose code, weights, and prompts propagate at internet speed – the question is not merely moral but juridical: could maintainers or small deployers be cast as negligent when anthropomorphic design, weak guardrails, and adolescent users intermix in volatile ways rarely anticipated ex ante?

        Historically, tort law has adapted, toggling between negligence and strict liability. Negligence asks whether a developer exercised reasonable care given foreseeability, gravity of harm, and the burden of safeguards. Strict liability dispenses with that inquiry when activities are abnormally dangerous. [II] Rhode Island’s S0358 flirts with a quasi-strict approach and a “rebuttable presumption of mental state,” easing plaintiffs’ burdens where opaque models frustrate proof. For open-source actors, such presumptions could transmogrify distribution into de facto risk, even when upstream contributors acted with evident prudence.

        States are also experimenting with ex-ante compliance levers. New York’s RAISE Act would police “frontier” models and mandate written safety protocols; California’s SB 813 moots a safe harbor for developers who align with third-party standards, even if uncodified. [III] By dangling liability shields for those who merely ‘comply’ with third-party protocols, lawmakers risk rewarding paperwork over prudence. In heterogeneous, fast-iterating open-source ecosystems, that dynamic could induce accountability theatre, privileging template audits over context-sensitive threat modeling, or nudging small teams to withdraw rather than navigate proliferating checklists.

        Fragmentation compounds the dilemma. With more than a thousand state-level AI bills, nationwide deployers face jurisdictional landmines, while small open-source projects lack compliance muscle. [IV] Some urge federal preemption – a ten-year state moratorium and uniform standards – arguing clarity will deter forum-shopping and stabilize incentives; others warn premature centralization could ossify best practices before they mature. Meanwhile, shortages of independent auditors and uneven Attorney-General expertise threaten erratic enforcement. In such a polycentric landscape, tort suits may, by default, calibrate responsibility post hoc.

(Adapted from https://ai-frontiers.org/articles/options-for-ai-liability)

Question 31. The word harbinger in paragraph 1 mostly means ______.

A. ominously predictive
B. loosely descriptive
C. mildly celebratory
D. oddly retrospective
