Task
Read the passage and mark the letter A, B, C or D on your answer sheet to indicate the best answer to each of the following questions from 31 to 40.
As AI seeps into governance and markets, jurists debate whether a limited legal personality should attach to highly autonomous systems. Rather than a binary yes/no, many propose a spectrum calibrating specific rights and duties to functional capacities, drawing a guarded analogy to corporate personhood. On this cautious middle path, the law might allow narrow capacities – such as being sued or having assets held in trust – without imputing moral agency or dignity. [I] The ambition is managerial, not metaphysical: to allocate responsibility where it can be practically enforced.
Serious obstacles remain. AI is a protean family of tools with divergent architectures, risk profiles, and human entanglements; a one-size-fits-all regime would be blunt and unjust. Limited-liability logics also falter where systems operate with real autonomy, obscuring fault and making accountability diffuse. [II] Moral personhood diverges from legal personhood, and anthropomorphic labels – “smart”, “self-directed” – can seduce lawmakers into over-attributing agency. The better question is not whether AI “deserves” rights but how law should supervise artifacts that sometimes act without immediate human supervision.
The present consensus is deliberately modest: treat AI as a product, keep humans answerable, and adapt remedies for novel harms. Granting independent legal personhood to AI would be premature so long as accountability still traces back to human designers and operators. Precedents exist: corporate law can lift the veil for fraud or misfeasance, suggesting targeted revocation or shutdown powers for AIs that cause harm. [III] Meanwhile, EU debates on AI liability remain wary of anything approaching full personhood, favoring incremental procedural adjustments.
Looking ahead, two currents tug in opposite directions. Brain-machine interfaces may entwine computation with cognition, and sustained social participation could bestow de facto legitimacy on useful systems. Even so, most analysts expect no full personhood within the next two decades; prudence counsels constraint while capabilities race forward. [IV] The likely path is iterative: sharpen liability, refine evidentiary rules, and reserve any expansion of status for moments when control, accountability, and public reason can be credibly guaranteed.
(Adapted from CEULI – “Legal Personhood for AI: Challenges and Future Possibilities”)
Question 31. The word protean in paragraph 2 mostly means ______.
A. highly variable B. narrowly fixed
C. painfully repetitive D. mildly predictable
