Read the following passage about voice cloning technology and mark the letter A, B, C, D on your answer sheet to indicate the best answer to each of the following questions from 31 to 40.
[I] Voice cloning technology has undergone a meteoric rise in sophistication, yet its safeguards remain glaringly inadequate, as evidenced by a recent Consumer Reports investigation. [II] Many of the most prominent AI-driven voice synthesis tools require nothing more than a simple checkbox confirmation to replicate an individual’s voice, making unauthorized impersonation alarmingly accessible to anyone with malicious intent. [III] The perpetrator behind that scheme was met with a hefty $6 million fine, yet the broader regulatory landscape remains nebulous, leaving ample room for misuse and potential exploitation of this rapidly evolving technology. [IV]
The deficiencies in security mechanisms are compounded by the technological prowess of modern AI voice generators, which can replicate vocal nuances with only a few seconds of sample audio. Despite the existence of deepfake detection software, many of these tools falter in distinguishing genuine speech from synthetic imitations, exacerbating concerns surrounding fraudulent applications. Among the six leading AI voice cloning platforms examined, only one, Descript, exhibited moderately effective safeguards, requiring users to submit a spoken consent statement before cloning an individual’s voice. Others, including ElevenLabs, Speechify, PlayHT, and Lovo, merely ask users to confirm consent through an unverified self-declaration, a measure that is wholly insufficient in preventing deception, particularly when bad actors seek to leverage the technology for illicit purposes or financial gain. This glaring oversight is an Achilles heel in the fight against AI-driven deception, as it allows malicious actors to exploit the system with little effort.
The rapid evolution of generative AI has far outpaced regulatory intervention, leaving companies largely responsible for implementing self-imposed ethical boundaries without meaningful government oversight or accountability. President Biden attempted to introduce safety mandates through an executive order in 2023, but these efforts were undone when President Trump rescinded the directive upon assuming office, further stalling potential regulatory action. As a result, AI voice cloning remains a largely ungoverned frontier, where nefarious actors can exploit its capabilities with minimal restrictions and little fear of consequences.
Legitimate applications of AI voice synthesis do exist, particularly in accessibility services for individuals with disabilities and in multilingual content production, yet its potential for abuse casts a long shadow over these benefits, complicating ethical discussions. Fraudulent schemes, including AI-enhanced grandparent scams, have already demonstrated how convincingly cloned voices can manipulate unsuspecting victims, often preying on their emotional vulnerabilities. The music industry, too, has suffered from this technological loophole, as seen in the unauthorized replication of artists’ voices to produce and distribute unapproved tracks without the original creator's consent or involvement. Without immediate and comprehensive regulatory action, the unchecked expansion of voice cloning technology threatens to undermine public trust, destabilize industries, and facilitate new forms of deception at an unprecedented scale, making it imperative for policymakers and industry leaders to address these pressing concerns before they spiral further out of control.
(Adapted from NBC News)
Question 31: Where in paragraph 1 does the following sentence best fit?
"This lack of stringent protective measures has already yielded troubling consequences, most notably during the Democratic primaries, when robocalls featuring a fabricated Joe Biden voice sought to manipulate voter behavior."
A. [I] B. [II] C. [III] D. [IV]