2017-07-18

I'm trying to build a web app with the SpeechSynthesis API. After the user clicks a Start button, my program should run and start listening to the user on my Android and iOS devices. The user can say anything to trigger the program, and after that I want to play an audio file every three seconds. Below is my code so far. Is my logic wrong? After clicking the button and speaking, I can't get the program to start on my Android and Safari devices.

Another question: the SpeechSynthesis API is supposed to support Android and iOS devices, but when I look at some events such as the 'soundstart' event, they are not supported in Safari Mobile. What is the relationship between these APIs? I'm very confused. The SpeechRecognition API is only supported in Chrome, but don't I still need to use events like soundstart?

Thank you very much for your help. I really appreciate it.

<p id="msg" align="center"></p> 

    <script> 
     var utterance = new SpeechSynthesisUtterance("Hello"); 
     //window.speechSynthesis.speak(utterance); 

     var supportMsg = document.getElementById('msg'); 

     if ('speechSynthesis' in window) 
     { 
      supportMsg.innerHTML = 'Your browser <strong>supports</strong> speech synthesis.'; 
      console.log("Hi"); 

      utterance.onstart = function(event) 
      { 
       console.log('Hhhh') 
      }; 


      var playList = ["1_hello", "2_how_old", "3_what_did_you_make"]; 
      var dir = "sound/"; 
      var extension = ".wav"; 

      var audioIndex = 0;           // declare before use 
      var audio = new Audio();      // audio was never defined 
      audio.src = dir + playList[audioIndex] + extension; 
      audio.load(); 

      setTimeout(function(){ audio.play(); }, 1000); 


      window.speechSynthesis.speak(utterance); 

     } 
     else 
     { 



      supportMsg.innerHTML = 'Sorry your browser <strong>does not support</strong> speech synthesis.<br>Try this in <a href="https://www.google.co.uk/intl/en/chrome/browser/canary.html">Chrome Canary</a>.'; 
     } 

     //window.speechSynthesis(utterance); 

    </script> 
    <div class="container"> 
     <button id="runProgram" onclick='utterance.onstart();' 
     class="runProgram-button">Start</button> 
    </div> 
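For reference, the "listen for any speech, then play audio every three seconds" flow described in the question would normally use the separate SpeechRecognition API (vendor-prefixed as `webkitSpeechRecognition` in Chrome) rather than SpeechSynthesis, which only speaks. A minimal sketch, assuming the same `sound/` directory and playlist names as above; the `nextIndex` helper and the guard around the browser-only parts are illustrative choices, not part of either API:

```javascript
// Sketch: listen once with SpeechRecognition, then loop through the
// playlist every three seconds. SpeechRecognition is Chrome-prefixed
// and browser-only, so that part is guarded.
var playList = ["1_hello", "2_how_old", "3_what_did_you_make"];

// Pure helper: advance through the playlist, wrapping at the end.
function nextIndex(i, list) {
    return (i + 1) % list.length;
}

function startPlaylist() {
    var audio = new Audio();
    var i = 0;
    setInterval(function () {
        audio.src = "sound/" + playList[i] + ".wav";
        audio.play();
        i = nextIndex(i, playList);
    }, 3000);
}

if (typeof window !== "undefined" && "webkitSpeechRecognition" in window) {
    var recognition = new webkitSpeechRecognition();
    // speechstart fires as soon as any speech is detected --
    // the "user can say anything" trigger.
    recognition.onspeechstart = function () {
        recognition.stop();
        startPlaylist();
    };
    document.getElementById("runProgram").onclick = function () {
        recognition.start();
    };
}
```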

Answer

Does this work for you?

function playAudio() { 
    var msg = new SpeechSynthesisUtterance('Help me with this code please?'); 
    msg.pitch = 0; 
    msg.rate = .6; 
    window.speechSynthesis.speak(msg); 

    msg = new SpeechSynthesisUtterance(); 
    var voices = window.speechSynthesis.getVoices(); 
    msg.voice = voices[10]; // Note: some voices don't support altering params 
    msg.voiceURI = 'native'; 
    msg.volume = 1; // 0 to 1 
    msg.rate = 1.2; // 0.1 to 10 
    msg.pitch = 2; // 0 to 2 
    msg.text = 'Sure. This code plays "Hello World" for you'; 
    msg.lang = 'en-US'; 

    msg.onend = function(e) { 
        var msg1 = new SpeechSynthesisUtterance('Now code plays audios for you'); 
        msg1.voice = speechSynthesis.getVoices().filter(function(voice) { return voice.name == 'Whisper'; })[0]; 
        msg1.pitch = 2; // 0 to 2 
        msg1.rate = 1.2; // 0.1 to 10 
        // start your audio loop. 
        speechSynthesis.speak(msg1); 
        console.log('Finished in ' + e.elapsedTime + ' seconds.'); 
    }; 

    speechSynthesis.speak(msg); 
}
<div class="container"> 
    <button id="runProgram" onclick='playAudio();' class="runProgram-button">Start</button> 
</div>
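One caveat about the answer above: `getVoices()` can return an empty array until the browser has finished loading its voice list, so indexing `voices[10]` directly may yield `undefined`. A hedged sketch that waits for the `voiceschanged` event and selects a voice by name; `pickVoice` is an illustrative helper, not part of the API:

```javascript
// Pure helper: find a voice by name, or fall back to the first voice
// (or null if the list is still empty).
function pickVoice(voices, name) {
    for (var k = 0; k < voices.length; k++) {
        if (voices[k].name === name) return voices[k];
    }
    return voices[0] || null;
}

// Browser-only: speak once the voice list is actually available.
if (typeof window !== "undefined" && "speechSynthesis" in window) {
    window.speechSynthesis.onvoiceschanged = function () {
        var msg = new SpeechSynthesisUtterance("Hello World");
        msg.voice = pickVoice(window.speechSynthesis.getVoices(), "Whisper");
        window.speechSynthesis.speak(msg);
    };
}
```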

I thought so too, even though it sounds funny. That's a good hint, thanks a lot, Alex. I'll try to make this API work the way we did with annyang. Do you think I can do onClick='window.speechSynthesis.soundstart()'? Also, what is the relationship between the SpeechSynthesis and SpeechRecognition APIs? Because SpeechSynthesis works on Android and Safari, but SpeechRecognition only works in Chrome. – Johnny

No. 'soundstart' is an event that fires when it starts speaking, just as 'onend' fires when it finishes. They are not methods you call directly. 'speechSynthesis.speak' will fire 'soundstart' when speaking begins. Both APIs are **experimental** and still in draft status. Google is just a bit ahead of the others. –

Thanks for the reply, Alexander. I see. I wonder if I can adjust my audio files, such as pitch or rate, instead of using it to speak. – Johnny
