Svelte, do you hear me?
- Eric Bréhault, frontend developer at Nuclia
This is a short introduction to the talk2svelte library, which provides voice recognition and voice synthesis for Svelte thanks to the Web Speech API.
It lets you interact with a Svelte site by voice, for example to navigate or click on elements.
Transcript
Hello.
My name is Eric Bréhault.
I'm a front-end developer at Nuclia,
and I use Svelte a lot. I just love it.
I'm super happy to participate in the Svelte Summit this year.
So I'm here to talk to you about talk2svelte.
It's a Svelte library
that provides voice recognition
and voice synthesis for Svelte applications.
It's based on the Web Speech API,
which is widely supported at the moment on most browsers
and on smartphones.
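As a quick aside, you can check for these APIs before turning voice features on. The snippet below uses the plain Web Speech API, not anything specific to talk2svelte:

```js
// Feature check with the plain Web Speech API (not talk2svelte-specific).
// Chromium-based browsers still expose recognition behind a webkit prefix.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

if (!SpeechRecognition) {
  console.warn('Speech recognition is not available in this browser.');
}
if (!('speechSynthesis' in window)) {
  console.warn('Speech synthesis is not available in this browser.');
}
```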
And the first question is: why would we
need to integrate voice recognition and synthesis
into a web application?
Well, I see two reasons for that.
The first one is kind of obvious.
It's accessibility.
If you cannot use your fingers properly,
if you cannot use a keyboard or a mouse for any reason
in different contexts, well, maybe using your voice
to interact with a website is a good option,
and it can help in many situations.
The second reason is that most people
are browsing websites on smartphones.
And well, phones were initially designed
to be used through voice and ear,
as crazy as that may sound nowadays.
So I think that maybe it makes sense to leverage this
when you're building a web application.
So let's see how it works.
All you need to do is import a directive from talk2svelte,
and then you can put it on any kind of element,
like a button or a link that you want to interact with.
With the directive, you decide
which voice command is associated with that element.
And by just saying that command, you activate the click event
or any other event you want to trigger from it.
That's basically how it goes.
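As an illustration only (the exact import and directive name come from talk2svelte's README; here I just assume a Svelte action called `command`), a component could look roughly like this:

```svelte
<script>
  // Assumption: talk2svelte exports a Svelte action named `command`;
  // check the library's README for the real name and options.
  import { command } from 'talk2svelte';

  let count = 0;
</script>

<!-- Saying "click" triggers this button's click handler. -->
<button use:command={'click'} on:click={() => (count += 1)}>
  Count: {count}
</button>

<!-- Saying "examples" follows the link: voice-driven navigation. -->
<a href="/examples" use:command={'examples'}>Examples</a>
```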
So now let's see a demo of what it does exactly.
All right, so first I enable voice recognition.
Here we go.
And now I can start interacting with this web page by voice.
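(For context, "enabling voice recognition" ultimately means starting a recognizer through the browser's Web Speech API. The sketch below shows the raw API, not talk2svelte's internal code.)

```js
// Sketch of the underlying Web Speech API, not talk2svelte's internals.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.continuous = true;      // keep listening between commands
recognition.interimResults = false; // only react to final results
recognition.lang = 'en-US';

recognition.onresult = (event) => {
  const last = event.results[event.results.length - 1];
  const phrase = last[0].transcript.trim().toLowerCase();
  console.log('Heard:', phrase); // a library would match this against its commands
};

// Starting recognition triggers the browser's microphone permission prompt.
recognition.start();
```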
Let's try.
Click.
OK, the counter is incremented.
It just worked.
Cool.
Menu.
Menu.
Examples.
OK, I've navigated to this page just by saying it out loud.
So that's cool.
Let's move this blue square in the grid.
Right, right, down, up.
OK, works.
This example now, origin.
Origin, Mexico, destination, Paris.
All right.
So you can see we can define contexts
in which each command can be used, even when the commands are the same.
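(How commands get scoped to a context is defined by talk2svelte itself; the snippet below is only a hypothetical sketch, reusing the assumed `command` action with context-prefixed names.)

```svelte
<script>
  // Hypothetical: assumes the `command` action accepts context-prefixed names
  // such as 'origin.paris'; the real scoping syntax is in talk2svelte's docs.
  import { command } from 'talk2svelte';

  let origin = '';
  let destination = '';
</script>

<!-- Saying "Paris" means different things depending on the active context. -->
<button use:command={'origin.paris'} on:click={() => (origin = 'Paris')}>
  Origin: Paris
</button>
<button use:command={'destination.paris'} on:click={() => (destination = 'Paris')}>
  Destination: Paris
</button>
```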
Now let's go with free text input.
So record.
I love that and this is a great demo.
Cool.
So menu.
Sorry, I didn't say stop.
Stop.
Come on.
All right, so now it has stopped recording the free text input.
Good.
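(Free-text dictation like this maps onto the Web Speech API's continuous mode with interim results. Here is a rough sketch with the raw API; the start/stop helpers are hypothetical stand-ins for the "record" and "stop" commands.)

```js
// Rough sketch of free-text dictation with the plain Web Speech API;
// startRecording/stopRecording are hypothetical helpers, not library code.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

const dictation = new SpeechRecognition();
dictation.continuous = true;
dictation.interimResults = true; // update the text while the user is speaking

dictation.onresult = (event) => {
  const text = Array.from(event.results)
    .map((result) => result[0].transcript)
    .join(' ');
  console.log(text); // in a component, bind this to a textarea instead
};

const startRecording = () => dictation.start(); // the "record" command
const stopRecording = () => dictation.stop();   // the "stop" command
```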
Menu.
Menu.
Languages.
All right, so as you could see, it was a bit challenging for me to use voice commands in
English because I don't have a super good accent.
I'm not a native speaker, obviously,
but the system works with all languages.
I have here a small list.
Let's switch to French, which is my native language.
And let's try with that one.
Canard.
OK, cool.
I got the duck.
So that's how it goes.
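(Language switching relies on the recognizer's language setting in the Web Speech API; a small sketch below with the raw API, assuming talk2svelte exposes an equivalent option.)

```js
// Sketch with the plain Web Speech API; talk2svelte presumably wraps this.
const SpeechRecognition =
  window.SpeechRecognition || window.webkitSpeechRecognition;

const recognition = new SpeechRecognition();
recognition.lang = 'fr-FR'; // BCP 47 tag: 'en-US', 'fr-FR', 'es-ES', ...

recognition.onresult = (event) => {
  const phrase = event.results[event.results.length - 1][0].transcript;
  console.log('Heard:', phrase); // e.g. "canard"
};

recognition.start();
```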
You can use it in any application.
You can perform any interaction you want.
And that's very fun to use.
That's the end of the demo.
Okay, so that's it. Thank you for your attention.
I hope you enjoyed this demo.
And if you want to contact me, here is my contact information.
And you can check the online demo of talk2svelte if you want to.
There is also a GitHub repository, so feel free to check it out,
open pull requests, or whatever. Thank you.