- This app is for people who know English but struggle to communicate in it. Normally such a person would look for a tutor, or for someone fluent in English who can correct them. Our idea is to let users practice conversing in English independently, at their own comfort, and get corrections along the way.
-
On the front end we use React Native to build the app. When the user taps the mic button, they can speak their prompt, and the app automatically returns the reply in both text and voice form.
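A minimal sketch of what that mic-button screen could look like, assuming react-native-tts is used to speak the reply aloud; the component name, state, and the hard-coded reply are placeholders rather than the project's actual code (the speech-to-text and server calls are sketched further below).

```tsx
// Minimal sketch of the mic-button screen, assuming react-native-tts for voice playback.
// Component and handler names are illustrative, not taken from the actual codebase.
import React, { useState } from 'react';
import { View, Text, TouchableOpacity } from 'react-native';
import Tts from 'react-native-tts';

export default function ChatScreen() {
  const [reply, setReply] = useState('');

  // Placeholder: the real app converts speech to text and asks the backend
  // for a corrected reply (see the speech-to-text and Ollama sketches below).
  const onMicPress = () => {
    const correctedReply = 'I would like to practice English.'; // stand-in for the backend reply
    setReply(correctedReply);   // reply in text form
    Tts.speak(correctedReply);  // reply in voice form
  };

  return (
    <View>
      <TouchableOpacity onPress={onMicPress}>
        <Text>🎤 Speak</Text>
      </TouchableOpacity>
      <Text>{reply}</Text>
    </View>
  );
}
```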
-
A hamburger (three-line) menu at the top-left corner gives access to the About Us information and the Help section.
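One possible wiring for that menu, assuming @react-navigation/drawer is used (the README does not name the navigation library); the screen names simply mirror the About Us and Help sections mentioned above.

```tsx
// Sketch of the hamburger menu, assuming @react-navigation/drawer.
import React from 'react';
import { Text, View } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { createDrawerNavigator } from '@react-navigation/drawer';

const AboutScreen = () => <View><Text>Information about us</Text></View>;
const HelpScreen = () => <View><Text>Help section</Text></View>;

const Drawer = createDrawerNavigator();

export default function App() {
  return (
    <NavigationContainer>
      {/* The drawer toggle renders as the three-line icon at the top-left corner */}
      <Drawer.Navigator initialRouteName="About Us">
        <Drawer.Screen name="About Us" component={AboutScreen} />
        <Drawer.Screen name="Help" component={HelpScreen} />
      </Drawer.Navigator>
    </NavigationContainer>
  );
}
```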
-
By tapping the stop button, the user can view all of their conversations from the past week and analyze their performance over that period.
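The README does not describe the API behind this screen, so the sketch below assumes a hypothetical GET /conversations?days=7 endpoint and a very simple notion of performance (the share of utterances that needed no correction).

```ts
// Sketch of loading the past week's conversations for review, assuming a
// hypothetical GET /conversations?days=7 endpoint on the Express server.
import axios from 'axios';

interface ConversationEntry {
  original: string;    // what the user said
  corrected: string;   // grammatically corrected version
  createdAt: string;   // ISO timestamp
}

export async function loadWeeklyReport(baseUrl: string) {
  const { data } = await axios.get<ConversationEntry[]>(
    `${baseUrl}/conversations`,
    { params: { days: 7 } },
  );

  // Simple performance signal: how often the corrected text differed
  // from what the user originally said.
  const total = data.length;
  const corrections = data.filter((e) => e.original !== e.corrected).length;
  return { total, corrections, accuracy: total ? 1 - corrections / total : 1 };
}
```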
-
On the back end, the stack consists of MongoDB, Node.js, and Express.js, with Ollama serving the AI language model.
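A minimal sketch of how that stack could fit together, assuming Mongoose is used on top of MongoDB; the schema fields, route paths, port, and connection string are illustrative assumptions, not taken from the project.

```ts
// Minimal sketch of the Express/MongoDB back end.
import express from 'express';
import mongoose from 'mongoose';

const app = express();
app.use(express.json());

// Conversation entries saved from the app (hypothetical schema)
const Conversation = mongoose.model(
  'Conversation',
  new mongoose.Schema({
    original: String,
    corrected: String,
    createdAt: { type: Date, default: Date.now },
  }),
);

app.post('/conversations', async (req, res) => {
  const saved = await Conversation.create(req.body);
  res.status(201).json(saved);
});

app.get('/conversations', async (req, res) => {
  const days = Number(req.query.days ?? 7);
  const since = new Date(Date.now() - days * 24 * 60 * 60 * 1000);
  res.json(await Conversation.find({ createdAt: { $gte: since } }));
});

mongoose.connect('mongodb://localhost:27017/english-app').then(() => {
  app.listen(3000, () => console.log('API listening on port 3000'));
});
```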
-
When the user taps the mic button and speaks, the voice input is converted to text with the React Native library @react-native-voice/voice, and this text is then stored in the MongoDB database.
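A sketch of that speech-to-text step: the Voice.start/Voice.stop calls and the onSpeechResults callback come from @react-native-voice/voice, while the /conversations endpoint and payload shape are assumptions used for illustration.

```ts
// Sketch of speech-to-text capture with @react-native-voice/voice.
import Voice, { SpeechResultsEvent } from '@react-native-voice/voice';
import axios from 'axios';

export function setupSpeechCapture(baseUrl: string) {
  // Fired when the library has recognised the spoken phrase as text
  Voice.onSpeechResults = async (event: SpeechResultsEvent) => {
    const text = event.value?.[0];
    if (!text) return;

    // Send the recognised text to the server so it can be stored in MongoDB
    // (hypothetical endpoint and payload)
    await axios.post(`${baseUrl}/conversations`, { original: text });
  };
}

export const startListening = () => Voice.start('en-US');
export const stopListening = () => Voice.stop();
```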
-
Now this data is fetched through an Axios GET request to our local server, where the Ollama model (Mistral) is already installed; the server returns the user's input in its corrected form, free of grammatical errors.
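A sketch of the server side of that step: the call to Ollama's /api/generate endpoint on port 11434 with the "mistral" model follows Ollama's standard REST API, but the prompt wording and the route path are illustrative assumptions.

```ts
// Sketch of the correction route: the Express server forwards the user's
// sentence to a locally running Ollama instance and returns Mistral's
// corrected version.
import express from 'express';
import axios from 'axios';

const router = express.Router();

router.get('/correct', async (req, res) => {
  const sentence = String(req.query.text ?? '');

  const { data } = await axios.post('http://localhost:11434/api/generate', {
    model: 'mistral',
    prompt: `Rewrite this sentence in correct English and reply with only the corrected sentence: "${sentence}"`,
    stream: false,
  });

  // Ollama returns the generated text in the "response" field when stream is false
  res.json({ corrected: data.response });
});

export default router;
```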
#PptVideo
#workingVideo