For the initial prototype of the project, I imagined my product as a small earpiece that was always active in the background, listening to surrounding sounds and distinguishing between misophonia-triggering and non-triggering sounds. That way the device would play white noise only into the ear of the user. However, while researching devices to run my algorithm on, I found that the Raspberry Pi was the best choice for this project. The size of the Raspberry Pi therefore changed my initial idea from an earpiece to a small rectangular machine. The code I wrote for the Raspberry Pi can now recognize surrounding sounds, predict whether each one is a misophonia-triggering or non-triggering sound, and play white noise through a speaker only when it detects a trigger.
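The detect-then-mask loop described above can be sketched in Python. This is a minimal illustration, not the project's actual code: the real classifier is a trained model, so the energy-threshold `is_trigger` function below is only a hypothetical stand-in, and the sample rate is an assumption.

```python
import numpy as np

SAMPLE_RATE = 16000  # assumed sample rate; the project's value is not stated


def is_trigger(audio, threshold=0.5):
    # Hypothetical stand-in for the trained classifier. The real device
    # uses a model to predict trigger vs. non-trigger sounds; here a clip
    # is flagged as a trigger when its RMS energy exceeds a threshold.
    rms = np.sqrt(np.mean(np.asarray(audio, dtype=np.float32) ** 2))
    return rms > threshold


def make_white_noise(seconds, amplitude=0.1):
    # White noise to mask the trigger sound, scaled to a modest volume.
    samples = np.random.uniform(-1.0, 1.0, int(seconds * SAMPLE_RATE))
    return (amplitude * samples).astype(np.float32)


def respond(audio):
    # One step of the device loop: classify the clip and return white
    # noise only for misophonia-triggering sounds, otherwise stay silent.
    if is_trigger(audio):
        return make_white_noise(seconds=len(audio) / SAMPLE_RATE)
    return None
```

On the actual device, `respond` would run continuously on short clips captured from the microphone, with the returned noise sent to the speaker.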
I am currently working on improving the sensitivity of the prediction function on the Raspberry Pi. I already use a microphone when testing the algorithm's sound recognition, but the machine needs to be better at separating sounds from the environment and evaluating each on its own to produce a more accurate prediction. The prediction accuracy is about 92% on the coding platform on my computer, which is high, but when I upload part of the code to the Raspberry Pi, its sensitivity to surrounding sound drops and the accuracy drops with it. Now that I have nearly completed my initial goal of building such a device, I will keep extending this project throughout the year, and may even turn it into an app. That would let me connect the app to headphones and play the white noise directly into the user's ear, as I planned for my initial prototype. It would also make the product easier to use in public, since only the user would hear the masking noise.
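One common way to narrow the accuracy gap between two hardware setups, such as a laptop microphone and the one attached to the Raspberry Pi, is to normalize each audio clip's loudness before it reaches the classifier, so differences in microphone gain matter less. The sketch below is only an illustration of that general idea under assumed parameter values, not the project's actual preprocessing.

```python
import numpy as np


def normalize_clip(audio, target_rms=0.1, eps=1e-8):
    # Scale a clip so its RMS energy matches a fixed target level.
    # This makes clips recorded through microphones with different
    # gains look comparable to the classifier. target_rms is an
    # assumed value; eps guards against division by zero on silence.
    audio = np.asarray(audio, dtype=np.float32)
    rms = np.sqrt(np.mean(audio ** 2))
    return audio * (target_rms / (rms + eps))
```

A quiet recording from a low-gain microphone and a loud recording of the same sound would come out at roughly the same level after this step.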
