This web app predicts whether a person has COVID-19 from a cough recording with 73% accuracy.
The app is built with Azure Web App, Azure Custom Vision, and Azure Active Directory. It is trained on the Coswara dataset by IISc, using only the heavy-cough recordings.
The app takes a .wav sound file as input, converts it to a mel spectrogram, and saves the result as a .png image. The following Python script performs this conversion:
```python
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np


def getMelSpectogram(input_path, output_path):
    # Load the first 3 seconds of the recording as mono audio
    y, sr = librosa.load(input_path, mono=True, duration=3)

    # Compute a 128-band mel spectrogram capped at 8 kHz
    S = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128, fmax=8000)

    # Convert power to decibels and render the spectrogram
    S_dB = librosa.power_to_db(S, ref=np.max)
    fig, ax = plt.subplots()
    img = librosa.display.specshow(S_dB, x_axis='time', y_axis='mel',
                                   sr=sr, fmax=8000, ax=ax)
    fig.colorbar(img, ax=ax, format='%+2.0f dB')
    ax.set(title='Mel-frequency spectrogram')

    # Save the figure as a .png and release it
    plt.savefig(output_path)
    plt.close(fig)
```
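As a quick sanity check, the function can be called directly on a cough recording. The file names below are placeholders for illustration, not paths from this repo:

```python
# Hypothetical input/output paths; substitute your own .wav file
getMelSpectogram("heavy-cough.wav", "heavy-cough.png")
```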
- Azure Custom Vision: The mel spectrograms are sent as input to the Azure Custom Vision model, which predicts the result; the prediction is displayed on the web app (a prediction-call sketch follows this list).
- Azure Web App: The Flask app is deployed using Azure Web App.
- Azure Active Directory: The app is secured with Azure Active Directory, which authenticates the user.
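For reference, here is a minimal sketch of a Custom Vision prediction call using the official azure-cognitiveservices-vision-customvision SDK. The endpoint, key, project ID, and published iteration name are placeholders; the values used by this app are not shown in this README:

```python
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# Placeholder credentials; use your own Custom Vision prediction resource
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com/"
PREDICTION_KEY = "<prediction-key>"
PROJECT_ID = "<project-id>"
PUBLISHED_NAME = "<published-iteration-name>"

credentials = ApiKeyCredentials(in_headers={"Prediction-key": PREDICTION_KEY})
predictor = CustomVisionPredictionClient(ENDPOINT, credentials)

# Send the mel spectrogram .png to the published classifier
with open("heavy-cough.png", "rb") as image:
    results = predictor.classify_image(PROJECT_ID, PUBLISHED_NAME, image.read())

# Print each tag with its probability
for prediction in results.predictions:
    print(f"{prediction.tag_name}: {prediction.probability:.2%}")
```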
The app can be tested either with a custom/personal cough recording or with the test data available at: https://github.com/UzairAI/Detectovid/tree/master/TestData
Note: The app does not account for empty or non-cough sound files.
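If input validation is needed, a guard like the sketch below could reject near-silent recordings before conversion. The RMS energy threshold is an arbitrary assumption, and this check does not distinguish coughs from other sounds:

```python
import librosa
import numpy as np

def is_probably_non_empty(path, rms_threshold=0.01):
    # Reject recordings whose average RMS energy is near zero
    y, _ = librosa.load(path, mono=True, duration=3)
    return y.size > 0 and float(np.mean(librosa.feature.rms(y=y))) > rms_threshold
```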
Dataset source: https://arxiv.org/abs/2005.10548