In recent years, AI-related technologies such as data science and machine learning have been used in many fields, and there will be more and more opportunities to learn about them in school education. An essential part of learning data science is the ability to analyze large amounts of data using graphs and other visual tools. For visually impaired people, this reliance on sight is a serious barrier to learning data science.
Therefore, in this project, I would like to organize useful information and develop the tools necessary to learn data science and machine learning techniques without relying on sight. So far, I have developed a library that announces visual state changes with sound on Google Colaboratory, and an alternative visualization tool that converts graphs into audio. I have also created tutorials on basic machine learning and advanced deep learning that use these libraries.
The goal is for visually impaired people with some programming knowledge and an interest in data science to be able to run and understand a few typical machine learning tutorials. Eventually, I hope that someone will apply these tools to other fields, such as scientific research. If you would like to read more, the detailed background of the project can be found in the following blog post: What would it take to learn data science without relying on sight?
I am developing various Python libraries mainly for use on Google Colaboratory, a relatively accessible notebook environment often used in data science. I have also tried running the graph sonification library as a local script, but it depends strongly on the running environment, so please contact me if you have any problems.
This library converts graph data into sound and plays it back so that users can understand the characteristics of the data. It also provides functions essential for data analysis, such as interactively checking values by tracing them with the mouse cursor. The following is an example of a graph whose data you can check interactively. Two graphs, a sine wave and a cosine wave, are overlaid and sonified. First, please click the following button to unmute the graph audio. Note that this button is not needed on Google Colab.
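The core idea behind this kind of sonification is mapping each data value onto a pitch. The sketch below is not the library's actual implementation, just a minimal illustration of the idea: the value-to-pitch function and the frequency range (220–880 Hz) are my own assumptions for this example.

```python
import numpy as np

def value_to_pitch(y, y_min, y_max, f_low=220.0, f_high=880.0):
    # Linearly map a data value from [y_min, y_max] onto an
    # audible frequency range [f_low, f_high] in Hz.
    t = (y - y_min) / (y_max - y_min)
    return f_low + t * (f_high - f_low)

# The sine wave from the example above: low values become low
# pitches, high values become high pitches.
x = np.linspace(0, 2 * np.pi, 50)
sine = np.sin(x)
pitches = [value_to_pitch(y, sine.min(), sine.max()) for y in sine]
```

Tracing the cursor left or right then simply selects which data point's pitch is currently played.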
After pressing the button, move the mouse cursor over the page and confirm that the sound "Enter graph" is played. While the cursor is on the graph image, you can hear the plotted values as pitches by moving the mouse left or right. If you do not have a mouse, move focus to the slider and try moving it left or right instead.
In the initial state, you hear the sine wave values. Double-click to switch to the cosine values. You can also single-click to have the x and y values at that point read out. If you do not have a mouse, a separate slider is available for each graph.
In addition to checking graph data interactively, you can also create a graph that simply plays its data as sound. The following is the same data rendered as audio. Play it back and listen to how the sound changes.
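To make the non-interactive case concrete, here is a self-contained sketch of how a data series can be rendered as an audio file: each point becomes a short tone whose pitch reflects its value. Again, this is not the project library's code; all parameter names and values (tone length, sample rate, frequency range) are assumptions chosen for illustration.

```python
import wave
import numpy as np

def sonify(values, duration_per_point=0.05, sample_rate=22050,
           f_low=220.0, f_high=880.0):
    # Normalize the series to [0, 1], map it onto a frequency range,
    # and concatenate one short sine tone per data point.
    v = np.asarray(values, dtype=float)
    t = (v - v.min()) / (v.max() - v.min())
    freqs = f_low + t * (f_high - f_low)
    n = int(duration_per_point * sample_rate)
    ts = np.arange(n) / sample_rate
    segments = [0.3 * np.sin(2 * np.pi * f * ts) for f in freqs]
    return np.concatenate(segments)

# Render a sine wave as audio and save it as a mono 16-bit WAV file.
signal = sonify(np.sin(np.linspace(0, 2 * np.pi, 40)))
with wave.open("graph.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(22050)
    w.writeframes((signal * 32767).astype(np.int16).tobytes())
```

Playing the resulting file back, rising values sound as a rising pitch contour, which is what lets a listener recognize the shape of the curve without seeing it.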
You can also open the Google Colab example below to try out all the functions and modify them yourself. Please check the basic usage and how to specify options.
This library adds the following features to improve usability on Google Colab. Together with the browser extension described next, I think you will be able to use Google Colab more smoothly, without navigating around a lot with a screen reader.
This is a user script that partially rewrites the page HTML to make Google Colab easier to operate with a screen reader. It is not required for the tutorials described below, so install it only if you have time.
If you already have Tampermonkey installed, just click the following user script link and you will be taken to an import page: colab_a11y_utils.user.js
In this tutorial, you will learn the basics of machine learning through the linear regression algorithm, which is also introduced in the Python Data Science Handbook, a learning resource described below.
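For readers who want a feel for what the tutorial covers, here is a minimal linear regression example using only NumPy; the synthetic data and least-squares approach are my own illustration, not the tutorial's exact code.

```python
import numpy as np

# Synthetic data: y is roughly 2*x + 1 plus some noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.5, 100)

# Fit y ≈ w*x + b by ordinary least squares on the design matrix [x, 1].
A = np.column_stack([x, np.ones_like(x)])
(w, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"slope = {w:.2f}, intercept = {b:.2f}")
```

The fitted slope and intercept should land close to the true values of 2 and 1, which is exactly the kind of result the sonified graphs above can help verify without sight.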
In this tutorial, you will learn how to train a voice command recognizer with deep learning, following the tutorial on the official PyTorch website.
This website contains the full text of the Python Data Science Handbook by Jake VanderPlas. A link to Colab is provided on each page, so you should be able to learn all the basics of machine learning by using the library from this project. Please feel free to request any additional features that would help with reading along with a screen reader.
For more information and collaboration, please let me know on the discussion page on GitHub or send me an email directly. In particular, it would be great if you could tell me what you would like to see added: GitHub Discussion Page
Please share this project.
I would also be happy if you gave the project a star on the GitHub page.