Julia Underwood

Guess What I'll Say...


Predictive text helps minimize the time we spend communicating by guessing our next words. However, it could be argued that it also minimizes the amount of thought we put behind our messages. By choosing our words for us, predictive text puts words in our mouths (or should I say hands).



I have a secret to share - I don't actually have any social media accounts and have been Facebook-free since 2008. Shocking considering I am in the MET program, right? This left me a little limited in options for using predictive text to create a microblog. So, I started on my phone and created a text message trying out a couple of the prompts:


My idea of technology is to have a good day and I'm not sure if I can get some more information.


As a society, we are going to be a little more comfortable with what you want to do and that is a good thing.


Education is not about the weekend of the day off to the gym and pizza.


There are definitely some connected phrases in there that I have used multiple times. For example, "have a good day", "I'm not sure", and "that is a good thing". Also, my husband and I go to the gym and have pizza - perhaps we do that more than I realized.


When I first did this, I just kept picking the first word that was suggested to me, and I was sending the message to my husband. I tried sending it to someone else and got the same predictions. I then tried again, but was more selective in the words I chose:


My idea of technology is to take the first step towards making a real positive difference in your life.


As a society, we are going to have to believe in us.


Education is not about how much money you have to pay for school.


When I was more selective about the words I picked, the sentences actually sounded less like me, as they were not populated with phrases I commonly use.


So what does this mean? On a day-to-day basis, using prompts in our communication with friends and family seems pretty harmless, as we ultimately have the power to select what goes into our messages. On the darker side, it is a little strange to think about how the language we use is being monitored, tracked, and applied. Information about us, in the form of how we communicate with our friends and family, is being collected and used to generate a mock-up of our personality. Taking it to the next level, this type of algorithm creates the possibility that our personalities could be mimicked and used in situations we are unaware of.
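Real keyboards use far more sophisticated language models than this, but a toy version of the idea can be sketched in a few lines of Python: count which word you most often type after each word in your own sent messages, and suggest the most frequent ones back. Everything in the sketch below (the sample messages, the suggest function) is made up purely for illustration, not how any particular phone actually does it.

```python
from collections import Counter, defaultdict

# Hypothetical sample of someone's sent messages (stand-ins, not real data).
sent_messages = [
    "have a good day",
    "I'm not sure if I can make it",
    "that is a good thing",
    "going to the gym then pizza",
]

# Count which word tends to follow each word in the message history.
next_word_counts = defaultdict(Counter)
for message in sent_messages:
    words = message.lower().split()
    for current, following in zip(words, words[1:]):
        next_word_counts[current][following] += 1

def suggest(word, n=3):
    """Return the n words this person has most often typed after `word`."""
    return [w for w, _ in next_word_counts[word.lower()].most_common(n)]

print(suggest("a"))     # ['good']
print(suggest("good"))  # ['day', 'thing']
```

Even this toy version shows why my "first suggestion" experiment sounded so much like me: the suggestions are literally my own most-repeated phrases being fed back to me.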


Predictive text can be used to generate more than just sentences. There are programs that create entire stories based on the first few words inputted (see my extension post "Second Guessing..." for an example). This could lead to larger issues if programs like these generate stories that get picked up by news outlets, or even generate the news stories themselves.


Twitter bots are another example of text programs interacting with users, and they have the ability to push certain stories. They can also mimic users and have been known to make racist and sexist comments (Google "twitter bot gone wrong" and see what comes up). This is an example of the AI mirror discussed by Dr. Vallor.


In her lecture, Dr. Vallor has a quote that stood out to me: "this future of human-AI partnership, one that serves and enriches human lives, won't happen organically; it will need to be a choice we make, to improve our machines by improving ourselves" (2018, 14:32). Those in charge of creating the algorithms that run these programs need to be accountable for the instructions and data they input, which should "reflect the values that we want to aspire to" (Machine Bias, 42:00).


References:


McRaney, D. (n.d.). Machine Bias (rebroadcast). In You Are Not So Smart [Audio podcast]. Retrieved from https://soundcloud.com/youarenotsosmart/140-machine-bias-rebroadcast


Santa Clara University. (2018). Lessons from the AI Mirror - Shannon Vallor [Video lecture].
