Category Archives: reading

Flatow: Improving Healthcare, One Search At A Time

This NPR piece aired on “Science Friday” and featured a conversation between host Ira Flatow and guest Dr. Eric Horvitz, a Microsoft Research scientist. The discussion follows the role of internet searches for health-related topics in uncovering critical information about drug interactions. Horvitz and his team were able to find a connection between Pravastatin, a drug used to lower cholesterol, and Paroxetine, an antidepressant. When combined, these drugs can cause hyperglycemia. Notably, these two drugs are both quite common, and yet no drug testing had revealed the adverse side effects of combining them. While Horvitz clarifies that in this case a study at Stanford had already found the connection between the two drugs, his team aimed to see whether they could have predicted what that study found by looking at search history. Horvitz went back to a year before the Stanford study, 2010, and, with consent, conducted an analysis that showed similar findings. Thus, this technology could be used in the future to predict negative drug interactions without a formal scientific study.

While this specific instance does not necessarily pose a significant ethical question with regard to privacy, since consent was obtained, the way this technology is applied does raise some issues. For example, while we hold Microsoft to a high standard because we know the company, what about other companies using similar technologies, whether for good or bad? Will privacy always be prioritized?

The article also highlights a current issue in the U.S. and around the world: “cyberchondria,” the phenomenon in which people look to the internet to interpret their symptoms, only to escalate them to the worst possible conclusion. Perhaps with the transition to more forms of e-health, cyberchondria will become less of a problem.

Facial recognition follow up reading

Some other resources for learning about the social impacts and ethical questions of facial recognition:

TheVerge.com covers facial recognition quite a bit. Here are some recent stories.

One of those stories talks about a tool for airports called Biometric Exit that 15 airports have already put into use. The Department of Homeland Security wants it to be in most US airports within the next four years.

If you don’t like that, maybe you can change your face. In just one more example of resilient creativity in the digital age, folks have been working on how to defeat FR with (pretty rad) hairstyles and makeup.

An overview of the ethics of FR by the Center for Digital Ethics and Policy.

That post mentions the Federal Trade Commission’s (somewhat outdated) best practices for FR, which are suggestions, not regulations.

Is there such a thing as ethical facial recognition? Kairos is one company trying to prove that there is. See their About page and a recent blog post on what would constitute ethical facial recognition, keeping in mind that this is marketing material for a specific company.

There are surely many others; if you have a suggested reading, let me know and I’ll add it.

Angwin: Machine Bias, Singer: Amazon’s Facial Recognition

The New York Times article covered the flawed facial recognition technology created by Amazon. Using a database of 25,000 publicly available mugshots, the “technology incorrectly matched lawmakers with people who had been charged with a crime.” Although the software was initially promoted for purposes like preventing human trafficking, facial recognition is fast becoming a top target for civil liberties groups and privacy experts, who view it as a surveillance system that could chill political protest by eliminating anonymity. In the wrong hands, facial recognition can become a tool of social control.

Northpointe’s algorithm has been shown to produce flawed results. Broward County, Florida uses its score in pretrial hearings, and ProPublica’s research found it remarkably unreliable. Only 20 percent of those predicted to commit violent crimes did so, and when all crimes were included, it was only slightly more reliable “than a coin flip.” Moreover, it flagged black defendants as likely future criminals at twice the rate of white defendants, and it also incorrectly labeled white defendants as low risk more frequently.

Regarding ethics, we identified multiple possible outcomes. One extreme would be for technologies such as the risk assessment algorithm and Amazon’s facial recognition to continue to be used in their current capacities. This would mean bias being perpetuated in yet another form. The opposing option would be for these technologies to be banned entirely. While this would prevent the fundamental flaws currently at work in both the algorithm and facial recognition, there are also benefits to be had from this technology if bias, specifically against people of color, were removed. As we discussed in class, just because a product has a high overall success rate does not mean that its accuracy is equal for everybody; an average can look successful while the accuracy for particular groups is completely skewed. Therefore, the third alternative, and the one our group said we could live with, is to ban products like these until they can be rethought and rebuilt to display no bias.

Fake News and Lethal Robots

In the article “Fake News and Partisan Epistemology,” Regina Rini expresses concern about the epidemic of fake news spreading so widely and rapidly on social media platforms today. Fake news is deliberately deceptive: it is meant to catch the eye and attract clicks for the purpose of generating revenue for someone’s website. A variety of epistemic virtues are strangely abandoned on social media, and the article investigates what features of these platforms make it so easy for people to abandon them. Rini points out that partisanship is the reason people are more likely to surrender epistemic virtue and readily jump on wild, outrageous conspiracy theories that would warrant skepticism from anyone with a critical eye. Partisanship manifests itself as an opponent of truth when people share political affiliations with others: those people are seen as closer to oneself and are assumed to be right, simply because the receiver of the fake news assumes that anyone who shares their political opinions must be feeding them proper information. This also has to do with our willingness to believe testimony outright rather than cross-checking every bit of information fed to us. Believing testimony is individually reasonable, and Rini argues that the mechanism of social media takes advantage of this individually reasonable behavior and co-opts the space of testimony in order to spread misinformation. Rini believes a possible solution is for social media platforms to flag individual accounts that regularly spread misinformation and to create a kind of credibility score, thereby making people take more responsibility for the things they post.

This article begins to consider the implications of machines used as military weaponry. Specifically, should machines be able to kill people in combat? It’s clear that even things not designed as weapons have the power and potential to be used as weapons, even a toaster. While robots were never intended to replace humans in war, they serve as a way to potentially decrease casualties while also being able to make the choice to kill another person at will. The article explains that this is exactly the issue: robots have no “will” or morality, and even though war literally means death, the only people who can be the perpetrators of death must also be willing to be its recipients themselves.
After detailing the various background information regarding the types of weapons and the laws of war, the authors proceed to address the major question of their essay: “should we relinquish the decision to kill a human to a non-human machine?” (134). In order to address this profound question, the authors expound on the philosophical definition of a human being, a being with intrinsic dignity and rights according to Immanuel Kant. Using a robot to kill a human treats a human being as a mere object, and therefore denies human dignity. Furthermore, the authors discuss morality as an essentially human characteristic, and maintain that a robot could only imitate moral actions, without being in itself moral. They also discuss LAWS as being potentially dishonorable, in that they negate the risk of immediate sacrifice inherent in war. Without the potential for sacrifice, the use of robots becomes cowardly, and thus contrary to what is considered honorable military conduct. The authors conclude by proposing a complete ban on autonomous weapons systems, much like the current status of chemical and gas weapons, considered too heinous to be tolerated.

Luerweg: The Internet Knows You Better Than Your Spouse Does

“The Internet Knows You Better Than Your Spouse Does,” by Frank Luerweg, describes how internet algorithms use psychology to identify the personality traits of users. One algorithm used a small number of Facebook likes to pinpoint the “Big Five” dimensions of personality. With only ten likes, it could describe someone as accurately as a co-worker could. This type of technology extends beyond internet algorithms. Studies observing participants’ eye movements were able to accurately describe their personalities based on where they looked while walking around a college campus and shopping. Cameras on our computers and smartphones have the potential to read our emotions.

Even though algorithms based on personal information, facial expressions, and psychological traces can be used maliciously and commercially, there are some cases in which they have been used to better diagnose and treat psychological disorders and to prevent suicide. In some research, the language that people typed and spoke on their phones was gathered and analyzed, and the algorithm could detect the precursors of suicide and severe depression quite accurately. Moreover, one research team gathered all of a participant’s data, from GPS location to phone calls to what he read on his phone. By analyzing it closely, the team could determine how severely the patient was suffering from bipolar disorder and treat him more effectively.

In spite of the potential positive effects of such advancements, there are still great potential drawbacks to the Internet’s ability to recognize a user. While user recognition is mostly applied to commercial matters, such as tailoring advertisements to a user’s preferences, the same information lets algorithms infer far more about a person. Common photo algorithms using facial recognition have, for instance, been claimed to identify personal attributes such as sexual orientation or, in certain cases, supposed criminal tendencies. All of this comes down to correlation across the given information, but as technology advances, these inferences grow more accurate and can reveal more of a person than that user ever intended. In the end, even the slightest comment or photo on the Internet could open a book into a person’s world.

Paragraphs, in order, by Georgia, Sean L., and Gabriel. Compiled by Georgia.

Isaak: User Data Privacy

Jim Isaak explains how the personal information of more than 87 million Facebook users was accessed without authorization by the data firm Cambridge Analytica. Researchers at the firm accomplished this through a personality test taken on Facebook that evaluated each user’s psychological profile. This research established a clear relationship between users’ Facebook activity and their personality profiles. Cambridge Analytica then “micro-targeted” consumers with messages designed to influence their political behavior, as with “Project Alamo” under President Trump’s campaign. It wasn’t only members who were affected; in fact, every website linked to Facebook allows the tracking of non-members’ data as well.

Toward the end of the article, Isaak lays out propositions for how to preserve privacy and protect data. The principles fall under four sub-categories: “public transparency,” “disclosure for users,” “control,” and “notification.” As for actual legislation, there are three current proposals in the works. The first, the Blumenthal-Markey bill, focuses on protecting privacy through “opt-in” consent, while the second, put forth by Senator Amy Klobuchar, maintains similar elements but adds more on notification of changes. Lastly, California is pushing to further secure privacy rights for its citizens, hopefully setting a standard for how to address user privacy in the U.S. and the world following Facebook and Cambridge Analytica’s inappropriate handling of user data.

Luis wrote the first paragraph, and Kate wrote the second.

The Secret History of Women in Coding

The New York Times article “The Secret History of Women in Coding” tells the story of how computer programming, contrary to its association with masculinity today, was once seen as “women’s work.” Coding was considered secondary to the hard work of creating the hardware, which is why it was cast onto women. On the job, the women were also extremely adept at diagnosing problems with the hardware, since they were expected to understand it so thoroughly. Concepts such as compiling and debugging were pioneered by the women who worked with these computers; in fact, it was these women who discovered that code never really works the first time. Through the story of Mary Allen Wilkes, who became a computer programmer after being discouraged from pursuing a career in law, the article shows how open programming used to be to neophytes: if you didn’t know how to code, you would learn on the job. Despite the sexism and the pay disparities, Wilkes described the relationship between men and women on the job as actually quite inclusive and close-knit.

When the number of coding jobs exploded in the ’50s and ’60s, women were still at the forefront of computer coding. However, the year 1984 significantly changed the way computers were used in science and in culture at large. With the invention of the personal computer, boys came to be favored to learn how to code for no particular reason: if a family bought a computer, it was almost always put in a boy’s bedroom, giving him an advantage in learning to code before even entering high school. This, in turn, began to shift who was seen as a desirable coder, and it led to many of the cultural identities present in large tech firms today that favor these “hardcore” coders over anyone else and seek to reproduce that type of personality. After 1984 there was a significant drop-off in the number of women majoring in computer science and actively pursuing it as a job after graduation. The numbers remained on a downward trend until recently; now about 26% of computer science majors are women, though the picture is still grim, with only about 3% of women represented at large industry firms such as Twitter.

The last section of the article dealt with present-day attempts to remedy the problem of exclusionary and homogeneous computing. The author mentioned efforts by Carnegie Mellon to make its computer science program more accommodating and welcoming to people with less experience, an effort that has proven very effective at bringing more women into computer science. The author also brought up various coding boot camps and other initiatives that have contributed to rising interest in coding and computer science across various segments of the population. The article concludes with an interview with three young, prodigious female coders who won a hackathon in New York City and who express the same frustration with the “boys’ club” atmosphere discussed earlier in the article.

Moldova’s Twitter Revolution

Today featured two readings dealing with the 2009 protests in Moldova known as the Twitter Revolution. For background, Moldova was formed following the downfall of the Soviet Union. While other countries in the Eastern Bloc experienced economic growth in the subsequent years, Moldova’s development stalled, and the country returned the Communists to power in 2001. Many Moldovans moved to other parts of Europe for work, but with the global financial crisis, many overseas jobs disappeared. Moreover, the EU has limited Moldovan residents’ access to other countries.

The impetus for the revolts was the general election of April 2009; exit polls were competitive, with about 35% each for the Communists and the opposition. However, when the election commission stated that the Communists had won about half the vote, people grew angry and skeptical of its impartiality. More than 900,000 people gathered publicly and protested against the government for a few days before the protests were finally suppressed. Social media was the main organizing force: it served as a vehicle to garner support from the masses and inspire rebellion against the sitting government. Because its use was relatively unregulated, Twitter was the perfect venue; those opposed could speak out and share their sentiments publicly through hashtags in a way that allowed them to build a digital following.

After the April 5th election results and the subsequent protests in Moldova’s capital, Chișinău, the PCRM government used water cannons to disperse the crowds. In the following days, hundreds of protesters, journalists, and students were arrested. Torture and police misconduct, including three deaths, were documented, and internet access in Chișinău was shut off. The articles left us with multiple ethical questions, depending on which actor we focused on. Is it ethical for a government to shut down internet access during a protest? Was violent protest the most ethical reaction from the opposition groups? What is Twitter’s responsibility as a company when its platform is used in this kind of situation?

The events in Moldova demonstrate the role of social media in a democracy and raise questions about a government’s right to control the internet, as well as companies’ responsibility to the citizens who use their platforms.


Data search: linear and binary search

During the last week, our class discussed more foundational and abstract algorithms from the perspective of the computer. First, we talked about two different means of searching: linear search and binary search. They differ in how they locate the target data, which leads to higher or lower efficiency in different situations. Linear search scans sequentially, starting from the first element of the range. This method is very straightforward and generally takes more time than other methods, but it is advantageous when the data are not sorted. In contrast, binary search repeatedly halves the search range, reaching the target in far fewer steps. Its one shortcoming is that it cannot be used unless the data are sorted.
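To make the contrast concrete, here is a minimal Python sketch of both searches (the function names and sample data are my own illustration, not code from class):

```python
def linear_search(items, target):
    """Scan items one by one; works on unsorted data. O(n) comparisons."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1  # not found

def binary_search(sorted_items, target):
    """Repeatedly halve the range; requires sorted data. O(log n) comparisons."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target is in the upper half
        else:
            hi = mid - 1   # target is in the lower half
    return -1  # not found

data = [3, 8, 15, 23, 42, 57, 91]
print(linear_search(data, 42))  # 4, after examining five elements
print(binary_search(data, 42))  # 4, after examining only three elements
```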

I could better understand the application of these ideas after working with the Huffman encoding tree. It was interesting to see during the lab that the most frequent letters appear near the top of the tree, where the depth is small, so they receive the shortest codes. This allowed the data to be compressed as much as possible.
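As a rough illustration of how such a tree gets built, here is a small Python sketch of Huffman coding using a priority queue (a generic version I put together, not the lab’s actual code):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman tree and return each character's bit code.
    Frequent characters sit near the root, so their codes are shortest."""
    # Heap entries are (frequency, tie_breaker, tree); a tree is either a
    # single character (leaf) or a (left, right) pair (internal node).
    heap = [(freq, i, char) for i, (char, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (left, right)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, str):            # leaf: record this character's code
            codes[tree] = prefix or "0"
        else:                                # internal node: left = 0, right = 1
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
    walk(heap[0][2], "")
    return codes

print(huffman_codes("mississippi"))
# frequent letters like 'i' and 's' get shorter codes than the rare 'm'
```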

Overall, these discussions and materials covered only a small segment of data search, which also involves accuracy, memory usage, and re-usability, beyond just efficiency. This approach is consistent with what one of the readings in class said: in order to develop human-friendly software and applications, understanding the foundational algorithms of how the computer inherently works should be prioritized.

TransTech and beyond

This week another group’s article, “Code Switch” by Janet Abbate, brings out some attempts at what making coding more accessible to many different types of communities might look like. I wanted to look more into TransTech and do some investigating on their website. It is clear that they engage heavily in teaching computer coding to trans people and offer many job training packages. What I wanted to see, and what I gather Abbate claims they do, is get trans-identifying people jobs at technology firms. What’s missing from the website is how successful they are at this endeavor and which companies they have been able to reach. I also wanted more testimonials and clips of real people, perhaps some video updates on what past TransTech associates are doing now.

I think, in all seriousness, that it will probably take more time than one wants to imagine to make these goals mainstream and attainable in the computer science sector. I am also cautious about the “diversity” that does exist in the CS field. I have recently been in conversation with a friend taking a sociology course on Asian Americans, who pointed out that even though Asian Americans are heavily represented in the computing industry, they are seldom anything other than entry-level programmers or developers, thus falling victim to the “model minority” tropes elicited by the technology sector.