Monthly Archives: April 2019

Complexity and Quantum Computing (Blog)

In class, we discussed the computational capabilities of modern computers. Although they are exponentially more efficient than their 40-year-old counterparts, computers still struggle with what can feasibly be computed. One way to approach this issue is to reduce the number of steps that an algorithm takes to sort or search through a list. For example, a linear search requires on the order of n steps, while a binary search requires about log2(n) steps. With lists containing thousands of elements, binary search can finish in roughly ten steps where linear search would need thousands. Apart from the efficiency of an algorithm, there is also another component that saves time and money: circuits. Seven decades ago, computers relied on bulky, failure-prone components such as vacuum tubes to calculate mathematical functions. That all changed once transistors were used to compute. Unlike vacuum tubes, transistors are small, efficient, and do not generate as much heat. Compared to the 1980s, a problem that once required months to solve is now solvable in minutes. This revolutionary development sparked the birth of the Digital Age. This is an improvement, but is there an even faster method of computing? With the help of physicists and their knowledge of quantum mechanics, computers may one day work at unbelievable speeds using quantum computing.
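To make the step counts above concrete, here is a minimal sketch of both searches in Python (the function names are my own, chosen for illustration):

```python
def linear_search(items, target):
    """Scan every element in order: up to n steps."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

def binary_search(sorted_items, target):
    """Halve the search range each step: about log2(n) steps,
    but the list must already be sorted."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

items = list(range(100_000))
# Linear search may inspect tens of thousands of elements here;
# binary search needs at most ~17 comparisons on the same list.
```

On a 100,000-element list, log2(100,000) is about 17, which is the gap between "thousands of steps" and "about ten" described above.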

In quantum computing (closely linked to quantum mechanics), quantum bits (or ‘qubits’) can hold a superposition of 0 and 1 simultaneously, rather than being set to exactly 1 or 0 as traditional electronic bits are. This way, a register of n qubits can represent all 2^n classical states at once, so its descriptive power grows exponentially with its size. As good as it sounds, quantum computing is still in its infancy: quantum computers are large machines that are somewhat unreliable and not yet very powerful. Regardless of these current struggles, large organizations such as Google and NASA are experimenting with and building the first quantum computers. For example, Google strives for the future of computing and proclaims that their “D-Wave 2X quantum computing machine has been figuring out algorithms at 100,000,000 times the speed that a traditional computer chip can”. Although the product is not available to the public, such efforts are a great step forward for the future of computing.
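As a toy illustration of superposition (a classical simulation in Python, not how real quantum hardware works), a single-qubit state can be stored as two amplitudes, and a Hadamard gate turns a definite 0-state into an equal mix of 0 and 1:

```python
import math

def hadamard(amplitudes):
    """Apply a Hadamard gate to a single-qubit state [a0, a1].
    Starting from [1, 0] (a definite 0), it produces an
    equal-weight superposition of 0 and 1."""
    a0, a1 = amplitudes
    s = 1 / math.sqrt(2)
    return [s * (a0 + a1), s * (a0 - a1)]

state = hadamard([1.0, 0.0])            # start in the 0 state
probs = [abs(a) ** 2 for a in state]    # Born rule: |amplitude|^2
# Measuring now yields 0 or 1, each with probability 0.5.
```

Note that simulating n qubits classically requires tracking 2^n such amplitudes, which is exactly why quantum hardware is expected to outpace classical chips on certain problems.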

So, if quantum computers are still in their infancy, why should we care so much about them? In the best scenario, they have the potential to blow right through obstacles that limit the power of classical computers, solving problems in seconds that would take a classical computer the entire lifetime of the Universe just to attempt, such as breaking encryption, optimization, and similar tasks. They are powerful, but things can go wrong if they are used for unethical reasons such as online theft and security breaches. In the Digital Age, it is crucial to teach the applications of quantum computers along with their ethical implications. Depending on that education, quantum computing can be a step forward for scientific research or a step backwards for the world economy.

Privacy and Data Sovereignty

Talking about the ethics of social media companies controlling our data made me think of an episode of Hasan Minhaj’s “Patriot Act” on Netflix. Minhaj discussed the same topics that were brought up in discussion on Monday, including the seeming cluelessness of lawmakers about the basics of technology, as well as the rich collections of data these companies have built up. His main conclusion is that regulation of these companies needs to change to protect our data and to treat these companies as what they really are: glorified ad agencies.

The discussion also made me reflect on the difficulty of protecting our information absent the necessary legislation. Because so much of modern life relies on the internet, it is almost impossible to go completely off the grid unless you have already lived as a hermit, or you are Ron Swanson from “Parks and Recreation.” I am extremely troubled by this. Technology is a basic fact of life at this point; it is unsettling that companies are able to keep an extremely detailed record of our locations, store every single one of our internet searches, and track what we interact with on social media, all in the service of creating a detailed profile to sell us more. This is made worse by the possibility that law enforcement could gain access to all of this data in the future and use it in incriminating ways.

As Minhaj suggested in his show, the best possible method forward is changing the way companies interact with our data, and, as mentioned at the end of class, convincing legislators requires organization. I do not have the technical knowledge to propose nuanced solutions to these problems, but in the meantime, I will try to remain extremely conscious of the way I use my devices and these platforms.


Overlooked weaknesses of the Internet and SNS

Social network services (SNS) such as Facebook and Twitter have a huge impact on society. SNS have influenced politics, economics, and our daily lives all over the world. Some countries in Europe have succeeded in democratizing, and SNS helped those processes. In Moldova, it cannot be denied that SNS helped gather the crowds and fueled the flames of protest. More than 900,000 people gathered through a Twitter hashtag, but as soon as the government blocked the internet, they lost direction and the will to fight. Two issues arose in my mind after reading the article.

First, should the government censor any content on SNS? I think there need to be some regulations to prevent sexual abuse, violence, and inhumane discrimination. However, the problem is that once the public gives the government the power to control the internet and SNS, the government can use that power for something other than its intended purpose, for example during political protests.

Secondly, I realized that people have overlooked a blind spot of the internet and SNS. What happens if we don’t have access to them, and what if that loss of access is intentional on some party’s part? This sounds unlikely, but it is a point worth thinking about. People rely hugely on computers, the internet, and SNS, and they are definitely helpful. But at the same time, we need to know that they also have limits. They are not something that operates automatically; instead, they are controlled and managed by someone with intentions. In the article we read, Moldovans did not know how to keep the protests going when the internet was blocked.

Merge vs. Quicksort (Blog)

In class today, we covered Complexity and Computability for computers. These concepts are essential to computer science because the process of writing algorithms is not limited to writing correct syntax. In fact, that is only the first step for programmers. Once the programmer understands the syntax of a language, he or she must then consider how much time and storage an algorithm requires, because every additional bit stored has a cost. These questions are not covered in introductory programming, but they make the difference between writing a program to count apples and finding the fastest route between points. Graph theory addresses the latter problem, and it has plenty of real-life applications, from walking between classes in college to how Google Maps gives you the fastest route to your destination. So, how is one algorithm better than another? Well, one algorithm is considered better if it takes fewer steps to complete.

In the context of our class, we covered two sorting algorithms: insertion sort and merge sort. Insertion sort takes on the order of n^2 steps to complete, while merge sort takes about n*log2(n) steps, n being the size of the list. From these figures, it is clear that merge sort scales better than insertion sort. However, is it the best? Well, not really. At the end of the day, the best sorting algorithm depends on the input (and who you ask). There is an algorithm known as Quicksort which can take fewer steps than merge sort in practice. The time complexity of Quicksort is O(n log n) in the best and average cases and O(n^2) in the worst case. Let’s discover how they compare:
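To see how quickly these growth rates diverge, here is a quick sketch in Python that evaluates the two step-count formulas from above for a few list sizes (the helper names are my own):

```python
import math

def insertion_steps(n):
    """Insertion sort: roughly n^2 comparisons in the worst case."""
    return n ** 2

def merge_steps(n):
    """Merge sort: roughly n * log2(n) comparisons."""
    return n * math.log2(n)

for n in (10, 1_000, 100_000):
    print(f"n={n:>6}: insertion ~{insertion_steps(n):,.0f} steps, "
          f"merge ~{merge_steps(n):,.0f} steps")
```

At n = 100,000, that is ten billion steps for insertion sort versus under two million for merge sort, which is why the n^2 term dominates everything else as lists grow.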

Like merge sort, quicksort is a recursive procedure (a function that calls itself). The key to understanding this algorithm is the pivot. Without going into too much detail, this is how it works: the goal is to rearrange the array so that all elements less than the pivot are to its left and all elements greater than the pivot are to its right; then each side is sorted the same way. It sounds simple, but the partitioning details take some care. Regardless, the important distinction is storage. Why might quicksort be preferable to merge sort? Quicksort is an in-place sorting algorithm, meaning that (aside from the recursion stack) no additional storage is needed to perform the sort. Merge sort requires a temporary array to merge the sorted halves, and hence it is not in-place, giving quicksort the advantage in space. As I mentioned before, more storage equals more money. So, in terms of price and speed, quicksort often outperforms merge sort.
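Here is a minimal sketch of that pivot idea in Python, using a Lomuto-style partition with the last element as the pivot (one of several common partition schemes; the names are mine, not from any particular textbook):

```python
def quicksort(a, lo=0, hi=None):
    """Sort list `a` in place by partitioning around a pivot,
    then recursing on each side. No temporary array is used."""
    if hi is None:
        hi = len(a) - 1
    if lo >= hi:
        return
    pivot = a[hi]                  # choose the last element as the pivot
    i = lo
    for j in range(lo, hi):
        if a[j] < pivot:           # swap smaller elements to the left side
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]      # put the pivot into its final position
    quicksort(a, lo, i - 1)        # sort everything left of the pivot
    quicksort(a, i + 1, hi)        # sort everything right of the pivot

data = [5, 2, 9, 1, 5, 6]
quicksort(data)
# data is now sorted in place: no merge buffer was allocated
```

Notice that the only extra memory is the recursion itself; merge sort, by contrast, would need a scratch list as large as the input to merge its halves.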

That was a lot of information, so what’s the importance? Well, the problem arises when programmers who are taught only syntax never consider storage cost and processing power. Unless we do something about it, potentially billions of dollars are at stake in storing the large quantities of information the Digital Age produces. To further the conversation, here are some additional questions to build upon: can we build faster and cheaper sorting algorithms? If so, how? Are there ethical implications of storing and sorting so much information at a faster rate? How can we ensure that all programmers understand the implications of the code they write?

Monday’s discussion

Monday’s discussion dealt primarily with ethical situations we had not yet discussed much in class, like the implications of privacy and large tech corporations selling our information. I want to take this time to say more about how these problems are also connected to the underrepresentation of women and other non-male groups in the technology sector. The industry also preys on these groups significantly more for location data, buying trends, and personal information. I feel this never gets enough attention because it’s happening to those most closely centered on the “margin” of who we consider important or worth saving.

I might also add that the preying on these groups perpetuates many harmful cycles of fake news, propaganda, and false reporting that lead to serious consequences. The need for these groups to really take over the technology sector is immediate, and the digital privacy disaster we are in right now can only be minimized if we restore agency to these groups in a commitment that is long overdue.

Reflecting on Monday’s discussion

What struck me most from the discussion on Monday was just how widespread and deeply entrenched the access of large companies, like Facebook and Google, to our ‘personal’ data has become. This trend worries me considerably. I feel that privacy rights are almost obsolete in an age where so much information is readily available digitally, often without the user’s knowledge. Luis’s request for his data from Google really substantiates this point. There really is an exhaustive record of everything we do on the internet.

Now of course, we all see this as troubling, and none of us wants our personal information to be collected in such a manner. However, we all continue to use Facebook, etc. for a variety of reasons. Whether it be convenience, comfort, or habit, we continue to be involved in social media and other digital enterprises that result in data collection. To be truly removed would mean living off the grid, something few people are willing to do. This fact led me to the unhappy conclusion that this type of infringement on privacy is an inherent result of living in the digital age. I hope this is not the case, but unless people make a concerted effort to address this issue, I’m doubtful anything will change soon. That being said, I do believe many people are concerned about this and are actively trying to change things for the better.

Project idea brainstorm

What the board said:

White board with project ideas


My best attempt to transcribe:

Python

  • TJ: Not sure what to do yet
  • Gray: Expand poem generator

Web (mostly without names)

  • Design a website that explores a computing concept (halting problem? traveling salesman?)
  • Create a website for my personal <something>
  • Luis: Personal portfolio
  • Georgia (I think?): A website about website design principles
  • Kate (I think?): Create a website/portfolio for my print media art class that houses all of my art (& the art of other people in the class) OR a youtube video that EXPOSES Google & other web-related things (maybe Alexa too?)?

Twine

  • Story about privacy & data sovereignty
  • TJ: Make my existing story more elaborate


Isaak: User Data Privacy

Jim Isaak explains how the personal information of more than 87 million Facebook users was obtained without authorization by the data firm Cambridge Analytica. Researchers at the firm accomplished this through a personality test taken on Facebook that evaluated each user’s psychological profile. To their surprise, this research established a clear relationship between a user’s Facebook activity and their personality profile. Cambridge Analytica then “micro-targeted” these users with messages designed to influence their political behavior, as with “Project Alamo” under President Trump’s campaign. It wasn’t only Facebook members who were affected; in fact, every website linked to Facebook allows the tracking of non-members’ data as well.

Towards the end of the article, Isaak lays out propositions for how to preserve privacy and protect data. The principles fall under four sub-categories: “public transparency,” “disclosure for users,” “control,” and “notification.” Regarding actual legislation, there are three current proposals in the works. The first, the Blumenthal-Markey bill, centers on the “opt-in” aspect of consent, while the second bill, put forth by Senator Amy Klobuchar, maintains similar elements but adds more on notification of changes. Lastly, California is pushing to further secure privacy rights for its citizens, hopefully setting a standard for how to address user privacy in the U.S., and the world, following Facebook and Cambridge Analytica’s inappropriate handling of user data.

Luis wrote the first paragraph, and Kate wrote the second.

The Secret History of Women in Coding

The New York Times article “The Secret History of Women in Coding” tells the story of how computer programming, contrary to its association with masculinity today, was once seen as “women’s work”. Coding was considered secondary to the hard work of building the hardware, which is why it was cast onto women. On the job, these women also became extremely adept at diagnosing hardware problems, since they were required to understand the machines so thoroughly. Practices such as compiling and debugging were pioneered by the women who worked with these computers; in fact, it was these women who discovered that code never really works the first time. Through the story of Mary Allen Wilkes, who became a computer programmer after being discouraged from pursuing a career in law, the article shows how open programming once was to newcomers. If you didn’t know how to code, you learned on the job. Despite the sexism and the pay disparities, Wilkes described the relationship between men and women on the job as actually quite inclusive and close-knit.

When the number of coding jobs exploded in the ’50s and ’60s, women were still at the forefront of computer coding. However, the year 1984 marked a significant change in how computers were used in science and in the culture at large. With the arrival of the personal computer, boys were favored to learn how to code: when a family bought a computer, it was almost always put in a boy’s bedroom, giving him a head start in coding before he even entered high school. This, in turn, began to shift who was seen as a desirable coder, leading to many of the cultural identities present in large tech firms today that favor these “hardcore” coders over everyone else and seek to reproduce that personality type. After 1984 there was a significant drop-off in the number of women majoring in computer science and actively pursuing it as a career after graduation. These numbers remained on a downward trend until recently; today about 26% of computer science majors are women. Even this is grim, however, with women making up only about 3% of the workforce at large industry firms such as Twitter.

The last section of the article dealt with present-day attempts to remedy the problem of exclusionary and homogeneous computing culture. The author mentioned efforts by Carnegie Mellon to make its computer science program more accommodating and welcoming to people with less experience, an effort which has proven very effective at bringing more women into computer science. The author also brought up various coding boot camps and other initiatives that have contributed to rising interest in coding and computer science across various segments of the population. The article concludes with an interview of three young, prodigious female coders who won a hackathon in New York City and who express the same frustration with the “boys’ club” atmosphere discussed earlier in the article.