PandaByte

A strange place where pandas and technology meet.

Lack of women in STEM, say it ain’t so! — May 24, 2018

Lack of women in STEM, say it ain’t so!

Note: Article has been updated since original submission October 26, 2015, for a college assignment. 

From my experience, the women I know in computer science chose it for the same reasons the men did: they like it and want to make a lifelong career out of it.  That is true of anyone and their major; one studies it because one likes it.  However, I do believe that stereotyping exists and that it may influence how women approach STEM fields.  For example, at the high school where I volunteer, the majority of my club is male, not female.  Whenever girls stop at the door, we invite them in, but they decline the idea of hanging out in a technology or robotics club.  Maybe it is truly a lack of interest, but I think it may be the typical stereotype of STEM, robotics, and technology being nerdy.  At least, that is how it felt at the time.  Still, based on what I have seen in my age group, I truly believe the gap between male and female representation in STEM fields will narrow.

The stereotypes around working in STEM fields are not what dissuades women from them.  Being in STEM has its benefits, such as the money, the research, and the innovation.  There tends to be a stigma that women avoid STEM fields because of their non-feminine, “nerdy” nature, but I do not find this to be true.  Studies have found that women are “underrepresented” in comparison to men, but it should also be taken into account which jobs have traditionally been female and male.  Factors such as test scores or minority status should not be the only ones used to gauge female interest in STEM fields; it should also be considered that women often choose other paths and careers.  It is not fair to say that women are necessarily “underrepresented” if similar trends appear across other disciplines.

I believe that women choose other fields in which men are underrepresented.  For example, at Loyola University, women dominate the nursing program even more heavily than men dominate the computer science program.  This pattern could be studied across other fields and would probably show similar results.  I do not think this underrepresentation is caused by ability or stereotyping, but rather that women and men choose the roles they think they are meant to pursue (whether they are conscious of it or not).

For instance, the traditional role of a woman has been to stay at home, watch the kids, or hold some job related to that.  It is only recently (the past few decades) that women have increasingly had the chance to go to school, hold a job, and so on, while men have had that chance since the beginning of time.  Ergo, women have had less opportunity to make a statement in a field.  Being able to participate in STEM fields is not a matter of ability, but of opportunity.  Furthermore, the STEM fields are relatively “new” in terms of importance and demand, so women could be taking longer to choose them relative to the fields they have long been able to choose.  Other disciplines such as law or medicine have existed far longer, so their female-to-male ratios are more balanced.  As STEM becomes more established and expected as a career choice, women will come to choose STEM fields at the same rate men do.  I definitely see it happening in my future.

Authenticating Who You Are —

Authenticating Who You Are

Note: Article has been updated since original submission September 2014 for a college assignment. 

When boarding an airplane, people are asked to display some sort of identification.  Only certain documents are accepted.  From passports to drivers’ licenses, each establishes that the person is who they say they are.  This is a rudimentary form of validating identity.  A person who is allowed to board the plane has had their identity checked; they were able to board because the security guard found their document to be genuine.  People agree to these demands in order to board the plane, and most agree that this is done to keep unwanted persons off the plane.  This behavior, and the reasoning behind proving identity, is the basis of user authentication.  It is a widely used practice and an integral part of people’s lives.  The following discussion focuses mostly on user authentication in a digital context.  User authentication methods have developed in order to provide a secure environment for accessing information and using applications, and their role will only grow as more information is stored digitally.

User authentication is substantiating a claim of identity.  The user has to provide the means to show that their identity matches what the receiver of the information knows.  Generally, there are two parts to establishing identity: identification and verification (Rountree 14).  Identification associates the user with an identity that is hopefully theirs.  Verification is the accepted acknowledgment that the user matches the identity.  Together, these steps form the basic method of authentication.  There are several basic ways to create identification: the user knows, possesses, is, or does certain things to prove who they are (Stallings 452).  Common forms of identification are passwords, fingerprints, and the like.  Determining which combination of methods to use is challenging; the goal is to preserve security and to prevent fraud.
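To make the “something the user knows” factor concrete, here is a minimal sketch, in Python with hypothetical function names, of how a system might store only a salted hash of a password and verify a later claim against it; it illustrates the identification/verification split and is not any cited standard’s scheme.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    """Derive a salted hash so the plaintext password is never stored."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Verification: recompute the hash for the claimed identity and compare."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)

# Identification is the claimed username; verification is the password check.
salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("a wrong guess", salt, stored))                 # False
```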

Authentication is needed because technology users want to believe that they can trust someone or something with control of their information.  Perhaps they wish to access a cloud system, or they want to see their banking information online.  Generally, people want to know that no one else has access to the same application, especially if it is private.  Since digital applications cannot verify the user through physical meetings, as at an airport, users must provide an identity check.  One type of authentication involves a single party: only the one sending the information needs to substantiate who they are (Stallings 454), and e-mail is a common example.  The more common practice is mutual authentication, in which two or more parties each attempt to prove a valid identity to the other (Rountree 10).  It requires that both communicate by sending some proof that the other is who they say they are.  People want to know that whatever they are sending their credentials to is the party they intend, and vice versa.  Protocols are set in place to ensure a safe exchange.  To do so, the parties exchange something they both know; a digital representation of this could be a key or sequence of numbers that must match what the receiver of the information knows.  For example, a Key Distribution Center (KDC) creates a session key that the user can use with the other party (network, server, etc.) (Stallings 455).  It allows the network or server to recognize that the user has access.  This is a very basic sketch of what happens in authentication protocols between two parties.  IEEE 802.1X outlines the authentication protocols required by port-based networks and explicitly states that “possession of master keys is proof of mutual authentication in key agreement protocols” (“Port Based Network” 29).  The standard demonstrates mutual authentication practices.  Another well-known use of mutual authentication is the Kerberos system.  This authentication service from the Massachusetts Institute of Technology (MIT) negotiates the authentication process between users and services (Rountree 16) and depends on symmetric encryption.  Its efficiency comes from the fact that users may access whatever servers Kerberos is associated with, which amounts to single sign-on (SSO) (Rountree 18).  There are common and necessary standards for mutual authentication; they are how authentication services are regulated.
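The toy sketch below is not Kerberos or IEEE 802.1X, but it illustrates the core idea of mutual authentication with a shared session key of the kind a KDC might distribute: each side proves it holds the key by answering the other side’s fresh challenge, and both checks must pass.

```python
import hashlib
import hmac
import os

def prove(session_key, nonce):
    """Prove possession of the session key by MACing the peer's challenge."""
    return hmac.new(session_key, nonce, hashlib.sha256).digest()

# A KDC-like third party would hand this key to both parties beforehand.
session_key = os.urandom(32)

# Each side challenges the other with a fresh random nonce.
client_nonce, server_nonce = os.urandom(16), os.urandom(16)

server_proof = prove(session_key, client_nonce)  # server answers the client
client_proof = prove(session_key, server_nonce)  # client answers the server

# Mutual authentication succeeds only if both proofs check out.
client_trusts_server = hmac.compare_digest(server_proof, prove(session_key, client_nonce))
server_trusts_client = hmac.compare_digest(client_proof, prove(session_key, server_nonce))
print(client_trusts_server and server_trusts_client)  # True
```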

The types of verification a user may employ for authentication vary.  Using several of them together is known as multifactor authentication: different types and amounts of authentication can be combined to establish an identity (Rountree 23).  For example, a person may be asked to give a password along with a fingerprint to access a computer.  That process takes two types of information to validate the user’s identity, something the user knows and something the user is.  It could also be as simple and common as just typing a password.  Ideally, it is accepted that more factors of authentication create higher security (Rountree 24), since they help prevent false users from passing one or more tests.  With multiple factors, opponents have a harder time accessing another user’s information.
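As a rough illustration of two factors working together, the sketch below combines a password check (something the user knows) with a simplified time-based one-time code in the spirit of TOTP (something the user has); the secret, the six-digit format, and the thirty-second step are assumptions made for the example.

```python
import hashlib
import hmac
import struct
import time

def one_time_code(secret, interval=30):
    """Simplified time-based code: HMAC over the current time step, truncated to six digits."""
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    value = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{value % 1_000_000:06d}"

def authenticate(password_ok, submitted_code, secret):
    """Both factors must pass: the knowledge factor and the possession factor."""
    return password_ok and hmac.compare_digest(submitted_code, one_time_code(secret))

secret = b"shared with the user's authenticator app"
print(authenticate(True, one_time_code(secret), secret))  # both factors pass
print(authenticate(True, "000000", secret))               # second factor fails
```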

Some methods of user authentication are preferred over others.  Each situation is different, hence authentication methods vary.  For example, it is standard to use a username/password when accessing an email account; it would not be practical to use biometric authentication for one, since such hardware has limited availability (Stallings 452).  Yet biometric authentication may be useful for accessing a phone.  Using the most suitable kind of user authentication, or several kinds, creates a better safety net against unwanted people.  It has made people more conscious of what types of access they are protecting.  From limiting Wi-Fi access to bank accounts, many of the things people use require a specific way of authenticating oneself.  Deciding which type of authentication to use will vary depending on how important the access is.

Non-physical forms of authentication came about because of the lack of identification through physical attributes.  Digital environments promote a sense of anonymity, but that anonymity is difficult to protect (Shinder).  Generally, when one wants to prove who they are, they appear before the other party, and one’s physical features can substantiate that they are who they say they are.  On the internet, no party can identify someone based on physical features, so authenticating identity is crucial.  It gives people a secure digital environment in which to store information or access something (Rountree 7).  It is therefore necessary to create a claim of identity through non-physical attributes.  Authentication allows users to keep their information and applications exclusive.  People either store information for later use or want to access a certain application, and only they know the code required to access it.

Everyone has valuable information to their name.  Usually, if the information is truly valuable, say private pictures, people want to put a lock on it: they store it in some chest and use it later, and to access the things stored they need a key.  The same concept applies to digital information and applications.  The internet demonstrates how valuable information is.  A major use of the internet is to share information (Rountree 7), but that comes with a caveat: although there is a bounty of information and applications, there is little credibility or trust in the communication paths.  A simple search for a name can pull up things that one does not want others to know.  For example, if one’s information is associated with a social media site, someone could easily find an address and a phone number.  It is also important to note that the internet has many applications, such as online banking.  Many use the internet as their “real selves”; they act as if they were doing the same actions in person (Shinder).  Hiding all the intricate details of a person’s life is difficult when that information is constantly being used online.  People go through the hardships of user authentication in order to limit access to such information.

Yet even with different types of authentication processes, the approach has its flaws.  It is possible to find a hole that gives unwanted users access: they find ways to fabricate user authentication without having the user.  For example, identity theft is a persistent worry.  Identity theft is when someone or something takes a person’s “identity” (Stallings 453), and it usually goes unnoticed.  Someone obtains a person’s information by finding out how that person authenticates themselves.  A simple example is using somebody else’s username/password to access their information; the username/password grants access to a credit card account.  It can seem mysterious how the bad guys obtained the username/password.  One method of obtaining the username/password, or the session key for access, is a brute-force attack, which amounts to guessing what the key could be (Stallings 33).  This can range from pulling candidates from a large list of possible passwords to generating guesses with an algorithm.  Although this method seems inefficient, it works from time to time.  The attacker may have to check thousands of combinations to get something right, but with enough persistence they can often recover the information.
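The sketch below shows why guessing can work when a password is weak and protected only by an unsalted hash; the leaked digest and the tiny word list are entirely hypothetical, but the principle scales up to the huge candidate lists attackers actually use.

```python
import hashlib

def weak_hash(password):
    """An unsalted, fast hash like this is what makes guessing attacks cheap."""
    return hashlib.sha256(password.encode()).hexdigest()

stolen_digest = weak_hash("sunshine1")  # pretend this digest leaked in a breach

# The attacker simply hashes candidates from a word list until one matches.
word_list = ["password", "123456", "qwerty", "sunshine1", "letmein"]
recovered = next((word for word in word_list if weak_hash(word) == stolen_digest), None)
print(recovered)  # "sunshine1" -- recovered after only a handful of guesses
```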

Not all problems of user authentication are associated with the authentication process itself.  An unwanted user may not want to access someone’s information as that person at all; they may prefer to alter something already in existence.  Replay attacks are a common attack on a piece of information’s integrity (Stallings 453).  The attacker captures the information the sender is in the process of sending, manipulates it (copies or changes it), and slips it back in unnoticed while posing as the original sender.  This case is difficult to monitor because once the username/password has been authenticated, the information is vulnerable, and there is little that can be done at that point.
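One common countermeasure, sketched below with hypothetical names, is for the receiver to remember the nonces of messages it has already accepted and to refuse any repeat, so that a captured message cannot simply be replayed even though its authentication tag is still valid.

```python
import hashlib
import hmac
import os

seen_nonces = set()  # the receiver remembers identifiers it has already accepted

def accept(message, nonce, tag, key):
    """Accept a message once: a valid tag with an already-seen nonce is treated as a replay."""
    expected = hmac.new(key, nonce + message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return False  # forged or corrupted
    if nonce in seen_nonces:
        return False  # replay: the tag is valid, but the nonce was already used
    seen_nonces.add(nonce)
    return True

key = os.urandom(32)
nonce = os.urandom(16)
message = b"transfer $100 to account 42"
tag = hmac.new(key, nonce + message, hashlib.sha256).digest()

print(accept(message, nonce, tag, key))  # True  -- first delivery is accepted
print(accept(message, nonce, tag, key))  # False -- the identical replay is refused
```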

Another problem associated with user authentication is time.  Time does not seem like an obvious issue, but it has a major bearing on how trustworthy information is.  Time is usually a precaution that makes information more secure, in the form of a timestamp (Stallings 453).  The information a person sends or stores may be associated with a time to further prevent others from accessing it.  The receiver of the information knows that the piece is associated with a certain time frame; if the time frame does not match what they are expecting, the information has probably been tampered with.  This seems secure, yet there are ways to bypass the feature.  For starters, the enemy may take advantage of the time it takes for the information to sync with the local time (Stallings 454): the opponent could access the information if they knew that the clocks of the sender and the receiver are off.  Another possible fault lies within the processors.  The machine doing the processing may have a glitch that prevents it from properly syncing with the other party’s clock at the correct time (Stallings 454), and the receiver or sender has minimal control in this case.
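A minimal sketch of the timestamp idea follows: the receiver accepts a message only if its timestamp falls inside an assumed tolerance window, which is exactly where clock skew between sender and receiver can be exploited if the window is too generous.

```python
import time

MAX_SKEW = 120  # seconds of clock drift the receiver tolerates (an assumed value)

def timestamp_is_fresh(sent_at, now=None):
    """Accept a message only if its timestamp falls inside the allowed window."""
    now = time.time() if now is None else now
    return abs(now - sent_at) <= MAX_SKEW

print(timestamp_is_fresh(time.time()))         # True  -- just sent
print(timestamp_is_fresh(time.time() - 3600))  # False -- an hour old, likely stale or replayed
```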

To avoid the timestamp problem, an alternative is a challenge/response system.  This mode of authenticating requires that both parties be “present” to communicate with each other (Rountree 15).  The two parties have agreed beforehand to respond in a certain way when the sender starts the authentication process: one party sends a challenge and the other sends a response.  It differs from a simple username/password exchange because the challenge/response can vary a great deal.  The main issue is overhead (Stallings 454): rather than handling everything at the moment of authentication, both parties must anticipate and take part in an extra exchange.  Generally, there will be cases where the user cannot partake in the challenge/response, or the opponent may have already figured out what the response will be.  Users may find that a challenge/response system is inefficient for basic usage.  Combining timestamps with challenge/response authenticators is ideal, but it may create even more overhead (Stallings 455).  Since the authentication process then requires both a timestamp and a challenge response, both parties must hold more information to work together.  This may even help enemies: if they figure out one piece of the authenticator, say the timestamp or the challenge/response, the missing part could become easier to deduce.
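Sketched below, with assumed names, is the bare shape of a challenge/response exchange built on a shared secret: because the verifier issues a fresh challenge every time, a response captured by an eavesdropper is useless for the next attempt.

```python
import hashlib
import hmac
import os

shared_secret = os.urandom(32)  # agreed on beforehand by both parties

# Verifier side: issue a fresh, unpredictable challenge for every attempt.
challenge = os.urandom(16)

# Claimant side: the response depends on both the secret and this challenge.
response = hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()

# Verifier side: recompute the expected response and compare.
expected = hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()
print(hmac.compare_digest(response, expected))  # True, but only for this challenge
```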

Authentication processes are not without flaws, but they do a great deal of good.  Many people use some type of technology that requires an authentication process.  Naturally, there are times when people have access to certain types of information and other times when they do not.  Perhaps they wish to join a Wi-Fi network or a printer-sharing group.  Whatever the case may be, the group is exclusive, and one must be authenticated in order to access it.  Authentication protocols are in place to protect these admissions into groups.  IEEE 802.1X mandates a basic safety protocol that all port-based networks must abide by (“Port Based Network” 20).  Port-based networks are the entities that authorize a user’s access to the network.  The servers follow specific protocols in order to be deemed a secure communication line, which prevents illegal transmissions, data loss, and data intrusion (“Port Based Network” 19).  There are multiple mandates, protocols, and guidelines highlighted in IEEE 802.1X; for example, the Extensible Authentication Protocol (EAP) requires that networks support authentication servers (“Port Based Network” 65).  Authentication requirements work together with authorization protocols in order to create a secure line.  Usually, these requirements are met without the user knowing.  Although these standards operate quietly in the background, they are applied constantly, every day.

A relatively new method of user authentication is federated identity management.  This concept outlines the importance of shared user authentication protocols (Stallings 478): a single set of authentication standards applies across multiple companies, organizations, and so on.  It reduces inefficiencies such as repetition and wasted time (Stallings 479).  Common authentication protocols allow users to apply essentially the same credential to many things.  Federated identity schemes separate authentication from authorization, so individual providers and applications do not deal directly with a user’s credentials.  The system is checked regularly and has more requirements for accessing the network, but it creates a more efficient use of shared networks.  It is becoming more and more widely available, notably through Google, Yahoo, Facebook, etc. (Rountree 38).  Yet there are problems with this type of system.  One is limited support: few technologies and applications yet enable the features necessary for federated authentication (Rountree 35).  Another common issue is that federated technologies are still expensive compared to longstanding authentication systems like Kerberos (Rountree 35).  It will be a while before every application can use a federated identity scheme.  Still, with issues of identity theft and a lack of organization, a centralized approach to identity management may prove useful (Shinder).  A common authentication scheme will open up many more applications to users, and all it will take is a single credential.
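As a much-simplified sketch of the federated idea, an identity provider could sign an assertion about a user once, and every relying service could verify that signature instead of handling the user’s credentials itself.  Real schemes such as SAML or OpenID Connect use public-key signatures and far richer claims, so the shared key, token format, and names below are illustrative assumptions only.

```python
import hashlib
import hmac
import json

IDP_KEY = b"key shared by the identity provider and the services that trust it"

def issue_assertion(username):
    """The identity provider authenticates the user once and signs a claim about them."""
    claim = json.dumps({"sub": username, "idp": "example-idp"})
    tag = hmac.new(IDP_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + tag

def service_accepts(assertion):
    """Any relying service verifies the provider's signature instead of handling credentials."""
    claim, _, tag = assertion.rpartition(".")
    expected = hmac.new(IDP_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return bool(claim) and hmac.compare_digest(tag, expected)

token = issue_assertion("julie")
print(service_accepts(token))  # one token could be accepted by mail, storage, and calendar services
```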

People will become more dependent on technology as more applications and uses are created, and providing a credential should feel as harmless as making a bet with no money at stake.  Digital information is constantly being accessed and stored, and in order to preserve its integrity, user authentication is a must.  Most protocols for validating an identity depend on multiple parties that share and exchange information.  As seen with federated identities, one credential can be used to access multiple applications.  Even now, there are many authentication protocols in place, such as IEEE 802.1X, to provide a secure environment.  Regulation of authentication is now the key to future technological advances.  Further research can create even stronger authentication services, and a more efficient, secure, and stable authentication process will surely be seen soon.

 

Works Cited

“Port Based Network Access Control.” IEEE 802.1X™-2010. IEEE Standards Association, 2010. 19-143. Print.

Rountree, Derrick. Federated Identity Primer. Burlington: Elsevier Science, 2012. Ebook Library. Web. 4 Sep. 2014.

Shinder, Deb. “Cybercrime and the online problem of identity verification”. TechRepublic. N.p., 29 Feb. 2012. Web. 04 Sept. 2014.

Stallings, William. “User Authentication.” Cryptography and Network Security: Principles and Practice. Sixth ed. Pearson Education, 2014. 33,451-490. Print.

Sprouting Engineers Need a Soil Bed —

Sprouting Engineers Need a Soil Bed

Note: Article has been updated since submission on September 28, 2014. Interview based on class assignment. 

Click, click, click, and clack.  The sounds are more than common in a room filled with computers.  Worn pads are evident on many keyboards.  Cycling intervals of intense strokes intermixed with silent pauses are familiar.  Some people converse about a current project, while others are dedicated to finishing the job.  This is a workspace of software engineers.

Erik Allar is currently employed at Sprout Social, a social media software company.  He describes his workspace as casual.  It is not a monotone space, such as a cubicle; there are couches, ping pong tables, bean bag chairs, and more.  His favorite thing about the space is the snacks conveniently located in the kitchen.  The entirety of this space is where budding minds create applications.

These applications entail managing social media for big-name companies; notable clients include GrubHub, Urban Outfitters, and Spotify.  Sprout Social’s products provide analytics and publishing tools for monitoring a company’s social media presence, and they are available in web browsers and mobile applications.  Since social media is changing how companies interact with consumers, it becomes necessary to find better ways to manage those interactions.  Sprout Social is an answer: it creates a better experience for customers and allows businesses to flourish.  Erik found that this company was doing what he wished to do.

Erik Allar graduated from Michigan State University in 2010 with a Bachelor of Arts in finance.  He was a bright graduate itching to grab on to life, yet his degree did not lead him where he thought it would.  The dreams of success and prosperity were not turning out how he wanted them to.  At this point in his life, he did not have an exact goal or path, but he found finance restricting; it was not as enjoyable as he had thought.  The creativity associated with programming lured Erik into his new field, and he mostly taught himself how to program.  Erik realized “how much potential there was for me to create”.

He further studied programming at Dev Bootcamp, an intensive program for learning computer programming, where he gained the skills necessary to begin his career.  Starting with back-end applications, Erik did not believe he would eventually end up where he is now.  In November 2013, Erik began working for Sprout Social.  A friend had told him about his own experiences there, and Erik was hooked.  He was interested in the types of problems Sprout Social was addressing and decided to interview.  Yet the most important thing that influenced his decision was the people.  Erik found that they were “very kind”, “down to earth”, and “enthusiastic about what they were doing”.  As he describes it, “There was an infectious energy at Sprout and I wanted to be a part of it.”

As part of the Sprout Social team, he begins his day with a simple hello.  Each day typically starts with some coffee and oatmeal at 8 or so.  Next, Erik attends any stand-ups, meetings for current projects.  Then he gets to coding.  Currently, at Sprout Social, he is a software engineer with a concentration in iOS applications: Erik maintains and builds iPhone and iPad apps.  His most-used programming languages are Objective-C, Python, Java, and a few others.  He types away until the next part of his day.  Lunch usually involves lots of chicken and colleagues chatting about current projects.  In the afternoon there is an occasional meeting, but usually Erik is programming again.  When the day comes to an end, Erik leaves around 4:30 and takes the L to avoid the blunders of Chicago traffic.

A typical work week is the standard Monday to Friday set, but it is possible for Erik to work from home.  Yet for Erik, going to Sprout for the people is one of the best things about the job.  Even there, he can be an independent or a collaborative engineer; there are times when working alone is the best way to finish the intended project.  His team is incredible and high energy, so working is a good time.

Yet there are times when he is challenged.  Ironically, Erik considers his greatest challenge to be not knowing any mobile development before becoming an iOS engineer.  No problem comes without kinks, but obstacles do not stop Erik.  Although they may be difficult at times, working within a team while solving problems is fun.  According to Erik, “We work on some very unique problems at Sprout, so we need to come up with some very interesting solutions”.  If there is ever a time when Erik feels the need for help, he talks to other iOS engineers and together they solve the problem.  Collaboration is a key characteristic of the engineering teams at Sprout Social.

According to Erik, it is necessary to handle frustration and have patience.  Being able to learn quickly and efficiently is an important skill for someone in computer science.  The discipline is about solving hard problems, which allows intuitive minds like Erik’s to grow.  Erik values being able to work out these problems despite being frustrated.  He has learned that “Eventually you’ll figure it out, but it takes patience.”

It has become important to be able to manage his time and improve his focus, but these qualities apply to life, not just work.  He does not feel that Sprout Social has “changed” him, but rather he feels energized as a person.  “Most of the stuff I did before Sprout is still there, it’s just been amplified because of the energy, support, and balance I find in working at Sprout”.  He would say that “developing a sense of trust” and “following instincts” are what led him to Sprout Social.  If he had chosen to continue finance, he may be a different person.  The intense and hard work at Sprout Social is unmatched.

Ecstatic is a one-word description of Erik.  “I love my job, work at a phenomenal company, and most of all have met tremendous people, the best.”  It is because of the space at Sprout Social, where hard problems are solved despite such a pleasant atmosphere.  The soft couches and the rows of computers create a haven for software engineers to grow.  Although there are times when assignments are uncomfortable, Erik will take them on.  For him, “If you are comfortable, you are doing something wrong”.  The way to learn and discover new things is being outside his comfort zone, despite the luxury of Sprout Social.  This is the way of a software engineer.

Inspection is the basis of quality control… —

Inspection is the basis of quality control…

Note: Article has been updated since original submission March 26, 2015 for a college assignment. 

Summary:

The following report outlines the capabilities of software inspections.  Software inspections can be used as a supplementary error-finding process in the software development scheme.  They may even reduce the cost and time consumed by other phases of the software development process.  Even more so, software inspections can improve the overall software quality of a system in ways that testing does not cover.  Inspections are practical and are arguably a more reliable tool for finding errors in software.

 

Key Things Learned 

  • An inspection is only as good as the person inspecting. It is hard to do a good inspection.
  • Software inspections are not a replacement for any phases of the software development process, such as testing, but a supplement that makes the process more efficient and effective.
  • When done regularly, software inspection is a formal process, conducted at set intervals with specified reports that are to be documented.
  • Inspections are not used as often as they could be. Many people skip over software inspections in favor of testing, although inspections can reduce the number of defects just as much as testing, if not more.

Software, Tools, Apps, Used and Evaluated

There are many automated tools for conducting software inspections; some are even included within an integrated development environment (IDE).  Examples include CodeSurfer, C++ Test, and TestTrack Pro (“Tools”).  But the more practical form, and the focus of this report, is self-inspection: the people working on the software inspect it themselves.  This is a formal process rather than an informal one; as stated earlier, software inspections are meant to be formal (Stellman and Greene).  Heavy use of software inspections reduces the cost and time inefficiencies of other phases of the software process, such as testing and implementation.  Figure 1 is an example of how a software inspection process may be conducted in a formal setting.

Figure 1.  Chart of Inspection Meeting Script

[Image: softwareinspect.png]

Source: Stellman, Andrew, and Jennifer Greene. Chart of Inspection Meeting Script. Digital image. Building Better Software. O’Reilly, 10 Feb. 2015. Web. 26 Mar. 2015.
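As a simplified illustration of the automated side mentioned above (nothing like the full capability of tools such as CodeSurfer or C++ Test), a team could write a small inspection script of its own that flags suspicious patterns before a formal review; the two rules and the command-line usage here are hypothetical.

```python
import re
import sys

# Two toy inspection rules; a real checklist is far richer and project-specific.
RULES = [
    (re.compile(r"\bTODO\b"), "unresolved TODO left in the code"),
    (re.compile(r"\bprint\("), "stray debugging print statement"),
]

def inspect(path):
    """Report every line of the file that trips one of the inspection rules."""
    findings = []
    with open(path, encoding="utf-8") as source:
        for number, line in enumerate(source, start=1):
            for pattern, reason in RULES:
                if pattern.search(line):
                    findings.append(f"{path}:{number}: {reason}")
    return findings

if __name__ == "__main__":
    for finding in inspect(sys.argv[1]):  # usage: python inspect.py some_module.py
        print(finding)
```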

 

Findings:

1.1 Software Inspections
A software inspection is a process of evaluating the validity of a software system.  This includes, but is not limited to, source code, design models, and requirements (Sommerville 666).  The process is done to catch errors before testing and implementation of the system.  Inspections are a way of finding defects at relatively low risk (lower cost, less time, etc.).  They can be done by self-inspection or with an automated tool.

1.2 Advantages
Conducting a software inspection, such as on source code, helps uncover errors that would otherwise be hidden by other errors.  Errors arising from interactions within the system cannot easily be found by software inspections, but errors that stand on their own can be found and fixed before any testing (Sommerville 208).  If an error is found by a software inspection, a developer will know where it is and most likely how to fix it.  This minimizes the number of defects left for testing, since the number of errors will already have been reduced (Radice).  The same cannot be said of errors found by testing, where one error can be hidden by another.  Another advantage is that inspections can reduce cost: they do not necessarily require more testing or tools, just knowledgeable persons who can evaluate code.  Also, software inspections increase the likelihood of finding an error earlier rather than later, which tends to be cheaper (Radice).  Possible errors do not all show up at the same time during the software cycle, so there are always recurring costs; inspections help reduce the need to repeat costly processes such as reintegration.  Figure 2 outlines the entire software activity cycle and the number of defects throughout it.  Note that with inspections, the number of defects caught is higher throughout the complete cycle than without them.

Figure 2. Example of Defect Removal With Inspections

[Image: softwareinspect2.gif]

Source:  Radice, Ron. Example of Defect Removal With Inspections. Digital image. Improve Software Quality with Software Inspections. Methods & Tools, 1 Jan. 2002. Web. 26 Mar. 2015.

Inspections also consider things beyond the source code itself.  It could be the system architecture or the style of programming that is inefficient or inadequate for the needs of standards, clients, etc.  The software quality of a system can be improved when using inspections because they help find defects that cannot be found by testing or automated tools.  Inspections can improve maintainability, reusability, and efficiency without having to conduct a test (Sommerville 209).  Also, when inspecting something such as the architecture for defects, it is not possible to “test” it as easily as source code.  Furthermore, software inspections help promote understanding of the software, since they force developers and other workers on the project to comprehend what is going on in the system (Radice).  This may help develop the abilities of junior developers and workers.

1.3 Disadvantages
Software inspections rest on the idea that one knows how to evaluate software.  That is, a person must be knowledgeable about how the source code or system should look before it is ever implemented.  If one is not knowledgeable about how the source code is supposed to work, a good inspection will be hard to conduct (Radice).  A high level of skill in programming knowledge and style, perhaps in algorithms or language syntax, is necessary to judge whether the system is workable as written.  Since software inspections are not as widely practiced as they could be, there is only a limited pool of people who know how to conduct one (Sommerville 666-667).  Even then, many developers on projects are not necessarily masters of what they are programming and may depend on testing to find errors rather than doing self-inspections of the system.  Another issue is that the lack of use of software inspections may be due to social concerns.  Since inspections are not widely used, managers may assume the process consumes time and money, when it actually does the opposite (Radice).  Also, since there is little documentation showing how well software inspections work, many do not see them as absolutely necessary.

 

Problems / Questions / Further Work

The report has outlined the basic capabilities of software inspections.  In order to learn more about the process of software inspections itself, they must first be widely implemented.  If anything is to be done to implement software inspections, developers must be more willing and insistent about conducting them.  Software inspections produce a better outcome when done than when not done at all.  The benefits outweigh the costs, and software inspections should be seen as a reputable process in the software development plan.  Again, a software inspection is only as good as the people conducting it.  With that said, there are also automated software inspection tools, not covered in detail in this report, that may ease the transition from rarely using software inspections to using them as standard practice.

 

Change Control and Updates

  1. Version 1.0 (original) <<Julie Leong>>, <<3/26/2015>>
  2. Version 2.0 (updated) <<Julie Leong>>, <<5/24/2018>>

 

References  

Radice, Ron. “Improve Software Quality with Software Inspections.” Improve Software Quality with Software Inspections. Methods & Tools, 1 Jan. 2002. Web. 26 Mar. 2015. <http://www.methodsandtools.com/archive/archive.php?id=29>.

Sommerville, Ian. “Software Quality.” Software Engineering. 9th ed. Boston: Pearson, 2011. 208-209, 663-668. Print.

Stellman, Andrew, and Jennifer Greene. “Applied Software Project Management – Review Practices.” Building Better Software. O’Reilly, 10 Feb. 2015. Web. 26 Mar. 2015. <http://www.stellman-greene.com/applied-software-project-management/applied-software-project-management-review-practices/>.

“Tools.” Inspections. Cyber Security & Information Systems Information Analysis Center, 1 Jan. 2015. Web. 26 Mar. 2015. <https://sw.csiac.org/databases/url/key/165/169>.

Importance of finding the root —

Importance of finding the root

Note: Article has been updated since original submission April 9, 2015 for a college assignment. 

Summary:

The following report covers what root cause analysis is and how it can be applied to software engineering.  Root cause analysis, or RCA, identifies the root cause of a problem; by fixing the root cause, the defect and its associated defects cease to occur.  It can be used as a supplementary process to identify and fix errors that may arise.  The software development cycle can be improved by using RCA, since it may increase the efficiency of fixing errors in a system.  In other contexts, RCA is used as a quality management and failure analysis tool to understand errors; applied to software development, it may yield similarly positive results.

Key Things Learned

  • Root cause analysis has a series of steps that are to be followed, almost like the scientific method.
  • Root cause analysis can be used as a tool to fix errors in software.
  • Asking “why” continuously about a defect can lead to the root cause.
  • Fixing errors in software development may be made easier by using root cause analysis.
  • Root cause analysis is a quality management tool and failure analysis tool for multiple contexts and should be further applied in software development.

Software, Tools, Apps, Used and Evaluated

The most popular and general tools for root cause analysis are cause mappings that show the relationships between defects, possible causes, and their associated factors.  They are used to help identify, in plain view, which cause is the ultimate cause (root cause) of a defect (Otegui 188).  One example is the fishbone diagram, which breaks causes down into branches and associates attributes or factors with each cause.  This cause-effect diagram is read from right to left, as in the Japanese language, since its creator was Kaoru Ishikawa (“Root Cause Analysis”).  Figure 1 is a computer-based fishbone example.  The idea is that a server has crashed, and the ends of the bones are possible causes such as the method, the workers, the technology, and the policy.  Along each bone are the associated factors, or more detailed descriptions, of each major bone.  By representing the data from a root cause analysis this way, it may be easier to ask “why” at each part and understand which bone holds the ultimate root cause.

Figure 1. Server Crash Fish Bone Diagram.

[Image: rca]

Source:  Dhandapani, Dhanasekar. Fishbone Diagram Part 3. Digital image. IBM DeveloperWorks. IBM, 21 June 2004. Web.
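To show how a fishbone can be captured outside of a drawing, here is a small sketch in which each bone is a category and each entry a contributing factor for the server-crash defect; the categories echo the figure, but the specific factors are invented purely for illustration.

```python
# Hypothetical fishbone for the "server crashed" defect: each bone is a
# category of causes, and each entry is a factor hung on that bone.
fishbone = {
    "Method": ["backup job scheduled during peak load"],
    "People": ["on-call engineer unaware of the recovery runbook"],
    "Technology": ["disk nearly full", "outdated firmware"],
    "Policy": ["no capacity-review policy in place"],
}

# Walking the diagram amounts to asking "why" of every factor on every bone.
for bone, factors in fishbone.items():
    for factor in factors:
        print(f"{bone}: why did '{factor}' contribute to the crash?")
```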

Another possible representation of root cause analysis is cause-effect mapping, a plainer text view of the analysis known as the five-“why” approach, attributed to Sakichi Toyoda (“Root Cause Analysis”).  Note that this diagram is also read from right to left.  Instead of breaking the data collected about the defect into a fish structure, as in a fishbone diagram, a cause-effect mapping is more like a chain of events.  Figure 2 is a generic outline of the five-why approach.  The process begins with a defect, which is questioned with a “why” that leads to another “why”, and so on, until one concludes that the final cause has been reached.  The diagram also allows for further extensions: the questioning can technically go on longer than five whys.

Figure 2.  Five why approach.

[Image: rca2]

Source: 5 Whys on a Cause Map. Digital image. ThinkReliability. ThinkReliability, 2011. Web. <http://www.thinkreliability.com/Root-Cause-Analysis-CM-Basics.aspx>.

Either diagram can be used in root cause analysis.  They are visual aids that have helped develop the root cause analysis process into what it is.  Root cause analysis can help identify and clarify what is going on in a context.

Findings

Introduction

Root cause analysis is the process of finding the underlying cause of a defect, known as the root cause.  Ideally, once the root cause is found, it is simpler to devise and apply a fix, and doing so eliminates the root cause along with the other defects and factors associated with the original defect.  This analysis reduces wasted effort in fixing problems, errors, and defects.  It may ultimately lead to a more efficient system and mitigate future problems.

Root cause analysis (RCA) has its origins in Japan, with the creation of the fishbone diagram by Kaoru Ishikawa.  These diagrams help show what the causes of a problem are and what their associated factors are (refer to Software, Tools, Apps, Used and Evaluated).  Root cause analysis has since gained popularity as a quality management tool for projects and problems.  More recent case studies have focused on disasters.  For example, NASA studied the 1986 explosion of the Space Shuttle Challenger with RCA.  Questions such as “Why were there no ejection safety procedures?” and “How was the ship able to combust so suddenly?” came out of a large debate over what caused the Challenger to explode (Otegui 184).  If such questions had been addressed efficiently, by finding the root cause of the problem, it might have been possible to prevent the explosion.  RCA is now formally adopted, almost as its own scientific method, to understand the problems of a situation and to fix them.  It is applied to many areas beyond software, such as engineering, business, and science.

 

1.2 Process of Root Cause Analysis

The first step in RCA is to identify and define the defect or error in its context.  RCA is used after the fact; the assumption is that a defect must be known in order to truly understand its entire context.  Identifying the error helps associate the context with the outcome.  The next step is to collect data associated with the defect (Otegui 186).  Collecting this data clarifies which other problems or defects are associated with the original defect and helps build an understanding of its entire context.  The next step is to understand, and possibly draw out as a tree, the events, factors, and defects associated with the original defect.  This may create a clear view of which thing causes everything else to happen (refer to Software, Tools, Apps, Used and Evaluated).

The next step is to question which parts are the true causes of the defect.  A way to do this is to repeatedly ask “why” for every event or factor (Shore and Warden).  For example, if one were asked why a fuse blew out, one could say that the fuse needed to be changed.  That event can then be questioned further: it is true that the fuse needed to be changed, but why did it need to be changed?  A possible answer is that the circuitry was faulty, and so on.  Continuously asking why will eventually lead to an underlying cause, the root cause, of the situation.  To check this candidate root cause, it is necessary to trace the cause-effect relationship back and make sure it explains the original defect.  Continuing with the fuse example: the circuitry was faulty, so the fuse needed to be changed, and the fuse blew because of it (Otegui 187).  Checking that the logic is sound ensures the validity of the conclusion.  After confirming the root cause, a solution should be created to address it; the ultimate result is that the defect and its associated defects no longer occur.
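The fuse example can be written out as a recorded five-whys chain, as sketched below; only the first two answers come from the discussion above, and the later ones are invented purely to show the shape of the chain ending in a candidate root cause.

```python
# A five-whys chain recorded as ordered (question, answer) pairs; the last
# answer is the candidate root cause to be verified against the original defect.
five_whys = [
    ("Why did the fuse blow?", "The fuse needed to be changed."),
    ("Why did it need to be changed?", "The circuitry feeding it was faulty."),
    ("Why was the circuitry faulty?", "A connector had corroded."),
    ("Why had the connector corroded?", "The enclosure was not sealed against moisture."),
    ("Why was it not sealed?", "The maintenance procedure omitted a seal check."),
]

for question, answer in five_whys:
    print(f"{question} -> {answer}")

print("Candidate root cause:", five_whys[-1][1])
```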

Use in Software Engineering

The software development process is not without errors.  Even with the best-crafted requirements, test cases, and so on, errors will still happen.  As Murphy’s Law states, “if something can go wrong, it will” (Shore and Warden).  It is unrealistic to expect the entire software development cycle to have no errors.  When these errors occur, it is necessary to have a process that addresses and explains them.  Root cause analysis can be used in software engineering to fix errors at any part of the software development cycle, whether problems arise during the design stage or the production stage.  Root cause analysis, in general, is the investigation of an unwanted event and of how to fix it (Jenkins).  The unwanted event is built from a base of contributing factors and evidence, with the root cause at the top, like a pyramid.

If RCA is used in software engineering, it can serve as a quality management technique for overseeing errors in the software process.  Ideally, RCA prevents a repeated defect from happening again.  It can also, most likely, fix multiple errors associated with the root cause at one time.  This may reduce the time spent fixing each error one at a time, which matters because people want efficient software at low cost and in little time (Shore and Warden).  Another key observation is that many problems in a software system are either repeated errors or errors of a similar nature, which can point to a common, or root, cause.  For example, when running a program, a null pointer exception is thrown.  One can fix that line or few lines of code, and the error goes away.  But another error appears, this time also a null pointer exception, in another location.  This shows a pattern in the problem that may be resolved by conducting root cause analysis.  If software developers were to apply RCA to this example, they could study the associated data, diagram the results, and then begin questioning.  The first question would be: why is there a null pointer exception?  The answer: there was no test for this scenario.  Again, why was there no test for this scenario, and so on.  By doing RCA, a better and more informative answer can be found, rather than just saying “there was no testing” (Shore and Warden).
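A small sketch of how a team might spot candidates for RCA follows: group defect reports by error type and look for repeats.  The defect list is hypothetical, but a cluster of similar failures is exactly the pattern that suggests one shared root cause rather than several unrelated bugs.

```python
from collections import Counter

# Hypothetical defect reports: (error type, where it surfaced).
defects = [
    ("NullPointerException", "ReportBuilder.render"),
    ("NullPointerException", "InvoiceExporter.write"),
    ("Timeout", "PaymentGateway.connect"),
    ("NullPointerException", "ReportBuilder.render"),
]

# Recurring error types are natural candidates for root cause analysis:
# three null pointer failures in different places hint at one shared cause,
# such as a missing validation step, rather than three unrelated bugs.
counts = Counter(error_type for error_type, _ in defects)
for error_type, count in counts.most_common():
    print(f"{error_type}: {count} occurrence(s)")
```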

One approach to software development, agile development, is especially well suited to the root cause analysis scheme.  Agile development is the idea of completing the software development cycle from start to finish multiple times.  Although the software system is not perfect at first, it can be perfected by creating a series of versions of the system (Shore and Warden).  These multiple iterations aid the process of RCA, which is most useful for analyzing common errors.  Such common errors are usually very similar in nature and have nearly identical results.  The idea is that during the agile development cycle, one can see problems within a relatively short time frame and under different contexts and constraints (Jenkins).  This improves the data available about a problem and shows that, if problems repeat, there is likely a shared root cause in the software system.  RCA is ideal for this type of situation.

Consequences from Using RCA

Root cause analysis provides a better understanding of failures.  It efficiently fixes defects and also prevents them from recurring.  Besides fixing the surface defects and preventing the same or similar defects, RCA in software engineering encourages development teams to cooperate and to spend less time assigning blame for the part that went bad (Shore and Warden).  In turn, this enables retrospectives of the situation and may improve every team member’s knowledge of the software system’s context.  Also, and most importantly, using root cause analysis can improve the overall software quality of the system.  RCA helps create an efficient system by removing whole families of errors in one blow (figuratively speaking; RCA can be done multiple times).  This comes at relatively low cost and time, because the time spent doing RCA reduces the time and cost of repeating expensive processes such as testing (Jenkins).  RCA helps promote efficient, time- and cost-effective products with practical, good results.

Yet root cause analysis is not always “good”.  Although it has been presented here in a positive light, it is possible to over-apply it.  As said earlier, “what can go wrong, will go wrong”.  Root cause analysis increases overhead in the system: one has to know what, when, where, and how something happened.  This may require a lot of time and a deeper understanding of the system, which may not be possible in larger operations (Shore and Warden).  Also, not every case calls for RCA.  Some problems do not need a thorough, formal understanding; some things are simply not complex enough to justify the effort RCA demands.  RCA can also prove inefficient when trying to fix an uncommon problem.  Common or repeated problems probably share very similar root causes, whereas an uncommon problem may have been a one-time event, so RCA is not required in that case (“Root Cause Analysis”).  Another issue with RCA is the assumption that the root cause can be fixed, which is not necessarily true.  It could be that the software developers do not have the skill to fix the root cause and cannot get the resources to fix it (Shore and Warden).  Even more important, the root cause could be completely outside the team’s scope to fix, meaning it cannot be fixed because it cannot be controlled.  For example, if a rainy day causes a car to slide, one does not blame the tires, one blames the weather; but one cannot control the weather.  In software terms, an example of limited scope may be a company that requires the use of a certain standard or hardware that cannot be changed no matter what.  The software development team will then be unable to fix the root cause.

Problems / Questions / Further Work

The following report is based on existing research on root cause analysis.  Root cause analysis can be applied in multiple contexts, such as engineering, software development, etc.  But in terms of RCA being used within every software development cycle, there is not much documentation of root cause analysis as a tool specifically for software engineering.  Some agile development teams may already use root cause analysis thinking, but they do not necessarily implement it formally.  In the future, more research could be done on using root cause analysis techniques formally as a standard in software engineering.  More research could also be done on the relationship between software quality and root cause analysis.

Change Control and Updates

Version 1.0 (original) Julie Leong, 04/09/2015

Version 2.0 (updated) Julie Leong, 05/24/2018

References

Jenkins, Nick. “Root Cause Analysis.” A Software Testing Primer. Software QA Testing Resource Center, 2008. Web. 10 Apr. 2015. <http://sqa.fyicenter.com/Introduction_to_Software_Testing/35_Root_Cause_Analysis.html>.

Otegui, Jose Luis. Failure Analysis : Fundamentals and Applications in Mechanical Components. Cham: Springer International Publishing, 2014. Ebook Library. Web. 06 Apr. 2015.

“Root Cause Analysis.” Cause Mapping Basics. ThinkReliability, 1 Jan. 2011. Web. 10 Apr. 2015. <http://www.thinkreliability.com/Root-Cause-Analysis-CM-Basics.aspx>.

Shore, James, and Shane Warden. “The Art of Agile.” James Shore: The Art of Agile Development: Root-Cause Analysis. O’Reilly Media, Inc., 1 Jan. 2008. Web. 10 Apr. 2015. <http://www.jamesshore.com/Agile-Book/root_cause_analysis.html>.