PandaByte

A strange place where pandas and technology meet.

Authenticating Who You Are — May 24, 2018

Authenticating Who You Are

Note: Article has been updated since original submission September 2014 for a college assignment. 

When boarding an airplane, people are asked to display some form of identification.  Only certain documents are accepted.  From passports to drivers’ licenses, each establishes that the person is who they say they are.  This is a rudimentary form of validating identity.  A person who is allowed to board the plane has had their identity checked: they were able to board because the security guard found that their piece of information (their document) was genuine.  People agree to these demands in order to board the plane, and most accept that it is done to keep unwanted persons off the plane.  This behavior, and the reasoning behind proving identity, is the basis of user authentication.  It is a widely used practice and an integral part of people’s lives.  The following research focuses mostly on user authentication in a digital context.  User authentication methods have developed in order to provide a secure environment for accessing information and using applications, and they will play an increasingly large role as more information is stored digitally.

User authentication is substantiating a claim of identity.  The user has to provide some means of showing that their identity matches what the receiver of the information knows.  Generally, there are two parts to establishing identity: identification and verification (Rountree 14).  Identification associates the user with an identity that is hopefully theirs.  Verification is the accepted acknowledgment that the user matches the identity.  Together, these steps form the basic method of authentication.  There are a few basic ways to establish identification: the user either knows, possesses, is, or does certain things to prove who they are (Stallings 452).  Common examples are passwords, fingerprints, and so on.  Determining which combination of methods to use is challenging.  The goal is to preserve security and to prevent fraud.

Authentication is needed because technology users want to believe that they can trust someone or something with control of their information.  Perhaps they wish to access a cloud system, or they want to see their banking information online.  Generally, people want to know that no one else has access to the same application, especially if it is private.  Since digital applications cannot verify the user through physical meetings, as at an airport, users must provide an identity check.  One type of authentication involves a single user: only the one sending the information needs to substantiate who they are (Stallings 454).  E-mail is a common example.  The more common practice is known as mutual authentication, in which two or more parties each attempt to prove a valid identity to the other (Rountree 10).  It requires that both communicate by sending some proof that the other is who they say they are.  People want to know that whatever they are sending their credentials to is the party they intend to send them to, and vice versa.  Protocols are set in place to ensure a safe exchange.  To do so, the parties exchange some known thing, such as a key or sequence of numbers that must match what the receiver of the information knows.  For example, a Key Distribution Center (KDC) creates a session key that the user can use with the other party (network, server, etc.) (Stallings 455).  It allows the network or server to recognize that the user has access.  This is a very basic picture of what happens in authentication protocols between two parties.  IEEE 802.1X outlines authentication protocols required by port-based networks.  It explicitly states that “possession of master keys is proof of mutual authentication in key agreement protocols” (“Port Based Network” 29).  The standard demonstrates mutual authentication practices.  Another well-known use of mutual authentication is the Kerberos system.  This authentication service from the Massachusetts Institute of Technology (MIT) negotiates the authentication process between users and services (Rountree 16) and depends on symmetric encryption.  Its efficiency comes from the fact that users may access whatever servers Kerberos is associated with, creating a single sign-on (SSO) experience (Rountree 18).  There are common and necessary standards for mutual authentication; they are how authentication services are regulated.
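
To make the KDC idea concrete, here is a minimal sketch in Python of a toy, in-memory key distribution center.  The names (ToyKDC, alice, mail-server) are invented for illustration, and the XOR-based wrap() stands in for the real symmetric encryption (such as AES) that a system like Kerberos would use.

    import hashlib
    import secrets

    def wrap(long_term_key: bytes, session_key: bytes) -> bytes:
        # Stand-in for symmetric encryption: XOR with a hash-derived key stream.
        # (Applying wrap() twice with the same long-term key recovers the input.)
        stream = hashlib.sha256(long_term_key).digest()
        return bytes(a ^ b for a, b in zip(session_key, stream))

    class ToyKDC:
        def __init__(self):
            self.registered = {}              # long-term secrets shared out-of-band

        def register(self, name: str) -> bytes:
            self.registered[name] = secrets.token_bytes(32)
            return self.registered[name]

        def issue_session_key(self, client: str, server: str):
            session_key = secrets.token_bytes(32)
            # Each party can unwrap the session key only with its own secret.
            return (wrap(self.registered[client], session_key),
                    wrap(self.registered[server], session_key))

    kdc = ToyKDC()
    alice_key = kdc.register("alice")
    server_key = kdc.register("mail-server")
    for_alice, for_server = kdc.issue_session_key("alice", "mail-server")
    # Both sides recover the same session key, vouched for by the KDC.
    assert wrap(alice_key, for_alice) == wrap(server_key, for_server)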

The types of verification that a user may use for authentication vary.  Using multiple methods together is known as multifactor authentication: different types and amounts of authentication can be used to establish an identity (Rountree 23).  For example, a person may be asked to give a password along with a fingerprint to access a computer.  That process took two types of information to validate the user’s identity: something the user knows and something the user is.  It could also be as simple and common as just typing a password.  It is generally accepted that more factors of authentication create higher security (Rountree 24), since they help prevent false users from passing one or more tests.  By having multiple factors, an opponent has a harder time accessing another user’s information.
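
As a rough illustration of combining two factors, the sketch below checks something the user knows (a password, stored as a salted hash) and something the user has (a time-based one-time code from a device).  The helper names and parameters are illustrative rather than a production design.

    import hashlib, hmac, secrets, struct, time

    def hash_password(password: str, salt: bytes) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    def totp_code(device_secret: bytes, t=None) -> str:
        counter = int((t or time.time()) // 30)           # 30-second time step
        mac = hmac.new(device_secret, struct.pack(">Q", counter), "sha1").digest()
        offset = mac[-1] & 0x0F                           # dynamic truncation
        value = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return f"{value % 1_000_000:06d}"

    def authenticate(password, code, salt, stored_hash, device_secret) -> bool:
        knows = hmac.compare_digest(hash_password(password, salt), stored_hash)
        has = hmac.compare_digest(code, totp_code(device_secret))
        return knows and has                              # both factors must pass

    # Enrollment (values stored server-side), followed by a login attempt.
    salt, device_secret = secrets.token_bytes(16), secrets.token_bytes(20)
    stored = hash_password("correct horse battery staple", salt)
    print(authenticate("correct horse battery staple", totp_code(device_secret),
                       salt, stored, device_secret))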

Some methods of user authentication are preferred over others.  Each situation is different, hence authentication methods vary.  For example, it is standard to have a username/password when accessing an email account.  It would not be plausible to use biometric authentication for an email account, because the necessary equipment has limited availability (Stallings 452).  Yet, biometric authentication may be useful for accessing a phone.  Using the most suitable kind of user authentication, or several different kinds, creates a better safety net against unwanted people.  It has made people more conscious of what types of access they are protecting.  From limiting Wi-Fi access to bank accounts, many of the things people use require a specific way of authenticating oneself.  Deciding which type of authentication to use will vary depending on how important the access is.

Non-physical forms of authentication came to be because of the lack of identification through physical attributes.  Digital environments promote a sense of anonymity, but that anonymity is difficult to protect (Shinder).  Generally, when one wants to prove who they are, they appear before the other party, and their physical features can substantiate that they are who they say they are.  On the internet, no party can identify who one is based on physical features.  Authenticating identity is therefore crucial.  It provides a secure digital environment for people to store information or to access something (Rountree 7).  It is then necessary to create a claim of identity through non-physical attributes.  Authentication allows users to be exclusive with their information or applications.  People either store information for later use or want to access a certain application, and only the rightful users know the code required to access the item they want.

Everyone has valuable information to their name.  Usually, if the information is truly valuable, say private pictures, people want to put a lock on it.  They store it in some chest and use it later.  In order to access the things stored, the user needs a key.  When talking about digital information or applications, the same concept applies.  The internet demonstrates how valuable information is.  A major use of the internet is to share information (Rountree 7), but this comes with a caveat: although there is a bounty of information and applications, there is minimal credibility and trust along the communication paths.  A simple search for a name could pull up things that one does not want others to know.  For example, if one’s information is associated with a social media site, someone could easily find an address and a phone number.  It is also important to note that the internet has proved to have many applications, such as online banking.  Many use the internet as their “real selves”: they act as if they were performing the same actions in person (Shinder).  Hiding all the intricate details of a person’s life is difficult when people are always using that information online.  People go through the hardships of user authentication in order to limit access to such information.

Yet even with different types of authentication processes, authentication has its flaws.  It is possible to find a hole that enables unwanted users to gain access.  They find ways to fabricate user authentication without having the user.  For example, identity theft is a persistent worry.  Identity theft is when someone or something takes a person’s “identity” (Stallings 453).  Usually, this is done unnoticed: someone has quietly obtained a person’s information by finding out how that person authenticates themselves.  A simple example is using somebody else’s username/password to access their information.  The username/password grants access to a credit card account, and it seems mysterious as to how the bad guys obtained it.  One method of obtaining the username/password or the session key for access is a brute-force attack, which amounts to guessing what the key could be (Stallings 33).  This could range from pulling passwords from a large deposit of possible passwords to generating passwords based on a probable algorithm.  Although this method seems inefficient, it works from time to time.  The enemy would probably have to check thousands upon thousands of combinations in order to get something right, yet if they were to try hard enough, they could probably get the information.
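
A quick back-of-the-envelope calculation shows why guessing is slow but not impossible.  The guess rate below is an assumed figure for illustration; real attack speeds depend heavily on how the password or key is stored.

    def years_to_exhaust(alphabet_size: int, length: int, guesses_per_second: float) -> float:
        keyspace = alphabet_size ** length            # every possible combination
        seconds = keyspace / guesses_per_second
        return seconds / (60 * 60 * 24 * 365)

    # 8 lowercase letters vs. 12 mixed characters, at an assumed 10 billion guesses/second.
    print(f"8 lowercase letters : {years_to_exhaust(26, 8, 1e10):.6f} years")
    print(f"12 mixed characters : {years_to_exhaust(94, 12, 1e10):,.0f} years")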

Not all problems of user authentication are associated with the authenticating process.  An unwanted user may not want to access someone’s information as them at all; they may prefer to alter something already in existence.  Replay attacks are a common attempt on a piece of information’s integrity (Stallings 453).  The enemy intercepts the information the sender is in the process of sending, manipulates it (copying or changing it), and slips it back in unnoticed while pretending to be the original sender.  It is difficult to monitor this case because once the username/password has been authenticated, the information is vulnerable.  There are very few things that can be done at that point.

Another problem associated with user authentication is time.  Time does not necessarily seem to be an issue, but it has a major bearing on how “good” information is.  Time is usually used as a precaution to make information more secure, in the form of a timestamp (Stallings 453).  The information that a person sends or stores may be associated with a time to further prevent people from accessing it.  The receiver of the information knows that the piece is associated with a certain time frame, and if the time frame does not match what they are expecting, the information has probably been tampered with.  This seems secure, yet there are ways to bypass the feature.  For starters, the enemy may take advantage of the time it takes for the information to sync with the local time (Stallings 454).  The opponent could access the information if they knew that the clocks of the sender and the receiver are off.  Another possible fault lies within the processors.  The machine doing the processing may have some glitch that prevents it from properly syncing with the opposing clock at the correct time (Stallings 454).  The receiver or sender has minimal control in this case.
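
A minimal sketch of the timestamp check described above, assuming the receiver tolerates a small clock-skew window; the 120-second figure is an arbitrary choice for illustration.

    import time

    ALLOWED_SKEW_SECONDS = 120                # arbitrary tolerance for clock drift

    def timestamp_is_fresh(message_timestamp: float, now=None) -> bool:
        now = time.time() if now is None else now
        # Reject anything claiming to be from too far in the past or the future.
        return abs(now - message_timestamp) <= ALLOWED_SKEW_SECONDS

    print(timestamp_is_fresh(time.time() - 30))    # True: within the tolerance window
    print(timestamp_is_fresh(time.time() - 600))   # False: likely stale or replayed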

To avoid a timestamp problem, an alternative solution is a challenge/response system.  This mode of authenticating requires that both parties are “present” to semi-communicate with each other (Rountree 15).  The two parties have agreed beforehand to respond in a certain way when the sender starts the authentication process: one party sends a challenge and the other sends a response.  It is different from a plain username/password situation because the challenge/response can vary a great deal.  The main issue is overhead: rather than having no worries at the time of authentication, users become dependent on what is going to happen next (Stallings 454).  Generally, there will be cases where the user cannot partake in the challenge/response, or the opponent may have already figured out what the response will be.  Users may find that a challenge/response system is inefficient for basic usage.  Combining timestamps and challenge/response authenticators is ideal, but it may create more overhead (Stallings 455).  Since the authentication process then requires both a timestamp and a challenge/response, both parties must hold more information to work together.  This may be good for enemies: if they were to figure out at least one piece of the authenticator, for example the timestamp or the challenge/response, it could potentially be easier to figure out the missing part.
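
The following sketch shows one common flavor of challenge/response, assuming the two parties already share a secret: the verifier sends a random challenge and the claimant answers with an HMAC over it, proving knowledge of the secret without revealing it.

    import hashlib
    import hmac
    import secrets

    shared_secret = secrets.token_bytes(32)   # agreed upon by both parties beforehand

    # Verifier sends a fresh random challenge (a nonce) ...
    challenge = secrets.token_bytes(16)

    # ... and the claimant proves knowledge of the secret without revealing it.
    response = hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()

    # Verifier recomputes the expected answer and compares in constant time.
    expected = hmac.new(shared_secret, challenge, hashlib.sha256).hexdigest()
    print(hmac.compare_digest(response, expected))   # True only for the legitimate party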

Authentication processes are not without flaws, but they do a great deal of good.  Many people use some type of technology that requires an authentication process.  Naturally, there are times when people have access to certain types of information and other times when they do not.  Perhaps they wish to join a Wi-Fi network or a printer sharing group.  Whatever the case may be, the group is exclusive, and becoming authenticated is required in order to access it.  Authentication protocols are in place to protect these group admissions.  IEEE 802.1X mandates a basic safety protocol that all port-based networks must abide by (“Port Based Network” 20).  Port-based networks are the entities that authorize a user’s access to the network.  The servers follow specific protocols in order to be deemed a secure communication line, which prevents illegal transmissions, data loss, or data intrusion (“Port Based Network” 19).  There are multiple mandates, protocols, and guidelines highlighted in IEEE 802.1X.  For example, the Extensible Authentication Protocol (EAP) requires that networks support authentication servers (“Port Based Network” 65).  Authentication requirements work together with authorization protocols in order to create a secure line.  Usually, these requirements are met without the user knowing.  Although these standards keep a low profile, they are applied on a regular basis, every day.

A relatively new method of user authentication is federated identity management.  This concept outlines the importance of shared user authentication protocols (Stallings 478).  A single set of authentication standards would apply to multiple companies, organizations, and so on.  It reduces inefficiencies such as repetition and wasted time (Stallings 479).  Common authentication protocols essentially allow the same credentials used for one application to apply to many.  Federated identity schemes separate authentication from authorization, so individual providers and applications do not deal directly with a user’s credentials.  The system is checked regularly and has more requirements for accessing the network, but it creates a more efficient use of shared networks.  It is becoming more and more widely available, notably through Google, Yahoo, Facebook, etc. (Rountree 38).  Yet, there are problems with this type of system.  One is the lack of support: few technologies and applications enable the features necessary to use federated authentication (Rountree 35).  Another common issue is that federated technologies are still expensive compared to longstanding authentication systems like Kerberos (Rountree 35).  It will still be a while before every application can use a federated identity scheme.  Yet, with issues of identity theft and a lack of organization, a centralized approach to identity management may prove useful (Shinder).  A common authentication scheme will open many more applications to users; all it will take is one credential.
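
The sketch below illustrates the federated pattern in miniature: an identity provider signs an assertion once, and any application that trusts the provider can accept it.  Real deployments rely on standards such as SAML or OpenID Connect and on asymmetric signatures; the shared HMAC key and names here are a simplification for illustration only.

    import base64
    import hashlib
    import hmac
    import json
    import secrets

    idp_key = secrets.token_bytes(32)         # held by the identity provider

    def issue_assertion(username: str) -> str:
        payload = base64.urlsafe_b64encode(json.dumps({"sub": username}).encode())
        signature = hmac.new(idp_key, payload, hashlib.sha256).hexdigest()
        return payload.decode() + "." + signature

    def relying_app_accepts(token: str) -> bool:
        payload, signature = token.rsplit(".", 1)
        expected = hmac.new(idp_key, payload.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(signature, expected)

    token = issue_assertion("panda")
    print(relying_app_accepts(token))         # True at any application trusting the provider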

People will become more dependent on technology as more applications and uses are created.  Providing a credential should be as harmless as making a bet with no money at stake.  Digital information is constantly being accessed and stored, and in order to preserve its integrity, user authentication is a must.  Most protocols for validating an identity depend on multiple parties that share and exchange information.  As seen with federated identities, one type of credential can be used to access multiple applications.  Even now, there are many authentication protocols in place, such as IEEE 802.1X, to provide a secure environment.  Regulation of authentication is now the key to future technological advances.  Future research can be done to create even stronger authentication services.  A more efficient, secure, and stable authentication process will certainly be seen soon.

 

Works Cited

“Port Based Network Access Control.” IEEE 802.1X™-2010. IEEE Standards Association, 2010. 19-143. Print.

Rountree, Derrick. Federated Identity Primer. Burlington: Elsevier Science, 2012. Ebook Library. Web. 4 Sep. 2014.

Shinder, Deb. “Cybercrime and the online problem of identity verification”. TechRepublic. N.p., 29 Feb. 2012. Web. 04 Sept. 2014.

Stallings, William. “User Authentication.” Cryptography and Network Security: Principles and Practice. 6th ed. Pearson Education, 2014. 33, 451-490. Print.

Password Security: A meaningful policy —

Password Security: A meaningful policy

Note: Article has been updated since original submission March 12, 2015 for a college assignment. 

Introduction

Definition of Problem

A password is a means of authenticating a user to an application.  Passwords are common and used daily by many applications.  They can be a weak point for users, since hackers can get to a user’s information if they have the password.  It is important to create a secure password that only the user knows, to prevent unwanted access to the user’s applications.  The following document recommends and highlights a carefully planned way to create a secure password so that unwanted users cannot gain access, as well as how to change or destroy old passwords.

Problem scope

The following document is meant for individual users who need to create multiple passwords for various applications (commercial, private, etc.).  This procedure will generate unique online (electronic) alphanumeric passwords, including special characters, that enable users to access their applications.  The procedure also covers the changing or destruction of passwords, following the same guidelines used to create them.  The passwords created by this procedure are limited to online passwords, not physical tokens, keys, passphrases, etc.  Though further work could extend the policy to non-online passwords, securing online passwords is currently the most scalable approach and the most available to users.


Policy

Single password

Once someone has a password, they have access to important information.  If the information that an application contains were not important, then there would be no need for a password to protect it.  A password is a way of restricting access for unwanted users.  Hence, most users do not want other people to know their information.  As stated earlier, hackers can access a user’s applications by the use of their password.  The most common way of getting someone’s password is “brute force” attacking, where a hacker will guess a password until it is correct.  They are able to do this by using generators/automated programs that randomize different inputs (alphanumeric combinations, dictionary inputs, etc.).  It becomes crucial that a user takes care in creating their password, or else they risk a hacker figuring out their password relatively easily.  Users should make it difficult enough that no one else would be able to retrieve their password.  Common passwords such as personal information, significant things about a user’s life, or any common patterns such as ‘1234’ or ‘abcd’ must ultimately be avoided, since they are predictable.  The following is a more extensive list of what must not be considered when creating a password: passwords of fewer than eight characters, words from a dictionary, slang, common phrases, patterns, work information, and personal information.  Anything that is predictable or readily available information is a poor choice when choosing a password, because someone else, possibly a hacker, may well have created the same password.  A person’s password can be relatively common in comparison to the millions of users who have passwords to some application in the world.  It is necessary to create a unique password that is highly unlikely to be replicated anywhere else.  Passwords should avoid everything on the list above, and each character of the password should be chosen randomly.  This will include a variety of upper and lower case letters, numbers, and special characters (see Appendix A for more information).  The length of the password will be longer than eight characters; the longer the password, the more time and effort it would take for someone to hack it.  Passwords may also be generated by password generators instead of the user creating the password, but they must follow the guidelines stated earlier and then some (see Appendix B).  Every password that a user has should be created in one of these manners.  Although a password with every character chosen randomly seems hard to remember, it will reduce the likelihood of being hacked and of creating a common password.
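
A minimal generator following the policy above might look like the sketch below, which picks every character at random from the full mix of classes using Python’s secrets module; the default length of 16 is an arbitrary choice beyond the 12-character minimum in Appendix A.

    import secrets
    import string

    SPECIALS = "!$%^&*()_+|~-=\\`{}[]:\";'<>?,/"
    ALPHABET = string.ascii_letters + string.digits + SPECIALS

    def generate_password(length: int = 16) -> str:
        # Policy floor from Appendix A: at least 12 characters.
        if length < 12:
            raise ValueError("policy requires at least 12 characters")
        while True:
            candidate = "".join(secrets.choice(ALPHABET) for _ in range(length))
            # Re-draw until every character class is represented.
            if (any(c.islower() for c in candidate)
                    and any(c.isupper() for c in candidate)
                    and any(c.isdigit() for c in candidate)
                    and any(c in SPECIALS for c in candidate)):
                return candidate

    print(generate_password())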

Multiple passwords

Ultimately, a person has many applications they need a password for.  People have to protect different types of applications and information, hence more passwords are needed.  Users will follow the policy for creating a single password in order to create multiple passwords.  Each password created must not replicate any other password the user currently has or had.  Otherwise, a hacker or unwanted user can figure out that a user has one password for multiple applications.  They would be able to access much more information about the person without having to get multiple passwords.  Having multiple distinct passwords can be a strong point for protecting a person’s information if made correctly.

Changing/Destroying passwords

Passwords are never truly destroyed.  Upon changing a password, the old password may look destroyed to the user, but the application may store old passwords to a certain extent.  Hence, it is important to change passwords often.  Although this policy outlines creating a unique password, that password still goes “bad”.  With continued use of the same password within an application, the password gradually becomes “weaker” over time.  It is necessary to assume that any possible unwanted user is becoming “stronger”: that they have had time to find, brute force, and obtain the password a person has for an application.  Another possibility is that the once unique password has become a common one because someone else has since created it.  Therefore passwords are to be changed regularly and carry a “time stamp”.  A user should make their own sort of time stamp for each password by recording how long the password has been in use and when it was created.  The user should then use a timer (either by regularly checking the time stamp or by creating a timer of sorts) to know when the password has gone bad.  A timer and time stamp are necessary for every password a user has, regardless of the significance of the information the password protects.  Another reason to change a password may be that the application has been accessed with the user’s password without their knowledge.  Usually the application will notify the person, but a user should always check the status of their password with the application and be aware of when it is being used.  If a password has been stolen or possibly compromised, users must immediately change it in accordance with the password creation policies.  Not all applications require people to change their password regularly or at all, so users must stay aware of the status of their passwords.  Once a password needs to be updated, it must be changed in accordance with the application’s password standards, or better yet at an interval that depends on the security sensitivity of the information in the application.  These time lengths vary, but one recommendation is to rate the sensitivity of the information on a scale where 1 is least important and 5 is of the utmost importance (secured beyond a password alone), then take the number of months in a year and divide by the rating.  This gives a possible estimate of when to change passwords.  Upon the need to change a password associated with an application, users must first change the old password to a new password following the single password policy, and secondly destroy any location where the old password may have been stored.  This prevents hackers from adding old passwords to their bank of possible passwords, and keeps an old password from being valuable to some other application or user.
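
The rule of thumb above can be written out as a tiny helper, shown below; the 1-to-5 sensitivity scale and the division of twelve months by the rating come straight from the recommendation, and the result is only a rough estimate.

    def months_between_changes(sensitivity_rating: int) -> float:
        # Rating: 1 = least sensitive, 5 = most sensitive information.
        if not 1 <= sensitivity_rating <= 5:
            raise ValueError("rating must be between 1 and 5")
        return 12 / sensitivity_rating

    for rating in range(1, 6):
        print(f"sensitivity {rating}: change roughly every "
              f"{months_between_changes(rating):.1f} months")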


Procedures for Passwords

Create a password

  1. User identifies need for password.
  2. User creates a password according to the requirements of the application and the password requirements listed in Appendix A.
  3. User tries using the password in the application and checks that it works; otherwise the application will reject the password and most likely ask for the password again or ask to change the password (in this case go to Create a password).
  4. User creates time stamp and timer on each created password.
  5. Users will follow Changing a password(s) when the timer of a password expires or when the password becomes compromised.

Alternate possibility for creating a password

  1. User identifies need for password.
  2. Instead of creating their own password, the user uses a password generator (which randomly chooses each character of the password) to create the password (requirements for a password generator are listed in Appendix B).
  3. User tries using the password in the application and checks that it works; otherwise the application will reject the password and most likely ask for the password again or ask to change the password (in this case go to Create a password).
  4. User creates time stamp and timer on created password.
  5. Users will follow Changing a password(s) when the timer of the password expires or when the password becomes compromised.

Create multiple passwords

  1. User identifies need for creating a password(s) in addition to a user’s already existing set of password(s).
  2. User uses the Create a password procedure as needed to create the password(s).
  3. User checks that the new password(s) are not the same as old password(s) or existing password(s) for other applications, by self-checking or by using an automated program.
  4. User tries using the password in the application and checks that it works; otherwise the application will reject the password and most likely ask for the password again or ask to change the password (in this case go to Create a password).

Changing a password(s)

  1. User has kept track of current passwords by maintaining time stamps/timers on passwords and user has checked validity of password with application (checking if application has been compromised with use of password).
  2. User identifies need to change password (the timer has expired or the password has been compromised with the application).
  3. User requests to change old password to a new password within an application.
  4. User creates a new password in accordance with Create a password.
  5. User tries using new password in application to see that it works.
  6. User creates time stamp and timer on created password.
  7. Users will follow Changing a password(s) when the timer of the password expires or when the password becomes compromised.

Validation of Password Policy

Managing password(s)

People have many needs for passwords.  The majority of applications require some sort of password, such as email, social networking, bank accounts (through PINs, which are effectively passwords), company servers/accounts, etc.  Hence, users must track what passwords they have, what passwords they have to create or change, and what passwords may be a point of risk and liability.  This validates the password creation and changing process.  In general, users have to be able to monitor the behavior of their passwords.  Users should know which passwords have access to applications, when these passwords are used, and whether these passwords are used on devices (computer, phone, etc.) authorized by the user.  In addition, if users are following the procedures to create and protect passwords, they must be able to repeat the same policies and also be able to remember said passwords.  Passwords may be remembered by memorization, but more realistically, a user will need to “write them down”.  Although this seems to contradict the password protection policy, which discourages written-down passwords, it is necessary to have some way to access a user’s passwords if there are more than can be memorized.  Users may store passwords in vaults: not physical vaults, but a password manager that stores a user’s many passwords online (see Appendix C for recommendations).

Password Protection

In order to protect a password(s), users must avoid the spreading of password information.  This tends to be a common point of failure for many.  Users may be tempted to write down their passwords, but it is a huge risk, especially if the password is in plain sight (e.g. a post-it note next to the computer screen) or in an easily accessible place.  Another liability that users neglect is sharing a password.  A password is no longer a secure password for a user if it is known by others who do not really need it.  Sharing passwords across email, texts, or even the “Remember Password” function of many applications is not protecting a password.  If a password is believed to be compromised, users must either change the password or delete/change their application information, and then proceed to create a new password.  In order to make sure that a password remains good for as long as it can, users must actively protect it, which will ensure that the password created was good.

Checking validity of password

This list is meant for those who have followed the password creation/changing policy at least once.  Passwords are ultimately validated by the user, because only the user will be affected by the use of the password.  Users can use the following methods, in any order, to validate the policy of creating and changing password(s).

  1. Avoid sharing passwords.
  2. Use of a password management system (see Appendix C for a recommended checklist of what to look for in a management system), which will cover many features in this list.
  3. Visit applications regularly and check the validity of passwords (checking if they work, if they have been compromised, etc.), even if the application is not needed at the time.
  4. Checking time stamps/timers on passwords and then update accordingly.
  5. Change passwords regularly and when needed.
  6. Use a password strength checker upon creation of password (usually included in password generators and managers).
  7. Set up applications to send updates if password security is compromised.
  8. Store passwords in an authenticated encrypted vault.
  9. Monitor password use behavior by use of password management system or some independent application.

Appendix

Appendix A

List of Recommendations for Passwords (see Reference 2)

  • Contain at least 12 alphanumeric characters.
  • Contain both upper and lower case letters.
  • Contain at least one number (for example, 0-9).
  • Contain at least one special character (for example, !$%^&*()_+|~-=\`{}[]:";'<>?,/).
  • Creation of a timestamp and timer for each password
  • Avoids words from a dictionary, slang, common phrases, patterns, work information, personal information, and anything else readily discoverable or predictable
  • Each character is randomly generated

Appendix B

Recommended requirements for password generator (see Reference 7 and 8)

  • Allows user to have the option to choose when creating password (such as password length, use of special chars, upper and lower case letters, etc.)
  • Tells user the strength of the password generated
  • Has authentication abilities for using application
  • Is able to store created passwords in an encrypted vault
  • Can synchronize passwords across multiple applications and devices

Appendix C

Recommended features of a password management system (see Reference 7)

  • Continuously being updated/fixed
  • Easy/Simple to use
  • Encrypts passwords as they are stored
  • Generates passwords in accordance to Appendix A
  • Can judge strength of passwords
  • Has tools that allow management of time stamps and timers of passwords
  • Works on all of user’s intended applications and devices
  • Can be synchronized across different platforms and devices
  • Is well known and reputed

References

  1. https://www.sans.org/security-resources/policies/general/pdf/password-protection-policy
  2. http://www.sans.org/security-resources/policies/general/pdf/password-construction-guidelines
  3. http://www.sans.org/reading-room/whitepapers/authentication/clear-text-password-risk-assessment-documentation-113
  4. http://www.sans.org/reading-room/whitepapers/sysadmin/options-secure-personal-password-management-1287
  5. http://www.giac.org/paper/gsec/4002/password-management-automation/106398
  6. http://www.securingthehuman.org/newsletters/ouch/issues/OUCH-201310_en.pdf
  7. http://lifehacker.com/5529133/five-best-password-managers
  8. http://www.hongkiat.com/blog/password-tools/
  9. https://www.us-cert.gov/sites/default/files/publications/PasswordMgmt2012.pdf
  10. http://www.cnet.com/how-to/the-guide-to-password-security-and-why-you-should-care/
SSL/TLS: A Weak Point —

SSL/TLS: A Weak Point

Note: Article has been updated since original submission January 30, 2015 for a college assignment. 

Everybody is listening.  Whether a person is talking on the phone while waiting in line, or someone is conversing with another sitting in a food court, it is easy enough to see that anyone, purposely or not, can see and hear things that someone else does not want to be known.  The same is true of communications over the internet, which is why the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols exist.  TLS 1.0 is based on SSL 3.0, and they vary little.  As stated in Request for Comments (RFC) 2246, “The differences between this protocol and SSL 3.0 are not dramatic, but they are significant enough that TLS 1.0 and SSL 3.0 do not interoperate” (Dierks and Allen 4).  SSL 3.0 and TLS 1.2 (soon to be 1.3) are the most recent versions of the protocols, which work together to secure information sent over the internet.

These protocols are layered upon other protocols, such as TCP (Transmission Control Protocol) and IP (Internet Protocol), to secure internet communications.  Before SSL and TLS, TCP/IP protocols and the like did not have a means of protecting information against unwanted users (Garfinkel and Spafford 107).  TCP and IP allow for the passing of information between applications but do not necessarily secure the connection.  SSL and TLS are mechanisms that encrypt the information and the session between applications.  By using asymmetric and symmetric cryptography to encrypt the data between two applications, such as a client and server, SSL and TLS can maintain a safe session.  The first thing that occurs when a client initiates a session with a server is a handshake.  A handshake is a greeting and a first welcome sign that acknowledges another’s existence (Garfinkel and Spafford 690).  An SSL/TLS handshake begins with a ClientHello, a message containing the protocol version, a random string including a time stamp, a session id, a list of supported cipher suites, and a list of compression methods that the client supports (Garfinkel and Spafford 693).  The SSL/TLS server, upon receiving the ClientHello, will then respond with a handshake failure alert or with a ServerHello message that contains similar things to the ClientHello, except that the server will have chosen a cipher, which is an algorithm for encryption, and a compression method to use during the session (Garfinkel and Spafford 694).  The next possible steps are certificate exchanges and key exchanges between the server and the client; these will be discussed further when examining the weaknesses of SSL and TLS.  At the end of the handshake, the client and server send ChangeCipherSpec messages, which allow all following messages to be encrypted according to the agreed-upon cipher suite and compression method.  TLS adds further machinery, such as a record protocol that transmits information as records between applications.  A record carries more security-related information, such as the content type, data payload, and message authentication code (MAC) (Garfinkel and Spafford 690).  In either case, with the use of SSL and TLS, messages are sent between applications, and the connection is protected.
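
For a concrete view of the handshake from the client’s side, the sketch below uses Python’s standard ssl module against an illustrative hostname; the library carries out the hello exchange, certificate validation, and key agreement internally and then reports what was negotiated.

    import socket
    import ssl

    hostname = "www.example.com"              # illustrative target
    context = ssl.create_default_context()    # defaults: verify certificate and hostname

    with socket.create_connection((hostname, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            # The hello exchange, certificate check, and key agreement have
            # already happened by the time wrap_socket() returns.
            print("negotiated protocol:", tls_sock.version())   # e.g. 'TLSv1.2' or 'TLSv1.3'
            print("agreed cipher suite:", tls_sock.cipher())     # (name, protocol, secret bits)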

Successful use of the SSL and TLS protocols provides authentication of the server and client through digital signatures, data confidentiality through the use of encryption, and data integrity through the use of message authentication codes (MAC).  SSL and TLS are highly extensible and adaptive within internet security.  Some internet applications that are protected are web email, login credentials, and other stream-related information.  Yet, because the protocols are so extensible and adaptive to other protocols such as TCP and HTTPS, there is uncharted ground that is not protected.  While SSL and TLS do support popular protocols, they do not work well with others, such as UDP (Viega, Chandra, and Messier 20).  Specifically, SSL does not support non-repudiation (S/MIME is needed for that), and it does not protect against buffer overflows, race conditions, or protocol errors in the design or implementation of the application (Viega, Chandra, and Messier 21).  SSL, and therefore TLS, leaves many openings for compromised security.

When SSL 3.0 was released, there were conflicting issues with the older version, SSL 2.0.  For example, if one has not upgraded to the latest versions of SSL/TLS, it is possible to downgrade a connection so that SSL 3.0 behaves like SSL 2.0.  Considering that SSL 2.0 had errors, security can be compromised when a connection is rolled back in this way (Freier, Karlton and Kocher 63).  It seems easy enough to fix this problem by configuring the browser to allow only SSL 3.0 and TLS, but most people do not restrict their browser to the most recent versions.  Another issue associated with SSL is the certificate validation process.  During this process, the client extracts a public key from the certificate, while the server holds the corresponding private key, which is used later in the public key cryptography that protects the data (Garfinkel and Spafford 693).  This is the only validation that happens during the connection between server and client.  It is possible to strengthen certificate validation through Certificate Authorities (CA), third-party negotiators that deal with proving the validity of certificates for servers, but deciding which ones to trust is potentially a lot of work (Viega, Chandra, and Messier 17).  Certificates used as an authentication mechanism may prove to be faulty for SSL and TLS.  Adversaries could potentially obtain certificates representing people other than themselves.  CAs may make the mistake of believing someone is who they claim to be.  It becomes troublesome to deal with someone holding a “fake” certificate, a certificate given to the wrong person.  A method that deals with stolen certificates is the creation of Certificate Revocation Lists (CRL), in which the CA reports bad certificates and numbers them so that clients or servers can monitor their activity (Viega, Chandra, and Messier 16).  Yet, there are factors that delay this process.  For example, there may be a time delay before anyone notices what has been stolen.  For a CA, it can take time to update the CRL, not to mention the time for the client to download it.  The process of finding a potential certificate thief becomes lengthy when a client must depend on a CA acting in real time (Viega, Chandra, and Messier 17).  Also, it is questionable whether CAs alone provide substantial enough security.  They lack agility, which is important when trying to protect information in a timely manner.  In addition, some clients or servers may fail to check the entire contents of the certificate, and a potential attacker may be able to obtain credentials.  More often than not, people are unaware of what digital certificates they have access to, since most digital certificates are hidden in the web browser.  It is a weak point that some attackers may prey upon.  They know what to look for in SSL/TLS protocols.
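
To see what a client is actually expected to validate, the following sketch retrieves the server certificate over an established TLS connection; the hostname is illustrative, and getpeercert() only returns parsed fields after the default context has already verified the chain against trusted CAs.

    import socket
    import ssl

    hostname = "www.example.com"              # illustrative target
    context = ssl.create_default_context()

    with socket.create_connection((hostname, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=hostname) as tls_sock:
            cert = tls_sock.getpeercert()     # parsed fields of the validated certificate

    print("issued to :", dict(item[0] for item in cert["subject"]).get("commonName"))
    print("issued by :", dict(item[0] for item in cert["issuer"]).get("commonName"))
    print("expires   :", cert["notAfter"])    # clients should reject expired certificates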

Associated with certificates are private keys.  These keys are later used to help encrypt data being sent over a connection between client and server (Viega, Chandra, and Messier 15).  The possible error with using keys along with certificates is that they may not be well hidden on the server, specifically on its hardware.  An attacker who has access to that hardware is able to extract the key if the key is unencrypted (Viega, Chandra, and Messier 16).  Even if the key were kept in the system’s memory, it would have to remain unencrypted in order to allow new connections to be made.  Again, an attacker would most likely be able to access this information if they knew what they were doing.  In turn, SSL and TLS can fail to protect data because of weakly protected keys.

Since SSL and TLS attempt to defend data from attackers, they take a lot of effort.  This effort can be seen as the cost or consequence of the abilities of SSL.  Exchanging a vast amount of information back and forth between a client and server takes time.  SSL is slower than plain TCP/IP transmission because its main job is to encrypt and authenticate information over a connection.  One of the first steps when starting a session with SSL is handshaking.  The initial handshake is slow because it uses public key cryptography, which requires large keys, so the computation required to encrypt takes longer (Viega, Chandra, and Messier 12-14).  This is an issue because TCP/IP is not secure on its own.  The earlier it is in an SSL/TLS session, the less secure the session is likely to be.

Slow workings, faulty certificates, and weak keys are all possible openings for man-in-the-middle attacks.  In particular, the failure to validate certificates may make for a man-in-the-middle (MITM) attack, where an eavesdropper puts himself between the client and server, somehow gains access to their session credentials, and listens in or pushes his own agenda during the connected session (Oppliger, Hauser, and Basin 54).  Generally, MITM attacks occur when an attacker intercepts server or client messages by pretending to be the client or server through a proxy connection and listens to the data exchanged between the applications.  There are many variations of compromised connections that use a MITM attack.  One example is called a BEAST attack, or Browser Exploit Against SSL/TLS.  BEAST attacks assume that an attacker can see an encrypted session taking place between a client and server.  Hence, the attacker, if knowing what to look for, can also see the initialization vector used for encryption.  Initialization vectors (IVs) are used to randomize plaintext messages (information before being encrypted) during cipher block chaining, which is the encryption of data in blocks with each block’s encryption depending on the one before it (Rohit).  If an attacker were to have IV information, they could use it to uncover what the original plaintext message was (Rohit).  BEAST attacks deal with finding multiple IVs to get as much information as possible by sniffing the network.  An attacker will know that the plaintext message with the known IV is correct by checking the session cookie and seeing if the ciphertext matches.  It seems relatively easy for attackers to use BEAST, but one limitation is that it has to modify traffic over the connections used in the browser.  Also, when checking the plaintext message with the IV, it is only possible to guess one block at a time because the next block’s information depends on the one before it (Rohit).  Yet, it is important to note that SSL/TLS provides leeway with its encryption standards.  If the initialization vector of the cipher block chaining process can be exploited, it is reasonable to expect that other parts of the encryption process could be exposed as well.
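
The toy demonstration below shows why a predictable or reused IV matters in CBC mode: with the same key and IV, identical plaintexts encrypt to identical ciphertexts, which is exactly the kind of pattern a BEAST-style attacker probes for.  It assumes the third-party cryptography package; the key, IV, and plaintext are made up for the demo.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    iv = os.urandom(16)
    block = b"top-secret-data!"               # exactly one 16-byte AES block

    def encrypt_cbc(iv: bytes, plaintext: bytes) -> bytes:
        encryptor = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    # Reusing the same IV reveals that two messages are identical ...
    print(encrypt_cbc(iv, block) == encrypt_cbc(iv, block))               # True
    # ... while a fresh, unpredictable IV hides that fact.
    print(encrypt_cbc(iv, block) == encrypt_cbc(os.urandom(16), block))   # False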

Even though TLS is more recent, it has its vulnerabilities.  One of the key features of TLS is that when it encrypts data, it compresses it.  This is necessary in order to reduce the bandwidth used during internet sessions and to meet latency requirements while maintaining the security features of TLS (Hollenbeck).  Possible attackers can force a browser to compress and encrypt requests containing attacker-controlled data.  This is known as Compression Ratio Info-leak Made Easy, or a CRIME attack.  Attackers use this method as another way to see information protected by the SSL/TLS protocols.  First, assuming attackers are aware that an SSL/TLS connection is occurring, they target cookie information.  Cookies are stored pieces of data saved from a client’s previous sessions with a server that provide information on the activity of those sessions (Goodin).  These cookies are available during SSL/TLS connections because they are located in the messages between clients and servers.  In a CRIME attack, the attacker arranges for their own guesses to be compressed into the same request as the client’s cookie, making the attacker’s data look like it belongs in the connection.  Because compression is lossless, repeated data takes up less space: when a guess matches part of the cookie, the compressed request shrinks (Goodin).  By watching the size of the encrypted traffic, the attacker can tell which guesses match and, piece by piece, recover the cookie, which amounts to successful information stealing.  TLS tries to maintain the effectiveness of the browser by compressing data, but it may give up some security points in the process.
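
A tiny illustration of the side channel CRIME relies on, using only Python’s zlib: when attacker-supplied data matches secret data in the same compressed stream, the output is measurably smaller.  The secret and the guesses are invented for the demo.

    import zlib

    secret = b"Cookie: session=panda42"       # the value the attacker is after

    def request_size(attacker_guess: bytes) -> int:
        # Attacker-controlled data and the secret travel in the same compressed body.
        return len(zlib.compress(attacker_guess + secret))

    matching = request_size(b"Cookie: session=panda42")    # guess repeats the secret exactly
    unrelated = request_size(b"zqwvxjkfhgmbtrlpoiuyds!")   # same length, no overlap
    print(matching, unrelated)
    print(matching < unrelated)   # True: the size difference leaks whether the guess matched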

Going along with the idea of compression as a weakness, BREACH attacks focus on using compressed data against clients.  BREACH stands for Browser Reconnaissance and Exfiltration via Adaptive Compression of Hypertext.  During a BREACH attack, clients are forced, without knowing it, into using or visiting an attacker’s website (Leyden).  When a client visits the website, bytes of data from the attacker are made to match the ones encrypted in the connection between the server and client.  The web browser notices that there is an excess amount of data and tries to compress it to reduce the size of the transmission.  Eventually, an attacker can compile enough bytes to reconstruct useful information, such as an email address or a security token (Leyden).  Attackers do not need to know much in order to use this technique.  They do not need to know the encryption algorithm or cipher.  Their only job is to continually listen to the traffic between client and server.  Knowing that a connection exists is enough for attackers to get what they want.  SSL and TLS cannot do much to stop people from watching traffic.

In some cases, connections over the internet have to be re-authenticated.  SSL and TLS are in an already established state, and suddenly there may be a change on the client’s or server’s side where one must authenticate itself again.  This is known as renegotiation (Rohit).  Perhaps a user was buying something online and made it to checkout but wants to use their account with the website, or the server has timed out a session with a client, which makes it necessary to sign in with credentials again.  In an SSL renegotiation attack, an attacker inserts plaintext into a client’s request to a web server during renegotiation (Rohit).  The attacker’s plaintext is whatever they want: their own personal request, their own agenda.  The client will seem to have been the one to put forth the request, but really it is the attacker’s.  Renegotiation can be a fatal point depending on what the attacker requests, but there are limitations.  Attackers can only send their own requests and cannot alter the requests of the client and server during this process.  Hence, if an attacker wants something useful, they might request that an HTTPS (Hypertext Transfer Protocol Secure) connection be downgraded to HTTP (Hypertext Transfer Protocol).  HTTP is responsible for exchanging or transferring hypertext (interactive text used for the internet) (Goodin).  On its own, HTTP is not secured; HTTPS is able to use the attributes of SSL/TLS to protect hypertext (Goodin).  So, if an attacker’s request to change an HTTPS connection to HTTP is made during renegotiation, the web browsing session is compromised.  It fails without SSL or TLS being directly at fault.

In conclusion, it is fair to say that even with the most recent versions and capabilities of SSL/TLS, there are underlying security concerns.  The attacks mentioned (BREACH, CRIME, BEAST) are a few that have become more pertinent in recent years.  Attackers focus their attention on finding the tradeoffs that SSL/TLS makes in order to secure a session.  It could be a slower encryption step or perhaps the compression of data to save bandwidth.  SSL and TLS cannot do it all.  Although they seem like a miracle cure, they are still maturing.  Nothing is ever truly secure.  People have to start understanding how their information and their interactions are secured over the internet.  If they begin to understand how attackers do what they do, maybe then SSL/TLS will be used to the best of its abilities.  To truly have a secure connection, SSL and TLS, along with the other protocols they work with, have to be improved to the point where everyone knows about them.

 

Works Cited

Dierks, T., and Allen, C. “The TLS Protocol Version 1.0.” Request for Comments 2246 (1999): 9-10.

Freier, A., Karlton, P., and Kocher, P. “The Secure Sockets Layer (SSL) Protocol Version 3.0.” Request for Comments 6101 (2011): 4-6. Print.

Garfinkel, Simson; Spafford, Gene. Web Security, Privacy & Commerce. Sebastopol: O’Reilly Media, 2001. Ebook Library. Web. 24 Jan. 2015.

Goodin, Dan. “Crack in Internet’s Foundation of Trust Allows HTTPS Session Hijacking.” Ars Technica. Condé Nast, 13 Sept. 2012. Web. 26 Jan. 2015. <http://arstechnica.com/security/2012/09/crime-hijacks-https-sessions/>.

Hollenbeck, S.. “Transport Layer Security Protocol Compression Methods”. Request for Comments 3749 (2004): 1-4.

Leyden, John. “Step into the BREACH: HTTPS Encrypted Web Cracked in 30 Seconds.” The Register. The Register, 2 Aug. 2013. Web. 26 Jan. 2015. <http://www.theregister.co.uk/2013/08/02/breach_crypto_attack/>.

Rohit, T. “SSL ATTACKS – InfoSec Institute.” InfoSec Institute. InfoSec Institute, 28 Oct. 2013. Web. 26 Jan. 2015. <http://resources.infosecinstitute.com/ssl-attacks/#disqus_thread>.

Oppliger, Rolf, Ralf Hauser, and David Basin. “Protecting Ecommerce Against The Man-In-The-Middle.” Business Communications Review 37.1 (2007): 54-58. Communication & Mass Media Complete. Web. 24 Jan. 2015.

Viega, John; Messier, Matt; Chandra, Pravir. Network Security with OpenSSL : Cryptography for Secure Communications. Sebastopol: O’Reilly Media, 2002. Ebook Library. Web. 24 Jan. 2015.