Identity as the Perimeter

For many years, an enterprise's perimeter was its LAN and WAN. The popularity of VPN-based remote access extended that perimeter to the remote presence of employees, albeit usually for short bursts of time.

As trends like cloud-based services and BYOD emerged, enterprises faced a daunting challenge in protecting their data. In the new-age network, data is hosted (e.g., on public cloud services) and accessed (e.g., from laptops and phones) beyond the enterprise firewall. Moreover, employees want ever more flexibility in accessing data: wherever they are, on whatever they carry.

RSA's Jason wrote a blog post describing the (potentially outdated) strategy of an information security professional he met: cut off access to anything that carries a hint of risk. Jason identifies both the problem with and the side effects of that approach.

Here are the key assumptions enterprises need to make regarding their data:

  • Data takes multiple forms: e.g. Email, documents, code, tools, configurations and employee personal data
  • Each form of data might need different levels of access in terms of confidentiality and integrity: e.g. read-only, read-write for owner, write-once, privileged read-only and limited access
  • Data gets hosted at multiple locations (often beyond the firewalls of the enterprise): e.g. E-mail service provider, private data centers, private clouds, shared public clouds
  • Data gets accessed from multiple locations (often beyond the firewalls of the enterprise): e.g. desktops, laptops, phones, and, to take it a step further, TVs and car infotainment systems capable of reading your email.

Centrify's Tom Kemp shares his thoughts on making identity the new perimeter. Treating identity as the new perimeter has the potential to address many of the challenges arising from the assumptions listed above.

  • An identity controlled by the enterprise can govern access to data in all its forms.
  • Enterprises can use single sign-on (SSO) solutions that go beyond two-factor authentication to provide on-demand access to data using identity as the primary factor.
  • SSO solutions make it easy for enterprises to control identity-driven access consistently across multiple service providers: public clouds, internal data centers, and private clouds.
  • SSO solutions, combined with device remote access/control solutions, make it easy for enterprises to control the lifecycle of data persisted on nomadic devices such as phones. This helps when a device is no longer tied to the same identity.

There is a lot of mindshare building around managing identity and making it the primary factor in access management. As Jason observes in his article, identity management should go well beyond two-factor authentication. Context should be combined with identity to make more meaningful decisions about granting access to privileged information. That requires wiring several identity management and analytics products together to dynamically determine access levels.

Google already does this for its own services. If you log in from an unusual location, device, or application, it can enforce additional steps to verify your identity. I am really impressed (but not at all surprised) that Google takes this beyond location and device to the application level. For example, Google tracks which browser you typically use on your desktop to access Drive; if you switch, it notifies you of the change (and, depending on context, often challenges you with additional checks).

I take Google's approach as an exemplary first step in augmenting identity with contextual data. As identity management solutions evolve, enterprises can rely on independent, collaborating solutions that establish identity. The collaboration among these solutions would center on determining the user's context and deciding whether identity can be established unambiguously within that context. As the definition of the perimeter shifts to center on identity, these emerging trends in identity management are both welcome and necessary.




Driverless Cars: Moral and Legal Considerations

Driverless cars are no longer a fantasy. Though still far from general-purpose use, the technology is evolving by leaps and bounds, thanks to players like Google and Tesla making steady progress. As it matures and enters public life, several legal and moral issues are going to crop up.

A recent issue of Communications of the ACM carries a nice article describing the moral challenges of driverless cars. In this thought-provoking piece, the author presents scenarios that raise ethical and moral questions. To quote from the article:

However, should an unavoidable crash situation arise, a driverless car’s method of seeing and identifying potential objects or hazards is different and less precise than the human eye-brain connection, which likely will introduce moral dilemmas with respect to how an autonomous vehicle should react …

Driverless cars have the potential to fare better than humans 90% (or more) of the time. But the remaining small percentage of situations brings ethical and legal dilemmas in which humans would fare vastly better than the technology in driverless cars. In these situations, a human driver typically faces multiple choices that vary in the amount of harm done to property or people. The sensors and algorithms used in driverless cars (as they will stand for the next few years) may be limited in identifying the course that causes the least harm. When the system operating a driverless car makes a suboptimal decision, there can be serious legal and ethical ramifications.

As discussed in the article mentioned above, handing control back to a human driver in an emergency is far from realistic, given the response times a disengaged human needs. Even automation around a fully engaged driver's actions is subject to legal questions about responsibility. For example, an article in the WSJ discusses how Tesla's autonomous car-passing feature intends to pass responsibility to the driver by making it driver-initiated (e.g., turning on the signal). Given that the same driver action in a car with and without these autonomous features has drastically different ramifications, states like CA, NV, and FL are mandating special registrations for drivers of autonomous vehicles, based on the level of autonomy of the vehicle.

Beyond the question of responsibility, which touches the legal aspects, driverless car technology needs to continually improve on the ethical questions that arise in an emergency. For example, is it okay to crash into the car in the next lane to avoid a bicyclist who is jumping a pedestrian signal?

Then comes the question of the integrity of the autonomous features. What is the possibility of these features being tampered with or becoming outdated? Will Tesla's over-the-air updates become the standard for automakers across the globe?

In a nutshell, the legal aspects of driverless cars are best handled by training drivers on the specific features. The ethical aspects, however, require more maturity in the technology. Add the complexity of driving rules that vary across geographic regions (states, countries), and we are going to see a lot of technology evolution in this space.

Here are a few lingering thoughts I have regarding driverless cars; I am anxious to find the answers sooner rather than later.

  • What happens when road sign standards change across borders? E.g., colors and sizes of signs differ across states, and speed limits are posted in miles vs. kilometers across countries. We may soon see dashboard settings to let the car know (or confirm) that you are driving in New Jersey, Maine, or Canada.
  • Cars may be certified to run autonomously only in certain areas, as in "This car can use its autonomous features in CA and NV, but not in AZ."
  • Cars will need to identify the speed limit on a signpost while ignoring a similar-looking sign on a billboard next to the freeway. Will they do it with better sensors, or by depending on a networked repository (say, Google Maps) of speed limits in the area?
  • Visual identification of congestion and taking alternate routes: pretty simple, given current advances in maps technology.
  • In situations where a disengaged driver is unaware of the circumstances that led to an accident, cars may need legally admissible sensor logs. In other words, cars would carry scaled-down versions of the black boxes found in aircraft.
  • What if someone hacks the "car stack"? How does one find out? Do we get a periodic (smog-check-like) stack check and certification? If this looks like fantasy, please check out the Tesla hack and fix from a couple of days ago.

And here is an extreme one:

  • If it turns out that the damage caused in an accident by an autonomous car with a disengaged driver is much higher than the damage an engaged driver operating without autonomous features would have caused, what are the insurance ramifications? Would insurance companies track the maturity level of the autonomous features and price premiums accordingly?

I do live in interesting times.

Availability is a fundamental requirement of Security

When people talk about security, they usually picture confidentiality and integrity. However, availability is equally important in defining security. In fact, major standards and certifications define security as the combination of confidentiality, integrity, and availability.

There is a lighthearted quote in the security community: the most secure computer may be the one that is not connected to any network. But such systems hardly play any role in providing meaningful services to customers and consumers. The goal of a security expert is to ensure that the system (and its services) is available to all intended users while preserving the confidentiality and integrity of the data, the system, and its services.

For an end-user-facing service (say, a shopping site or a cloud service) to operate as expected, several internal and public-facing infrastructure services must operate in tandem. A shopping site might require its DNS service (public), CDN service (public), payment exchange (public), and private cloud service (internal) to function properly to deliver its online services to end customers. As online services grow more comprehensive, more and more microservices, infrastructure services, and housekeeping services play a major role in determining the health and availability of the overarching (end-user-facing) service.

As big companies increasingly outsource their IT infrastructure to cloud service vendors (DNS, mail, and compute infrastructure, to name a few), they increasingly depend on the availability of each of these components. As cloud service providers mature their infrastructure services, they become more and more alluring to small enterprises and startups, given the lower entry cost and minimal effort to scale up. In a nutshell, the availability of services outside a company's perimeter, irrespective of the company's size, becomes an essential element of offering secure services to its employees and customers. On a side note, the definition of a company's perimeter is fast diluting as more and more cloud service providers offer infrastructure services.

Even for companies that host their infrastructure services internally, the availability of these services is critical to providing secure services to their end customers and employees.

Lack of availability of contributing components severely impacts the security of an online service. Let's look at a simple example. When an authentication and authorization component operates at lower availability levels, its users (developers, IT admins) introduce workarounds to lessen the impact of unavailability; for example, they may cache credentials or tokens for longer. That makes any online service depending on that authentication and authorization mechanism more vulnerable than one built on a highly available authentication and authorization service. As more workarounds pile up to mask the unavailability of internal components, the online service accumulates more holes in its security.

Every developer and IT engineer should work towards providing hooks for availability metrics and augmenting them with actionable operating procedures for when availability is impacted. These hooks and procedures should be fine-tuned over time as new factors influence availability.
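As a minimal sketch of such a hook (the health endpoint, log path, and alert address below are hypothetical), a small cron-driven probe can both record an availability metric and kick off the operating procedure when a dependency looks down:

#!/bin/bash
# availability-probe.sh: minimal availability hook (endpoint, log path and alert address are illustrative)
ENDPOINT="https://auth.internal.example.com/health"
LOG="/var/log/availability/auth-service.log"

# record HTTP status and response time so trends can be analyzed later
status=$(curl -s -o /dev/null -w "%{http_code} %{time_total}" --max-time 5 "$ENDPOINT")
echo "$(date -u +%FT%TZ) $status" >> "$LOG"

# trigger the documented operating procedure when the dependency is unavailable
if [ "${status%% *}" != "200" ]; then
    echo "auth service unavailable: $status" | mail -s "Availability alert" oncall@example.com
fi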

Every security expert should treat the availability of an online service and of its internal components as a fundamental requirement for ensuring the security of that service. Ample bells and whistles (in the form of monitoring and management infrastructure) should be set up to catch availability issues within an online service's ecosystem. Trends of degraded availability in a component or service need to be detected and acted upon.



Chicken Wings and Allappachadi (Ginger Pickle)

When it comes to using allappachadi (allam pachadi, ginger pickle), I am a true-blue Telugu man. Don't assume allappachadi means the red chutney that hotels serve alongside the white chutney; I am talking about the allappachadi we typically put up once a year. I am the genuine Telugu sort who mixes a little water or curd into that thick pickle and dips any tiffin into it. Between us, allappachadi tastes wonderful with idlis and dosas, and even better with pesarattu upma. It is my firm belief that the delight of having Tenali Ramalinga beside Sri Krishnadevaraya is exactly the delight you get when allappachadi sits beside pesarattu upma.

One evening last week, after taking care of all my urgent chores (meaning office emails, personal emails, Facebook, Twitter, and so on), I got down to having a little snack. In front of me were some wonderfully tempting crispy chicken wings, along with their natural companion for dipping, barbecue sauce. But if we don't experiment with food, why would we be who we are? Looking around, I took the allappachadi in front of me and thinned it down in our own style (so that it looked more or less like barbecue sauce). I sank into the sofa, propped my feet on the table, and, watching TV, got started on the snack. The moment I gave a chicken wing a good roll in the allappachadi and popped it into my mouth, an involuntary "Mahaprabho!" escaped me.

There is a story behind this "Mahaprabho." In the movie Subha Sankalpam, Kalatapasvi Viswanath garu, who played the role of Rayudu, says "Mahaprabho" in one scene while describing the taste of the fish curry made by Dasu (Kamal Haasan). Ever since I watched that movie, whenever any food is really good, "Mahaprabho" is what comes to me.

I probably don't need to tell you that the chicken wings and allappachadi combination that brought out that "Mahaprabho" was gone in moments. If you like experiments of this kind, do give it a try yourself.

authbind vs iptables on AWS

Here is a short description of the scenario I was working on. I am using a standard AWS AMI to run Tomcat (tomcat7, to be specific). By default, on AWS AMIs (and many other off-the-shelf Unix-based servers), Tomcat (or any other program running with non-superuser credentials) can't bind to privileged ports. However, Tomcat needs those privileged ports (443 for TLS and 80 for standard HTTP) to serve public-facing pages.

Running Tomcat as the superuser is a really bad idea (the why is beyond this article), so there are a few tricks to make Tomcat work on privileged ports.


There is a lot of mindshare around authbind when it comes to hosted environments. The authbind manpage describes how it can be used to let a program bind sockets to privileged ports. However, on a standard AWS AMI you may face some challenges using authbind, and for automated environments (read: Chef) in AWS, I found authbind more complicated to work with.
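For comparison, a typical authbind setup looks roughly like the following. Treat this as a sketch: the package may not be in the default Amazon Linux repositories, and the tomcat user name and Tomcat startup path are assumptions on my part.

# install authbind (package availability varies by distribution)
sudo yum install -y authbind        # on Debian/Ubuntu: sudo apt-get install authbind

# allow the (assumed) tomcat user to bind ports 80 and 443
sudo touch /etc/authbind/byport/80 /etc/authbind/byport/443
sudo chown tomcat /etc/authbind/byport/80 /etc/authbind/byport/443
sudo chmod 500 /etc/authbind/byport/80 /etc/authbind/byport/443

# start Tomcat under authbind; --deep applies the permission to child processes
# (the startup script path is illustrative; older authbind versions also require an IPv4-only JVM)
sudo -u tomcat authbind --deep /usr/share/tomcat7/bin/catalina.sh start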


Port redirection using the NAT features of iptables is simple and straightforward. However, it requires additional Tomcat configuration to use proxy mode on the privileged ports.

Here is the NAT configuration using iptables.

sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo /sbin/iptables -t nat -I PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 8443
sudo service iptables save

Once this is done, all inbound traffic on port 80 is redirected to 8080, and likewise traffic on 443 is redirected to 8443. This way, Tomcat can still bind to 8080 for HTTP and 8443 for TLS while serving incoming connections on 80 and 443 respectively.
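To confirm the redirect rules are in place, you can list the NAT table. Note that PREROUTING applies only to traffic arriving on the network interface, so test the redirect from outside the instance rather than from localhost.

# list the PREROUTING chain of the NAT table with rule numbers
sudo /sbin/iptables -t nat -L PREROUTING -n --line-numbers

# then, from a machine outside the instance:
#   curl -I http://<your-instance-public-hostname>/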

When a client program queries Tomcat for port information, Tomcat should report ports 80 and 443 instead of 8080 and 8443. To ensure that, use Tomcat's proxy support. Here is the additional configuration for the connector settings in server.xml:

<Connector port="8443" proxyPort="443" .../>
<Connector port="8080" proxyPort="80" .../>

Other Considerations

There are better ways to handle this port redirection when you have front-end load balancers and/or proxy servers in place; they help mitigate more issues than just the redirection problem. However, the iptables approach beats the authbind approach when you are running a single server on AWS without a lot of additional infrastructure and configuration in place.

Data Insurance: Into the Limelight and Mainstream

Compared with other inevitable elements of human life like death and taxes, insurance has a very short history. In terms of evolution, though, the concept has been constantly changing and continuously embracing new domains. Insuring property, life, health, beauty, athletic talent, and limbs is commonplace now. Data insurance, once limited to multi-billion-dollar corporations and even then to narrow scenarios, is now taking center stage.

The drivers for data insurance have existed for quite some time, but they had not permeated personal life and organizational practices the way they do now. The key drivers pushing the trend towards data insurance are the protections we need against data loss, data compromise, and data misuse.

As organizations expand their presence across the web, social networks, and mobile applications, they capture more and more data. The rest of this article focuses on two categories of that data.

  • Acquired data: All the customer information, employee information, and any other user information collected directly or indirectly from users constitutes acquired data. By nature, this class of data is highly likely to contain sensitive information, including personally identifiable information (PII), credit card numbers, etc.
  • Generated data: All the housekeeping, analytics, and user-behavior data in an organization falls into this category. This data is vital to delivering a better experience to both end users and internal teams. It is mostly generated by an organization's web/mobile applications that interface with end users, and may be augmented with data inferred from other interactions such as support calls and email exchanges.

Any compromise of acquired data leads to a very big exposure: loss of face, legal tangles, and customer loyalty issues. The data compromises detected at companies like Target and Home Depot have led to customer unrest, loss of loyalty, and severe financial fallout from legal consequences.

Any compromise of generated data makes an organization limp (often heavily) in its business processes. A generated-data compromise mostly leads to inefficiencies and exposure of the secret sauce to competitors.

The impact of a compromise of generated data can't be taken any more lightly than that of an acquired-data compromise. Generated data may also include intellectual property that could hurt a company in the long run if it is compromised.

Digital (or digitized) data captured by individuals is also increasing in prominence, value, and risk of compromise. Whether it is celebrities' personal pictures or individuals' tax data, the risk associated with a compromise of this data keeps growing. As avenues of data access multiply (e.g., health data accessed via a wearable device), so does the potential for compromise of personal data.

Given this increased focus on data and its risks, we see a bigger shift towards corporations and individuals insuring their data. Data insurance is taking paths less traveled by insurance companies in the past, and packages now cover a wide variety of data sets.

Just as people undergo a set of prerequisite tests before taking out a new health insurance policy, data sets might undergo audits covering the access controls and security risks associated with them. We may also see re-audits during data insurance renewals to re-validate those controls and risks.

The key factor in data insurance is determining the value of data. Life insurance policies usually cover sums like 5x annual income; vehicle insurance usually covers up to the Blue Book value of the vehicle. Coming up with a valuation for data is not that straightforward. The valuation process might differ greatly between acquired and generated data. Unlike the steady depreciation of a vehicle's Blue Book value, the value of data may either decrease (data that becomes stale over time) or increase (with volume, or with increased sensitivity of the same data). Data insurance companies and the insured organizations and individuals will often re-evaluate the value of data to optimize costs and minimize the impact of exposure.

In summary, here are some of the primary factors that will shape how data insurance evolves:

  • Categorization of data
  • Valuation of data
  • Data audits

As data insurance hits the mainstream, all these factors will see market growth and standardization beyond what we have today.




LibreSSL

LibreSSL is a recent fork of OpenSSL. Its goal is to provide a more secure alternative to OpenSSL; the developers who forked the code feel that OpenSSL is beyond repair at this point. Quoting from the LibreSSL website:

LibreSSL is a version of the TLS/crypto stack forked from OpenSSL in 2014, with goals of modernizing the codebase, improving security, and applying best practice development processes.

The best documentation of LibreSSL's features (and default configuration) is in the release notes of OpenBSD 5.6. Looking at the list, this is an impressive push towards securing the implementation by default: without worrying too much about backward compatibility, some of the less secure configurations and protocols are simply left out of the implementation.

By dropping support for a bunch of hardware engines and platforms, LibreSSL probably has fewer things to worry about. For example, dropping support for big-endian i386 and amd64 systems liberates it a bit; with the classic adopters of big-endian architectures eventually becoming bi-endian, there is not much to lose here, in my opinion. However, reusing standard C library routines like malloc() and snprintf() could take an interesting turn. Dropping Kerberos support is interesting too: don't we still have a lot of the academic community working on it?

I like changes like dropping SSLv2 support and no longer using the current time as a random seed, among others.

There have been several discussions in the past on which of these open-source SSL implementations is better. Being a legacy implementation, OpenSSL currently requires a considerable amount of configuration to make it secure. From that viewpoint, LibreSSL might look better in terms of out-of-the-box readiness for a more secure deployment. However, in the world of automated deployments and continuous integration, recipes exist to configure OpenSSL-based servers to avoid the less secure protocols and algorithms.
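As an illustration of what such a recipe typically ends up writing (these are standard Apache mod_ssl directives; the exact protocol and cipher choices here are only an example, not a recommendation from this article):

# Apache mod_ssl hardening snippet: drop legacy protocols and weak ciphers
SSLProtocol         all -SSLv2 -SSLv3
SSLCipherSuite      HIGH:!aNULL:!MD5:!RC4
SSLHonorCipherOrder on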

I am not sure whether LibreSSL will surpass OpenSSL in adoption, but I am sure glad to see a drive towards being "more secure by default."


Swachh Bharat Campaign: My Thoughts

Now that the initial euphoria around the Swachh Bharat initiative has (apparently) died down and people are settling back into their normal routines, here are my thoughts on this great initiative.

The Swachh Bharat initiative is a long-term wish of mine for India come true. The moment I set foot in the Western Hemisphere almost two decades ago, I realized how different surroundings can be made to look. Since relocating back to India a while ago, lack of cleanliness has been one of my big pain points, one I have been trying to fix wherever I can.

The Swachh Bharat initiative by our Prime Minister Shri Narendra Modi is spot on, and we all should strive to see a clean and green India. However, like many good initiatives, this one risks people getting carried away and executing it the wrong way.

For an initiative to gain popularity, we need to document either widespread participation or measurable results. Some initiatives become popular through participation, others through publicizing sustained results. Often, people take the first route and document participation: three hundred people posting their pictures of an event on a social networking site earns it more popularity than documenting the fact that three thousand people actually participated.

People seem more inclined to post their participation in Swachh Bharat by clicking a few pictures while cleaning up a road or premises. I haven't seen anyone post a picture of a road or premises that stayed clean over a period of time.

In other words, instead of fixing the symptoms, we should fix the root cause and make sure the symptoms don't show up time and again. That is the most sustainable path to success.

For Swachh Bharat to become a lifestyle (not just an initiative), we need to focus on the following:

  • Reducing the opportunities to make any road or premises unclean. For example, Indian Railways has come a long way in keeping many platforms and stations clean compared to 15 years ago. The tracks, compartments, and some stations are not clean enough yet, but there has been good improvement recently. All they did was require every vendor to keep a trash bin next to the stall and increase the number of general-purpose trash bins. That led people to eventually form the habit of dumping waste in trash bins rather than on platforms. We need a similar approach to ensure people participate more in keeping things clean than in making things clean.
  • Ensuring that people understand the importance of keeping things clean. We need to slowly but surely eradicate the "not my job" attitude towards keeping public and common places clean. Part of that comes from legislation (I like the positive impact of the "no smoking in public places" rule) and the rest should come from people's belief and passion. This is where politicians and celebrities can help by taking the message to the masses. I like a celebrity's picture of cleaning a road, but it should somehow translate into the message of keeping things clean in the first place.
  • Cleaning up. This is how the initiative is currently perceived in the mass media. Even though it is a good start, it should slowly move backstage and give room to the other two focus points above. Clean-ups should be regular, and can even be voluntary efforts by people whose role it is not, but they shouldn't be merely momentary.

In summary, I want to see Swachh Bharat become a lifestyle rather than just an initiative by our Prime Minister. We should all focus on keeping places clean rather than cleaning them up as an afterthought. That way, we can head towards a sustainable Swachh Bharat.

Shellshock bug and the risks

Bash, the quarter-century-old shell present on almost all popular Unix-based systems, has been found vulnerable. The exploit works by injecting specially crafted values into an environment variable, which get executed when a shell command is invoked. Once an exploit gets that far, there is hardly any limit on what can be executed as part of the shell command.

The problem is made worse by the fact that many day-to-day, network-facing services can use bash internally. For example, CGI scripts on web servers, convenience utilities offered by network routers, and other limited command-execution tools might be the key vulnerability on public networks and guest-access private networks. MITRE warns that sshd with ForceCommand is a potential attack vector.

The bug is being called the Shellshock bug or the bash bug. Red Hat's security blog article is one of the earliest to discuss Shellshock in detail. Robert Graham of Errata Security is the best-known tracker of the issue and posts ongoing observations and comments on his blog and Twitter account.

Here is how you can check whether the bash on your system is vulnerable. If the first line of output is "vulnerable", patch your bash package.

$ env x='() { :;}; echo vulnerable' bash -c "echo test completed"
 test completed

For web servers, here is the suggested test:

$ curl -i -X HEAD "" -A '() { :;}; echo "Warning: Server Vulnerable"'

The output looks something like the following listing. If it contains the "Warning" text, it is highly likely that the web server's bash (and any CGIs based on bash) is vulnerable. A clean result does not assure that the system is safe, though; you may still have other bash-based CGIs that are vulnerable.

HTTP/1.1 200 OK
Date: Fri, 26 Sep 2014 02:51:52 GMT
Server: Apache
X-Powered-By: PHP/5.4.32
Link: <>; rel=shortlink
Content-Type: text/html; charset=UTF-8

Since the Shellshock bug has existed for quite a while, virtually all versions of bash currently in active use are likely vulnerable. Patching some devices may be trivial, but several others may be hard to patch.

  • Servers running services like web/FTP might be vulnerable if their CGI scripts end up using bash. Invoking bash from PHP code is considered not vulnerable, unless there are ways to circumvent the PHP code's input validation. The Red Hat article mentioned above links to instructions for fixing this on Red Hat variants of Linux; for Ubuntu, this is a good thread to follow.
  • Desktops using network-facing services like DHCP over wireless and sshd are vulnerable as long as those services internally run bash commands or use bash as the session shell. There are still discussions about whether Mac OS X's DHCP is vulnerable, because Apple modified its DHCP client and claims its DHCP utilities don't use bash internally. Mac OS X branched version 3 of bash and does its own updates to the shell. There are instructions on how to patch OS X, tailored more for Unix admins (and requiring Xcode) than for normal users.
  • There are suggestions to rename bash to a different name, but that might break more things than it fixes. Use this technique with utmost caution.
  • Beyond desktops and servers, devices like internet routers may be vulnerable through the utilities and services they offer. For these devices, waiting for vendor-released patches is the best option, but explore the possibility of turning off those convenience utilities in the meantime.

Errata Security also has notes on the wormable nature of the Shellshock bug, so patch your bash package as early as you can.
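On most distributions the fix is just a package update away; roughly (the package manager depends on your distribution):

# RedHat / CentOS / Amazon Linux family
sudo yum update bash

# Debian / Ubuntu family
sudo apt-get update && sudo apt-get install --only-upgrade bash

# rerun the earlier test to confirm the fix
env x='() { :;}; echo vulnerable' bash -c "echo test completed"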

Upcoming AWS / EC2 instance reboot

If you are using AWS EC2 instances, a reboot of most of those instances is on the horizon. AWS has announced that the reboot is scheduled between 02:00 GMT on September 26th and 23:59 GMT on September 30th.

Read more about the reboot on Gigaom and RightScale. Technical forums on AWS and other sites are already buzzing with traffic discussing the potential impact and how to keep services from being affected.

Given the urgency and the number of instances impacted, the patch likely addresses a security vulnerability. The actual details of the patch and the issues it fixes will be known around October 1st.

Summarizing various discussions on related forums, here is what to watch out for during this AWS / EC2 instance reboot:

  • The reboot is not limited to any single availability zone; it spans all availability zones.
  • The good news is that EC2 instances in all availability zones are not rebooted at the same time, so if your instances span multiple availability zones, you are relatively safe.
  • The reboot does not impact instances of type T1, T2, M2, R3, and HS1. However, if the patch fixes issues that affect those instance types too, you might be on your own. We will know more around October 1st.

Here are a few quick checks for those who are impacted.

  • Check your mailbox for a notice from AWS; it is likely to give more details about the reboots, their impact, and the schedules.
  • Ensure that the key services on your instances are configured to restart automatically when the system boots (see the sketch after this list). It may sound silly, but I have seen code that takes good care of newly spawned instances yet doesn't handle reboots that well.
  • Ensure that your network paths (non-Elastic IPs, Route 53 entries, S3 buckets) survive reboot of the instances.
  • For those whose instances are NOT rebooted by AWS, watch out for the issues fixed by AWS during this reboot and evaluate their impact on your instances. Take corrective measures as soon as possible.
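As a minimal sketch of the auto-restart check on a sysvinit-style Amazon Linux instance (tomcat7 is just an example service here):

# check whether the service is enabled for the default runlevels
sudo chkconfig --list tomcat7

# enable it to start on boot if it isn't already
sudo chkconfig tomcat7 on

# optionally, reboot a non-critical instance yourself and verify:
#   sudo service tomcat7 status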

For those who can afford to be heroic: why wait until AWS reboots your instances? Reboot them yourself, one availability zone at a time, and test your resilience.