FedRAMP Industry Day

December 23, 2011

So last Friday I attended the FedRAMP (Federal Risk and Authorization Management Program) industry day, and I have to give GSA credit for their approach to the “unveiling.” First, a little background on what FedRAMP intends to do (try to keep up with the acronyms!). Government agencies wishing to procure services from a cloud service provider (CSP) would leverage FedRAMP to find a CSP that is pre-authorized through the FedRAMP Joint Authorization Board (JAB). In theory, this saves time and reduces redundant effort, since the CSP would already have a baseline of controls tested. To become pre-authorized, CSPs would need to procure the services of a 3rd Party Assessment Organization (3PAO – I move to pronounce this “3-pow”). To read more about the characteristics, benefits, and overall process, check out the FedRAMP page.

This is a great idea, but GSA has tried similar things before, most recently the Risk Management Framework (RMF) Blanket Purchase Agreement (BPA), which was intended to standardize RMF services (for the traditional C&A) as part of their information systems security line of business (ISSLoB) effort. A few things are working in FedRAMP’s favor that were not in RMF’s:

  • The amount of visibility the FedRAMP program has gotten in the past couple of years
  • The amount of private industry support with vendors trying to find ways to logically C&A their cloud systems
  • Government eagerness to leverage emerging cloud technologies
  • Most importantly, the overall NEED!

There are a few other differentiators, but these are the major ones that stuck out to me. Anyway, I was surprised that more questions weren’t asked, and I’m wondering whether that reflects a thorough understanding of FedRAMP or a lack thereof. The “unveiling” was good, but some pretty tall orders were made, such as the claim that FedRAMP itself would improve real-time security visibility. O RLY?

We are still waiting on a couple of things from the program, some more surprising than others. The Concept of Operations (CONOPS) is due 2/8 and the actual list of cloud controls by 1/27, the latter being a pretty major milestone. The repository of authorization paperwork, however, is still going to be paper-based! There seems to be some difficulty in finding a location secure enough to store such a sensitive compilation of security information across so many private industry vendors. Hmm… I wonder what the problem really is here…

As a quick aside, Matt Goodrich said something at one point that really made me wonder how the government is looking at RMF and continuous monitoring. He said that many agencies are looking at developing solutions for ongoing assessment and authorization, but then said that this is also what they call “continuous monitoring.” Yikes. I’ve long disagreed with the direction continuous monitoring has been taken as a concept: it is treated almost exclusively as a vendor tooling issue and a network/asset-visibility problem. I was able to get over this by telling myself the larger concept was something akin to ongoing authorization. To equate the two feels like taking a couple of steps back. Now, I have no idea how accurate it is for me to speculate either way (that agencies do or do not equate the two), but there’s certainly a very important distinction to be made here. More on this in my next post, though.
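To make the distinction concrete, here’s a toy sketch – entirely my own construction, with made-up names, control IDs, and thresholds, not anything FedRAMP or NIST prescribes – of how I see the two concepts relating: the monitoring feed is an input, and the authorization call is a risk decision made on top of it.

```python
# A toy sketch (my own construction, not FedRAMP's or NIST's): continuous
# monitoring produces visibility data, while ongoing authorization is a
# separate risk decision that merely consumes that data.

from dataclasses import dataclass

@dataclass
class ScanFinding:
    asset: str
    control: str   # a hypothetical 800-53 control ID, e.g. "SI-2"
    severity: int  # 1 (low) through 10 (critical)

def continuous_monitoring_feed() -> list[ScanFinding]:
    """What the vendor tools actually give you: asset/network visibility."""
    return [
        ScanFinding("web-01", "SI-2", 7),   # e.g. missing patches
        ScanFinding("db-01", "AC-17", 4),   # e.g. weak remote-access config
    ]

def ongoing_authorization(findings: list[ScanFinding], risk_tolerance: int) -> bool:
    """The separate step: a risk decision made *on top of* the feed.
    Reduced here to a toy threshold check; in reality it is the
    authorizing official's judgment, not a tool's output."""
    worst = max((f.severity for f in findings), default=0)
    return worst <= risk_tolerance  # True = the authorization stands

if __name__ == "__main__":
    findings = continuous_monitoring_feed()
    print(f"Monitoring produced {len(findings)} findings")
    print("Authorization stands?", ongoing_authorization(findings, risk_tolerance=6))
```

Collapsing the second function into the first is exactly the step backward I’m worried about: the tools can tell you what they see, but they can’t make the risk call for you.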

[Photo: Gordon Gillerman speaking at the industry day]

The last thing about the FedRAMP program I wanted to point out is that there is definitely a contingent of industry participants, the ones who want to become 3PAOs, who seemed frustrated with the complexity of the application process. I gathered this from the publicly asked questions and the conversations I had and overheard. I would agree with GSA here: the process isn’t complicated, but it’s not a cakewalk either. What would be the point of FedRAMP if any company that conducted Security Test and Evaluations (ST&Es) could be grandfathered in as an authorized cloud assessor without knowing some of the important things about cloud environments? The application process is designed to make sure the people who claim to know how to assess clouds actually do know a thing or two about it.

Edit: Reading the FedRAMP memo in greater detail, there are some interesting provisions outlining its required use. So perhaps the most important differentiator is that FedRAMP is actually a must.

Curtains up…

September 29, 2011

I recently had a conversation with a few colleagues about migrating systems to data centers. A problem we’re encountering, unfortunately, stems from the arch-nemesis of any kind of efficiency: politics. From the perspective of cataloging and accounting for all security controls, data centers provide two things – a common set of inheritable controls and a documented set of outsourced services for assets within your boundary.

The problem is differentiating the two; the bigger problem is getting operational managers and security managers to agree on the scope of the security services inherent in certain operational services. This matters because people, quite justifiably, prefer the elegance of simplicity, even in cases where we cannot afford to simplify further. In this particular case, both outsourced services and inherited controls are being referred to as “common controls,” and this is leading to a deep level of confusion that leaves everyone walking away from meetings wondering whether they really agree or not.

Here’s an example. Let’s say system A is a general support system that hosts system B (along with several others: C, D, etc.) within a data center located in Alaska. System B is a highly specialized application that provides clearance information to investigators, but it is managed remotely by personnel in Seattle. System B inherits its boundary protection controls (firewall, IDS, etc.) from system A. These devices are outside of its accreditation boundary; it inherits these controls with (potentially) no management authority over them.

Until today, backups of all application data for system B were performed remotely by a guy management hired solely to ensure backup data was managed in-house and physically located far away from the production servers. Today, management realized that the hosts of the Alaska data center can do the backups for them at a fraction of the cost. System B management fires its “Back-up Specialist” and purchases the backup service instead.

Here’s where the confusion I’m seeing steps in. System B management and much of its personnel now consider backups to be outside of their accreditation boundary; the backups are not viewed as an outsourced service. The problem is compounded when Seattle management asks for a complete list of all security services being provided, mistaking this list for a set of common controls. Not realizing that inherited controls and outsourced services are mixed together, they will claim responsibility for only the leftovers.
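If it helps, here’s how I’d sketch the distinction in a control inventory – a contrived example with my own naming, not anything out of NIST guidance, though the control IDs are real 800-53 identifiers. The point is that once you tag each control with its provenance, the “who still owns this?” question answers itself; flatten everything into “common controls” and it doesn’t.

```python
# A contrived model (my own naming, not NIST guidance) showing why lumping
# inherited controls and outsourced services into one "common controls" bucket
# loses exactly the information needed to know who is responsible for what.

from dataclasses import dataclass
from enum import Enum

class Provenance(Enum):
    INHERITED = "inherited from the hosting GSS"   # e.g. system A's firewall/IDS
    OUTSOURCED = "purchased service"               # e.g. the new backup contract
    LOCAL = "implemented within our own boundary"

@dataclass
class Control:
    control_id: str
    description: str
    provenance: Provenance

system_b_inventory = [
    Control("SC-7", "Boundary protection (firewall, IDS)", Provenance.INHERITED),
    Control("CP-9", "Information system backup", Provenance.OUTSOURCED),
    Control("AC-2", "Account management", Provenance.LOCAL),
]

# The flattened "common controls" list Seattle management asked for.  Note
# that it cannot answer the question "which of these are still our problem?"
common_controls = [c for c in system_b_inventory if c.provenance is not Provenance.LOCAL]

# Keeping provenance answers it: the outsourced backup service is still inside
# system B's accreditation boundary and still theirs to oversee.
still_ours_to_oversee = [c for c in system_b_inventory if c.provenance is Provenance.OUTSOURCED]

for c in still_ours_to_oversee:
    print(f"{c.control_id}: {c.description} -> {c.provenance.value}")
```

Drop the provenance column and everyone will, quite reasonably, claim only the leftovers.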

Redefining the lexicon is tough, and depending on the momentum of the entanglement it can be impossible; fixing the problem leaves one feeling like a foreman trying to build the Tower of Babel. My fleeting desire here is the exhausted information security mantra: get the security folks involved from the start. If you can’t do that, STALL!

 