Why ApacheCon

It’s the middle of the night, just hours before my return flight home, and I can’t sleep.  The tape recorder inside my mind continues to play and won’t stop.  And so, much like at my first ApacheCon, I choose to write rather than toss and turn.

The theme of this week’s entry is ‘Why ApacheCon’.  I mean, after seven trips, on both sides of the pond, one might expect to grow weary of the routine.  I’m not saying that I don’t like traveling.  It’s just that, well, after almost thirty years as a professional software developer, I’ve had my fair share.

But here’s the deal: it’s not the trip that makes it worthwhile, although I’ll admit the venues are always nice.  Certainly Montreal in September is not a bad gig.

It’s the people, and their stories, that make this event special.

A perfect example is Cliff Schmidt, founder of Amplio, who left a lucrative technology career to pursue a new mission: saving lives in Africa through education.  His non-profit supplies battery-operated listening devices, i.e. ‘talking books’, to poor rural farmers in Ghana.


Cliff Schmidt

Another example of Apache members doing good is Myrle Krantz, whose mission is building an open source system for core banking as a platform.  The reason?  To provide a reliable and affordable solution for the world’s 2 billion unbanked, via Apache Fineract.

There’s also Justin McClean, who’s working on an incubating project to provide a real-time operating system, a robust and reliable platform for running embedded systems, a.k.a. IoT.  The project is Apache Mynewt.  With Mynewt the playing field has been leveled, opening the dedicated hardware market to anyone with a good idea and access to a cheap embedded processor.


Justin McClean

And there’s Christopher Dutz, who’s striving to break Siemens’ stranglehold on the programmable logic controller market by offering small to medium-sized manufacturing facilities cost-effective options for gathering data from their equipment.  His incubating project is Apache PLC4X.  It affords small businesses the same command-and-control capabilities over their equipment as the giant corporations they compete with.


Christopher Dutz

Or how about Daniel Ruggeri, who’s taken it upon himself to create (and teach) a college-level course on how to introduce a successful open source practice into the enterprise.  This brings more talent in and enables innovation across a broader spectrum of companies.


Daniel Ruggeri

What do these people have in common?  Bringing about positive change in the world, via open source projects.

This is why I come to ApacheCon.  It’s not the beautiful venues.  It’s not the education and learning.  It’s not the fun gatherings.  (Although these things are good too of course.)

It’s so that I may be inspired by stories such as these.

Who put ABAC in my RBAC?

Readers know that Attribute-Based Access Control (ABAC) is a bit of an obsession with me.  It stems from the desire to have something like an ABAC system in my little bag of tricks: an authorization engine that scales to everyday usage, without proprietary, bloated, or cumbersome baggage to weigh it down.

So I comment and lament, and nothing seems to come of it.

Until I learned that ABAC can be combined with RBAC.

We like RBAC and use it in our everyday applications, but it has some serious shortcomings, and we don’t know what to do about them.

ABAC is also good.  It’s adaptable, but it lacks meaningful standards; we struggle during implementations and are left wanting more.

Now, let’s somehow combine the two, hopefully preserving the strengths of each while eliminating their shortcomings.

What would such a system look like?

  1. Simple APIs that are easy to understand and use.
  2. Standard data and API formats, something that can be shared between all of my apps and systems.
  3. Flexible decision expressions allowing unlimited instance data types and values to be considered.
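
To make that wish list a little more concrete, here’s a rough sketch of the kind of API surface it implies.  Everything in it is hypothetical; the names don’t come from Fortress or any other real library.  It’s just one picture of what ‘simple, standard, and flexible’ might look like in code.

```java
import java.util.Map;

// Hypothetical sketch only; these interfaces are not from any real library.
public interface AuthzManager {

    // Log the user in.  Arbitrary instance data (location, account, project, ...)
    // rides along as plain key/value attributes -- the "flexible decision expressions".
    AuthzSession createSession(String userId, Map<String, String> attributes);
}

// The everyday question an application asks: may this session perform
// an operation on a protected object?
interface AuthzSession {
    boolean checkAccess(String objectName, String operation);
}
```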

How would this system work?

Standards-based RBAC adheres to the NIST model, which later became an ANSI standard, INCITS 359.  Long story short, RBAC allows attributes to be applied during two separate phases of the access control decision:

1. User-Role Activation – instance data constrains whether an assigned role is eligible to be considered in the access control decision, i.e. the permission check, that happens later.  For example, a user may only activate the cashier role at store 314.

2. Role-Permission Activation – these constraints apply during the permission check itself.  For example, the action may only be performed against account #456789.

Apache Fortress 2.0.2 now supports type 1.  For a test drive, there’s the rbac-abac-sample on GitHub.  Have a look at the under-the-hood section of the README.
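
For a flavor of what type 1 looks like in code, here’s a minimal sketch modeled on the pattern the sample demonstrates: pass the runtime attribute (which store the user is working at) as a role constraint when the session is created, then do the usual permission check.  The createSession overload, the RoleConstraint setters, and the attribute names reflect my reading of the sample, so treat them as assumptions and defer to the sample’s README and the 2.0.2 javadoc for the exact signatures.

```java
import java.util.Arrays;

import org.apache.directory.fortress.core.AccessMgr;
import org.apache.directory.fortress.core.AccessMgrFactory;
import org.apache.directory.fortress.core.model.Permission;
import org.apache.directory.fortress.core.model.RoleConstraint;
import org.apache.directory.fortress.core.model.Session;
import org.apache.directory.fortress.core.model.User;

public class CashierAtStore314 {
    public static void main(String[] args) throws Exception {
        AccessMgr accessMgr = AccessMgrFactory.createInstance();

        // Type 1 -- User-Role Activation.  The caller supplies the runtime
        // attribute as a role constraint, so only role assignments matching
        // locale=314 are activated into the session.  Key/value names here
        // follow the sample and are assumptions, not gospel.
        RoleConstraint constraint = new RoleConstraint();
        constraint.setKey("locale");
        constraint.setValue("314");

        // Trusted session: the caller has already authenticated the user elsewhere.
        Session session = accessMgr.createSession(
                new User("someuser"), Arrays.asList(constraint), true);

        // Type 2 -- the ordinary RBAC permission check against the activated roles.
        boolean authorized = accessMgr.checkAccess(session, new Permission("Register", "open"));
        System.out.println("authorized: " + authorized);
    }
}
```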

 

Towards an Attribute-Based Role-Based Access Control System

[Link to the Apache Fortress RBAC-ABAC-SAMPLE project on Github]

We’ve all heard the complaint: RBAC doesn’t work.  It leads to Role Explosion, defined as an inordinate number of roles in a production environment.  Nobody knows who must be assigned to what because there are hundreds, if not thousands, of them.

What’s a system implementor to do?  We could give Attribute-Based Access Control a try, but that has its own problems and we need not go there again.

There’s another way.  RBAC allows the use of dynamic attributes:

  • Recent standards include dynamic policies, most notably ANSI INCITS 494, RBAC Policy-Enhanced.
  • Entities exist to conveniently apply dynamic policies, e.g. User-Role and Role-Permission.
  • Nothing in the standard discourages the use of dynamic attributes alongside RBAC.

Indeed, dynamic attributes are encouraged if not prescribed.  Here’s where I should be pointing to evidence substantiating my arguments.

http://lmgtfy.com/?q=NIST+and+ANSI+and+RBAC+and+attributes

This brings us to Apache Fortress and a new enhancement to use dynamic attributes.

What is Apache Fortress?

Both followers of this blog (wife and boss) know about Apache Fortress.  Especially my wife.  It’s the itch that led me to three years of work in a garage, alongside two of my brothers, who got dragged in as well.

It’s also an implementation of the classic RBAC specification – ANSI INCITS 359.  If anything’s prone to exploding roles, it’s Apache Fortress.

How are we going to stop the dang exploding?

The enhancement was described in a JIRA ticket yesterday and checked into the Apache Fortress Core repo last night.  The idea is best explained with a story.

The Tale of Three Stooges and Three Branches

Once upon a time there were three branches, North, South and East, managed by the Three Stooges who worked there: Curly, Moe and Larry.

They’re nice blokes, but a tad unruly, so we try to keep them separated.  Curly works in the East, Moe in the North, and Larry runs amok in the South.  All three are Tellers, but each may also substitute as a coin Washer at the other two branches.

All is well because each Branch has only one Teller.  It’s never good when two Stooges combine without one being in charge.

Here are the Users and their Role assignments:

Curly: Teller, Washer

Moe: Teller, Washer

Larry: Teller, Washer

By now we know where this storyline’s headed.  How do we prevent one from going off-script, wandering into another branch, activating Teller, and running roughshod?

The classic approach, the one that leads to Role explosion, goes like this…

Create Roles by Location with User-Role assignments:

Curly: TellerEast, WasherNorth, WasherSouth

Moe: TellerNorth, WasherEast, WasherSouth

Larry: TellerSouth, WasherNorth, WasherEast

This works pretty well with three branches and two roles, but what about the real world?  How many branches will a medium-sized bank have, a thousand?  How many types of roles, at least ten?  If we follow the same Role-by-Location pattern, there’d be 10,000 Roles to manage!  We may be keeping our Stooges in check, but at the IT team’s expense.  Our roles have indeed exploded.  What now?

Time for something different: back to the earlier discussion about using attributes.  Let’s try controlling role activation by location, but store the required attributes on the user object itself.

User-Properties to store Role-Locale constraints:

Curly: Teller:East, Washer:North, Washer:South

Moe: Teller:North, Washer:East, Washer:South

Larry: Teller:South, Washer:North, Washer:East

What just happened here?  It kind of looks the same, but it’s not.  We go back to only needing two Roles, but have added dynamic policies, Role-Locale, as properties stored on the User.  Our medium-sized bank now needs only 10 roles, not 10,000.

Now, when the security system logs in a User (createSession), it pushes the user’s physical location, e.g. North, South or East, into the runtime context, along with the already present Userid attribute.  The security system then compares that physical location with the corresponding properties stored on the User to determine access rights, specifically which Roles may be activated into the Session.

Sprinkle in a policy that defines the role-to-constraint relationships.

Global Config Properties store Role-Constraint mappings:

Teller:Locale

Washer:Locale

That way, when the security system activates roles, it knows which roles require the extra check, and which attribute to verify.
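
To tie the pieces together, here’s a small, purely illustrative sketch of that activation check: the user’s stored Role:Locale constraints, the global role-to-constraint mapping, and the comparison against the locale pushed in at session creation.  None of these class or method names come from Fortress; the sketch only demonstrates the logic described above.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.stream.Collectors;

// Purely illustrative; this is not the Fortress implementation.
public class RoleActivationSketch {

    // Global config: which roles carry a dynamic constraint, and on which attribute.
    static final Map<String, String> ROLE_CONSTRAINTS = Map.of(
            "Teller", "locale",
            "Washer", "locale");

    // Which of the user's assigned roles may activate into this session?
    static List<String> activate(List<String> assignedRoles,
                                 Map<String, Set<String>> userProps,  // e.g. "Washer:locale" -> {north, south}
                                 Map<String, String> runtimeAttrs) {  // e.g. "locale" -> east, set at createSession
        return assignedRoles.stream()
                .filter(role -> {
                    String attr = ROLE_CONSTRAINTS.get(role);
                    if (attr == null) {
                        return true;                                  // unconstrained role: business as usual
                    }
                    Set<String> allowed = userProps.getOrDefault(role + ":" + attr, Set.of());
                    String actual = runtimeAttrs.get(attr);
                    return actual != null && allowed.contains(actual);
                })
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Curly logs in from the East branch: Teller activates, Washer does not.
        List<String> active = activate(
                List.of("Teller", "Washer"),
                Map.of("Teller:locale", Set.of("east"),
                       "Washer:locale", Set.of("north", "south")),
                Map.of("locale", "east"));
        System.out.println(active);                                   // prints [Teller]
    }
}
```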

In addition to location, we can constrain role activation by project, organization, customer, account balance, hair color, favorite ice cream, and any other form of instance data imaginable.  There may be multiple types of constraints applied to any or all roles in the system.  It truly is a dynamic policy mechanism placed on top of a traditional Role-Based Access Control System.

With this minor change to the security system, our IT guys return to the good life without worrying about exploding roles or what the Stooges are up to.  🙂

The End

 

 

DirtyKanza Training on Zwift?

Due to an injury on March 14th, discussed here, the last couple of months of my Dirty Kanza training were done on a smart trainer, using the Zwift virtual training app.


My Setup includes a CycleOps Magnus Smart Trainer

The Dirty Kanza is an ultra-endurance cycling event held every year in the Flint Hills of Kansas.


Sundown was around mile one fifty during this year’s Kanza ride

The plan was pretty simple.  Focus on time and power rather than distance or speed.  The goal: twelve hours in the saddle, at whatever wattage could be mustered, roughly 75% of the expected time needed to complete the 206-mile course in a single day.  Every week, go a bit longer on the long day.  Work up to the peak on May 13th.  Afterwards, taper down, sprinkle in some real rides on pavement and gravel, and prepare for the event on June 2nd.

Here’s the training plan in Strava (hours):


Screenshot of Training Plan in Strava (hours)

How’d it go?  Still in the Breakfast Club (back of the pack) but shaved a bit off last year’s time.  It was another tough year, featuring stiff headwinds during the last half of the ride.  Out of 1,016 starters, 746 finished.

The official time:


Screenshot of Results on Chronotrack

The ride on Strava:


Screenshot of Ride on Strava