May 26, 2015

Are you getting dac_override AVC messages?

Some time ago, Dan Walsh wrote a blog post, “Why doesn’t SELinux give me the full path in an error message?”, related to the DAC_OVERRIDE capability.

“According to SELinux By Example. DAC_OVERRIDE allows a process to ignore Discretionary Access Controls including access lists.”

In Fedora 22, we still have quite a large number of DAC_OVERRIDE rules allowed by default. You can check this using

$ sesearch -A -p dac_override -C |grep -v ^DT |wc -l
387

So the question is whether they are still needed. Most of them were added because of bad ownership of files or directories located under /var/lib, /var/log, or /var/cache. But as you probably realize, this just “masks” bugs in applications and opens backdoors in the Fedora SELinux policy.

For this reason, we want to introduce a new Fedora 23 feature to remove these capabilities where possible.

Let’s test it on the following real example:

$ sesearch -A -s psad_t -t psad_t -c capability
Found 1 semantic av rules:
allow psad_t psad_t : capability { dac_override setgid setuid net_admin net_raw } ;

$ ls -ldZ /var/lib/psad /var/log/psad /var/run/psad /etc/psad/
drwxr-xr-x. 3 root root system_u:object_r:psad_etc_t:s0 4096 May 26 12:40 /etc/psad/
drwxr-xr-x. 2 root root system_u:object_r:psad_var_lib_t:s0 4096 May 26 12:35 /var/lib/psad
drwxr-xr-x. 4 root root system_u:object_r:psad_var_log_t:s0 4096 May 26 12:47 /var/log/psad
drwxr-xr-x. 2 root root system_u:object_r:psad_var_run_t:s0 100 May 26 12:44 /var/run/psad

$ ps -efZ |grep psad
system_u:system_r:psad_t:s0 root 25461 1 0 12:44 ? 00:00:00 /usr/bin/perl -w /usr/sbin/psad
system_u:system_r:psad_t:s0 root 25466 1 0 12:44 ? 00:00:00 /usr/sbin/psadwatchd -c /etc/psad/psad.conf

which looks correct. So is dac_override really needed for psad_t? How could I check it?

On my Fedora 23 system, I run with

$ cat dacoverride.cil
(typeattributeset cil_gen_require domain)
(auditallow domain self (capability (dac_override)))

a policy module which logs a granted dac_override AVC to /var/log/audit/audit.log whenever a domain actually uses the capability.
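
To try this yourself, the module can be loaded straight into the running policy. A minimal sketch, assuming an SELinux userspace new enough to accept CIL modules directly (as on current Fedora):

# semodule -i dacoverride.cil
# ausearch -m avc -ts recent | grep -i 'granted.*dac_override'

The second command is just one convenient way to watch for granted dac_override events while you exercise a service.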

For example I see

type=AVC msg=audit(1432639909.704:380132): avc: granted { dac_override } for pid=28878 comm="sudo" capability=1 scontext=staff_u:staff_r:staff_sudo_t:s0-s0:c0.c1023 tcontext=staff_u:staff_r:staff_sudo_t:s0-s0:c0.c1023 tclass=capability

which is expected. But I don’t see a corresponding message for psad_t when I exercise the service. So this is probably a bug in the policy, and dac_override should be removed for psad_t. We should also ask the psad maintainers for their agreement.

And what happens if you make the following ownership change?

$ ls -ldZ /var/log/psad/
drwxr-xr-x. 4 mgrepl mgrepl system_u:object_r:psad_var_log_t:s0 4096 May 26 13:53 /var/log/psad/

You get

type=AVC msg=audit(1432641212.164:380373): avc: granted { dac_override } for pid=30333 comm="psad" capability=1 scontext=system_u:system_r:psad_t:s0 tcontext=system_u:system_r:psad_t:s0 tclass=capability

 

 


May 20, 2015

How to create a new initial policy using the sepolicy generate tool?

I have a service running without its own SELinux domain, and I would like to create a new initial policy for it.

How can I create a new initial policy? Is there a tool for it?

We get these questions very often, and my answer is pretty easy: yes, there is a tool which can help you with this task.

Let’s use a real example to demonstrate how to create your own initial policy for the lttng-sessiond service running on my system.

I see

$ ps -efZ |grep lttng-sessiond
system_u:system_r:unconfined_service_t:s0 root 29186 1 0 12:31 ? 00:00:00 /usr/bin/lttng-sessiond -d

unconfined_service_t tells us the lttng-sessiond service runs without SELinux confinement.

Basically, there is no problem with a service running as unconfined_service_t if this service does “everything” or is third-party software. A problem occurs when other services with their own SELinux domains want to access objects created by your service.

Then you can see AVCs like

type=AVC msg=audit(1431724248.950:1003): avc: denied { getattr } for pid=768 comm="systemd-logind" path="/dev/shm/lttng-ust-wait-5" dev="tmpfs" ino=25832 scontext=system_u:system_r:systemd_logind_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=file permissive=0

In that case, you want to create an SELinux policy from scratch so that objects created by your service get specific SELinux labeling, and see if you can achieve proper SELinux confinement.

Let’s start.

1. You need to identify the executable file used to start the service. From

system_u:system_r:unconfined_service_t:s0 root 29186 1 0 12:31 ? 00:00:00 /usr/bin/lttng-sessiond -d

you can see /usr/bin/lttng-sessiond is used. Also

$ grep ExecStart /usr/lib/systemd/system/lttng-sessiond.service
ExecStart=/usr/bin/lttng-sessiond -d

is useful.

2. Run sepolicy generate to create the initial policy files.

sepolicy generate --init -n lttng /usr/bin/lttng-sessiond
Created the following files:
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.te # Type Enforcement file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.if # Interface file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.fc # File Contexts file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng_selinux.spec # Spec file
/home/mgrepl/Devel/RHEL/selinux-policy/lttng.sh # Setup Script

3. Run

# sh lttng.sh

4. YOU ARE DONE. CHECK YOUR RESULTS.

# ls -Z /usr/bin/lttng-sessiond
system_u:object_r:lttng_exec_t:s0 /usr/bin/lttng-sessiond
# systemctl restart lttng-sessiond
# ps -eZ |grep lttng-sessiond
system_u:system_r:lttng_t:s0 root 29850 1 0 12:50 ? 00:00:00 /usr/bin/lttng-sessiond -d
# ausearch -m avc -ts recent
... probably you see a lot of AVCs ...

Now you have created and loaded your own initial policy for your service. At this point, you can work on the AVCs, and you can ask us to help you with them.
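
If you want to turn the remaining AVCs into a local policy module while you iterate, the standard audit2allow workflow works here; a quick sketch (the module name mylttng is arbitrary):

# ausearch -m avc -ts recent | audit2allow -R
# ausearch -m avc -ts recent | audit2allow -M mylttng && semodule -i mylttng.pp

The -R output is handy for seeing which existing policy interfaces the denials map to before deciding what belongs in the final policy.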


JSON, Homoiconicity, and Database Access

During a recent review of an internal web application based on the Node.js platform, we discovered that combining JavaScript Object Notation (JSON) and database access (database query generators or object-relational mappers, ORMs) creates interesting security challenges, particularly for JavaScript programming environments.

To see why, we first have to examine traditional SQL injection.

Traditional SQL injection

Most programming languages do not track where strings and numbers come from. Looking at a string object, it is not possible to tell if the object corresponds to a string literal in the source code, or input data which was read from a network socket. Combined with certain programming practices, this lack of discrimination leads to security vulnerabilities. Early web applications relied on string concatenation to construct SQL queries before sending them to the database, using Perl constructs like this to load a row from the users table:

# WRONG: SQL injection vulnerability
$dbh->selectrow_hashref(qq{
  SELECT * FROM users WHERE users.user = '$user'
})

But if the externally supplied value for $user is "'; DROP TABLE users; --", instead of loading the user, the database may end up deleting the users table, due to SQL injection. Here’s the effective SQL statement after expansion of such a value:

  SELECT * FROM users WHERE users.user = ''; DROP TABLE users; --'

Because the provenance of strings is not tracked by the programming environment (as explained above), the SQL database driver only sees the entire query string and cannot easily reject such crafted queries.

Experience showed again and again that simply trying to avoid pasting untrusted data into query strings did not work. Too much data which looks trustworthy at first glance turns out to be under external control. This is why current guidelines recommend employing parametrized queries (sometimes also called prepared statements), where the SQL query string is (usually) a string literal, and the variable parameters are kept separate, combined only in the database driver itself (which has the necessary database-specific knowledge to perform any required quoting of the variables).

Homoiconicity and Query-By-Example

Query-By-Example is a way of constructing database queries based on example values. Consider a web application as an example. It might have a users table, containing columns such as user_id (a serial primary key), name, password (we assume the password is stored in the clear, even though this practice is debatable), a flag that indicates if the user is an administrator, a last_login column, and several more.

We could describe a concrete row in the users table like this, using JavaScript Object Notation (JSON):

{
  "user_id": 1,
  "name": "admin",
  "password": "secret",
  "is_admin": true,
  "last_login": 1431519292
}

The query-by-example style of writing database queries takes such a row descriptor, omits some unknown parts, and treats the rest as the column values to match. We could check the user name and password during a login operation like this:

{
  "name": "admin",
  "password": "secret"
}

If the database returns a row, we know that the user exists, and that the login attempt has been successful.

But we can do better. With some additional syntax, we can even express query operators. We could select the regular users who have logged in today (“1431475200” refers to midnight UTC, and "$gte" stands for “greater or equal”) with this query:

{
  "last_login": {"$gte": 1431475200},
  "is_admin": false
}

This is in fact the query syntax used by Sequelize, an object-relational mapping (ORM) tool for Node.js.

This approach relies on homoiconicity, a property of a programming environment where code (here: database queries) and data look very much alike, roughly speaking, and can be manipulated with similar programming language constructs. It is often hailed as a primary design achievement of the programming language Lisp. Homoiconicity makes query construction with the Sequelize toolkit particularly convenient. But it also means that there are no clear boundaries between code and data, similar to the old way of constructing SQL query strings using string concatenation, as explained above.

Getting JSON To The Database

Some server-side programming frameworks, notably Node.js, automatically decode bodies of POST requests of content type application/json into JavaScript JSON objects. In the case of Node.js, these JSON objects are indistinguishable from other such objects created by the application code. In other words, there is no marker class or other attribute which makes it possible to tell apart objects which come from inputs and objects which were created by (for example) object literals in the source.

Here is a simple example of a hypothetical login request. When Node.js processes the POST request below, it assigns a JavaScript object to the req.body field in exactly the same way the application code that follows it does.

POST request:

POST /user/auth HTTP/1.0
Content-Type: application/json

{"name":"admin","password":"secret"}

Application code:

req.body = {
  name: "admin",
  password: "secret"
}

In a Node.js application using Sequelize, the application would first define a model User, and then use it as part of the authentication procedure, in code similar to this (for the sake of this example, we still assume the password is stored in plain text; the reason for that will become clear immediately):

User.findOne({
  where: {
    name: req.body.name,
    password: req.body.password
  }
}).then(function (user) {
  if (user) {
    // We got a user object, which means that login was successful.
    …
  } else {
    // No user object, login failure.
    …
  }
})

The query-by-example part is the object passed as the where property.

However, this construction has a security issue which is very difficult to fix. Suppose that the POST request looks like this instead:

POST /user/auth HTTP/1.0
Content-Type: application/json

{
  "name": {"$gte": ""},
  "password": {"$gte": ""}
}

This means that Sequelize will be invoked with the following query; the two {"$gte": ""} objects come straight from the POST request, and Sequelize cannot tell them apart from data supplied by the application itself:

User.findOne({
  where: {
    name: {"$gte": ""},
    password: {"$gte": ""}
  }
})

Sequelize will translate this into a query similar to this one:

SELECT * FROM users where name >= ''  AND password >= '';

Any string is greater than or equal to the empty string, so this query will find any user in the system, regardless of the user name or password. Unless there are other constraints imposed by the application, this allows an attacker to bypass authentication.
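
To make the bypass concrete from an attacker’s point of view, the request is a single curl invocation; the endpoint and port below are made up for this example:

curl -i -X POST http://localhost:3000/user/auth \
  -H 'Content-Type: application/json' \
  -d '{"name": {"$gte": ""}, "password": {"$gte": ""}}'

If the application only checks whether findOne returned a row, this logs the attacker in as whichever user the database happens to return first.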

What can be done about this? Unfortunately, not much. Validating POST request contents and checking that all the values passed to database queries are of the expected type (string, number or Boolean) works to mitigate individual injection issues, but the experience with SQL injection issues mentioned at the beginning of this post suggests that this is not likely to work out in practice, particularly in Node.js, where so much data is exposed as JSON objects. Another option would be to break homoiconicity, and mark in the query syntax where the query begins and data ends. Getting this right is a bit tricky. Other Node.js database frameworks do not describe query structure in terms of JSON objects at all; Knex.js and Bookshelf.js are in this category.

Due to the prevalence of JSON, such issues are most likely to occur within Node.js applications and frameworks. However, already in July 2014, Kazuho Oku described a JSON injection issue in the SQL::Maker Perl package, discovered by his colleague Toshiharu Sugiyama.

Update (2015-05-26): After publishing this blog post, we learned that a very similar issue has also been described in the context of MongoDB: Hacking NodeJS and MongoDB.

Other fixable issues in Sequelize

Sequelize overloads the findOne method with a convenience feature for primary-key based lookup. This encourages programmers to write code like this:

User.findOne(req.body.user_id).then(function (user) {
  … // Process results.
})

This allows attackers to ship a complete query object (with the “{where: …}” wrapper) in a POST request. Even with strict query-by-example queries, this can be abused to probe the values of normally inaccessible table columns. This can be done efficiently using comparison operators (with one bit leaking per query) and binary search.

But there is another issue. This construct

User.findOne({
  where: "user_id IN (SELECT user_id " +
    "FROM blocked_users WHERE unblock_time IS NULL)"
}).then(function (user) {
  … // Process results.
})

pastes the where string directly into the generated SQL query (here it is used to express something that would be difficult to do directly in Sequelize, say, because the blocked_users table is not modeled). With the “findOne(req.body.user_id)” example above, a POST request such as

POST /user/auth HTTP/1.0
Content-Type: application/json

{"user_id":{"where":"0=1; DROP TABLE users;--"}}

would result in a generated query in which everything after WHERE comes from the request:

SELECT * FROM users WHERE 0=1; DROP TABLE users;--;

(This will not work with some databases and database drivers which reject multi-statement queries. In such cases, fairly efficient information leaks can be created with sub-queries and a binary search approach.)

This is not a defect in Sequelize, it is a deliberate feature. Perhaps it would be better if this functionality were not reachable with plain JSON objects. Sequelize already supports marker objects for including literals, and a similar marker object could be used for verbatim SQL.

The Sequelize upstream developers have mitigated the first issue in version 3.0.0. A new method, findById (with an alias, findByPrimary), has been added which queries exclusively by primary keys (“{where: …}” queries are not supported). At the same time, the search-by-primary-key automation has been removed from findOne, forcing applications to choose explicitly between primary key lookup and full JSON-based query expressions. This explicit choice means that the second issue (although not completely removed from version 3.0.0) is no longer directly exposed. But as expected, altering the structure of a query by introducing JSON constructs (as with the "$gte" example) is still possible, and to prevent that, applications have to check the JSON values that they put into Sequelize queries.

Conclusion

JSON-based query-by-example expressions can be an intuitive way to write database queries. However, this approach, when taken further and enhanced with operators, can lead to a reemergence of injection issues which are reminiscent of SQL injection, something these tools try to avoid by operating at a higher abstraction level. If you, as an application developer, decide to use such a tool, then you will have to make sure that data passed into queries has been properly sanitized.

May 19, 2015

Is SELinux good anti-venom?
SELinux to the Rescue 

If you have been following the news lately you might have heard of the "Venom" vulnerability.

Researchers found a bug in the QEMU process, which is used to run virtual machines on top of KVM-based Linux machines. Red Hat, CentOS, and Fedora systems were potentially vulnerable. Updated packages have been released for all platforms to fix the problem.

But we use SELinux to prevent virtual machines from attacking other virtual machines or the host. SELinux protection on VMs is often called sVirt. We run all virtual machines with the svirt_t type. We also use MCS separation to isolate one VM from other VMs and their images on the system.

While to the best of my knowledge no one has developed an actual hack to break out of the virtualization layer, I do wonder whether the break-out would even be allowed by SELinux. SELinux has protections against executable memory, which is usually used for buffer overflow attacks. These are the execmem, execheap and execstack access controls. There is a decent chance that these would have blocked the attack.

# sesearch -A -s svirt_t -t svirt_t -c process -C
Found 2 semantic av rules:
   allow svirt_t svirt_t : process { fork sigchld sigkill sigstop signull signal getsched setsched getsession getcap getattr setrlimit } ; 
DT allow svirt_t svirt_t : process { execmem execstack } ; [ virt_use_execmem ]

Examining the policy on my Fedora 22 machine, we can look at the types that a svirt_t process would be allowed to write to. These are the types that SELinux would allow the process to write to, provided the files had matching MCS labels, or s0.

# sesearch -A -s svirt_t -c file -p write -C | grep open 
   allow virt_domain qemu_var_run_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_home_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_tmp_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_image_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain svirt_tmpfs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow virt_domain virt_cache_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
DT allow virt_domain fusefs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_fusefs ]
DT allow virt_domain cifs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_samba ]
ET allow virt_domain dosfs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_usb ]
DT allow virt_domain nfs_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; [ virt_use_nfs ]
ET allow virt_domain usbfs_t : file { ioctl read write getattr lock append open } ; [ virt_use_usb ]

Lines beginning with D are disabled, and are only enabled by toggling the corresponding boolean. I did a video showing the access available to an OpenShift process running as root on your system using the same technology. Click here to view.
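
You can check the state of such a boolean on your own machine, and flip it only if a guest really needs executable memory; a quick sketch:

# getsebool virt_use_execmem
virt_use_execmem --> off
# setsebool -P virt_use_execmem on

Leaving it off keeps the execmem and execstack protections in place for qemu.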

SELinux also blocks capabilities, so the qemu process, even if running as root, would only have the net_bind_service capability, which allows it to bind to ports < 1024.

# sesearch -A -s svirt_t -c capability -C
Found 1 semantic av rules:
   allow svirt_t svirt_t : capability net_bind_service ; 

Dan Berrange, creator of libvirt, sums it up nicely on the Fedora Devel list:

"While you might be able to crash the QEMU process associated with your own guest, you should not be able to escalate from there to take over the host, nor be able to compromise other guests on the same host. The attacker would need to find a second independent security flaw to let them escape SELinux in some manner, or some way to trick libvirt via its QEMU monitor connection. Nothing is guaranteed 100% foolproof, but in absence of other known bugs, sVirt provides good anti-venom for this flaw IMHO."

Did you setenforce 1?

May 13, 2015

VENOM, don’t get bitten.
Image credit: CC BY-SA CrowdStrike

QEMU is a generic and open source machine emulator and virtualizer and is incorporated in some Red Hat products as a foundation and hardware emulation layer for running virtual machines under the Xen and KVM hypervisors.

CVE-2015-3456 (aka VENOM) is a security flaw in QEMU’s Floppy Disk Controller (FDC) emulation. It can be exploited by a malicious guest user with access to the FDC I/O ports by issuing specially crafted FDC commands to the controller. It can result in guest-controlled execution of arbitrary code in, and with the privileges of, the corresponding QEMU process on the host. In the worst case, this can be a guest-to-host escape with root privileges.

This issue affects all x86 and x86-64 based HVM Xen and QEMU/KVM guests, regardless of their machine type, because both PIIX- and ICH9-based QEMU machine types create an ISA bridge (ICH9 via LPC) and make the FDC accessible to the guest. It is also exposed regardless of the presence of any floppy-related QEMU command line options, so even guests without a floppy disk explicitly enabled in the libvirt or Xen configuration files are affected.

We believe that code execution is possible but we have not yet seen any working reproducers that would allow this.

This flaw arises because of unrestricted indexed write access to the fixed-size FIFO memory buffer that the FDC emulation layer uses to store commands and their parameters. The FIFO buffer is accessed with byte granularity (the equivalent of an FDC data I/O port write), and the current index is incremented afterwards. After each issued and processed command the FIFO index is reset to 0, so during normal processing the index cannot become out-of-bounds.

For certain commands (such as FD_CMD_READ_ID and FD_CMD_DRIVE_SPECIFICATION_COMMAND), though, the index is either not reset for a certain period of time (FD_CMD_READ_ID) or there are code paths that don’t reset the index at all (FD_CMD_DRIVE_SPECIFICATION_COMMAND). In those cases the subsequent FDC data port writes result in sequential FIFO buffer memory writes that can be out-of-bounds of the allocated memory. The attacker has full control over the values that are stored and also almost fully controls the length of the write. Depending on how the FIFO buffer is defined, the attacker might also have a little control over the index, as in the case of the Red Hat Enterprise Linux 5 Xen QEMU package, where the index variable is stored after the memory designated for the FIFO buffer.

Depending on the location of the FIFO memory buffer, this can result in either a stack or a heap overflow. For all of the Red Hat products using QEMU, the FIFO memory buffer is allocated from the heap.

Red Hat has issued security advisories to fix this flaw and instructions for applying the fix are available on the knowledge-base.

Mitigation

The sVirt and seccomp functionalities used to restrict the host’s QEMU process privileges and resource access might mitigate the impact of successful exploitation of this issue. A possible policy-based workaround is to avoid granting untrusted users administrator privileges within guests.
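
A quick way to confirm that sVirt confinement is in effect on a KVM host is to look at the SELinux labels of the running QEMU processes, for example:

ps -eZ | grep qemu

Each guest’s qemu-kvm process should be running as svirt_t with its own MCS category pair (for example s0:c123,c456; the exact categories differ per guest).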

May 09, 2015

Setting up an RDO deployment to be Identity V3 Only

The OpenStack Identity API Version 3 provides support for many features that are not available in version 2. Much of the installer code from Devstack, the Puppet modules, and Packstack assumes that Keystone is operating with the V2 API. In the interest of hastening the conversion, I set up a deployment that is V3 only. Here is how I did it.

The order I performed these operations was:

  1. Convert Horizon
  2. Convert the Service Catalog
  3. Disable the V2 API in Keystone
  4. Convert the authtoken stanzas and the endpoint config files to use discovery

Horizon

Horizon was the simplest. To change Horizon to use the V3 API, edit the local_settings. For RDO, this file is in:
/etc/openstack-dashboard/local_settings

At the end, I added:

 OPENSTACK_API_VERSIONS = {
     "identity": 3
 }
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

You might want to make the default domain value something different, especially if you are using a domain specific backend for LDAP.

Service Catalog

Next up is migrating the Keystone service catalog. You can query the current values by using direct SQL.

mysql  --user keystone_admin --password=SECRETE   keystone -e "select interface, url from endpoint where service_id =  (select id from service where service.type = 'identity');" 

By default, the responses will have v2.0 at the end of them:

+-----------+-------------------------------+
| interface | url                           |
+-----------+-------------------------------+
| admin     | http://10.10.10.40:35357/v2.0 |
| public    | http://10.10.10.40:5000/v2.0  |
| internal  | http://10.10.10.40:5000/v2.0  |
+-----------+-------------------------------+

I used SQL to modify them. For example:

mysql  --user keystone_admin --password=SECRETE   keystone -e "update endpoint set   url  = 'http://10.10.10.40:5000/v3' where  interface ='internal' and  service_id =  (select id from service where service.type = 'identity');" 
mysql  --user keystone_admin --password=SECRETE   keystone -e "update endpoint set   url  = 'http://10.10.10.40:5000/v3' where  interface ='public' and  service_id =  (select id from service where service.type = 'identity');" 
mysql  --user keystone_admin --password=SECRETE   keystone -e "update endpoint set   url  = 'http://10.10.10.40:35357/v3' where  interface ='admin' and  service_id =  (select id from service where service.type = 'identity');" 

You cannot use the openstack CLI to perform this; attempting to change a URL fails:

$ openstack  endpoint set --interface public  --service keystone http://10.10.10.40:5000/v2.0
ERROR: openstack More than one endpoint exists with the name 'http://10.10.10.40:5000/v2.0'.

I’ll open a ticket for that.

To use the V3 API for operations, you are going to want a V3 Keystone RC file. Here is mine:

export OS_USERNAME=admin
export OS_PROJECT_NAME=admin
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PASSWORD=SECRETE
export OS_AUTH_URL=http://$HOSTNAME:5000/v3
export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '
export OS_IDENTITY_API_VERSION=3
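
With that RC sourced, a quick sanity check is to request a token; assuming a reasonably recent python-openstackclient, this exercises the v3 API:

$ openstack token issue

If this prints a token ID and expiry, v3 authentication is working.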

Disabling V2.0

In order to ensure you are using V3, it is worthwhile to disable V2.0. The simplest way to do that is to modify the paste file that controls the pipelines. On an RDO system this is /etc/keystone/keystone-paste.ini. I did it by commenting out the following lines:

#[pipeline:public_api]
# The last item in this pipeline must be public_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension user_crud_extension public_service

#[pipeline:admin_api]
# The last item in this pipeline must be admin_service or an equivalent
# application. It cannot be a filter.
#pipeline = sizelimit url_normalize request_id build_auth_context token_auth admin_token_auth json_body ec2_extension s3_extension crud_extension admin_service

and I removed them from the composites:

[composite:main]
use = egg:Paste#urlmap
#/v2.0 = public_api
/v3 = api_v3
/ = public_version_api

[composite:admin]
use = egg:Paste#urlmap
#/v2.0 = admin_api
/v3 = api_v3
/ = admin_version_api
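
After restarting Keystone (or httpd, if Keystone runs under it), a quick curl shows v2.0 gone while v3 still answers; adjust the host to yours:

$ curl -si http://10.10.10.40:5000/v2.0 | head -1
HTTP/1.1 404 Not Found
$ curl -si http://10.10.10.40:5000/v3 | head -1
HTTP/1.1 200 OK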

Configuring Other services

This setup was not using Neutron, so I only had to handle Nova, Glance, and Cinder. The process should be comparable for Neutron.

RDO adds configuration values under /usr/share/<service>/<service>-dist.conf that override the defaults from the Python code. For example, the Nova package has /usr/share/nova/nova-dist.conf. I commented out the following values, as they are based on old guidance for setting up authtoken and are not how the auth plugins for the Keystone client should be configured:

[keystone_authtoken]
#admin_tenant_name = %SERVICE_TENANT_NAME%
#admin_user = %SERVICE_USER%
#admin_password = %SERVICE_PASSWORD%
#auth_host = 127.0.0.1
#auth_port = 35357
#auth_protocol = http
# Workaround for https://bugs.launchpad.net/nova/+bug/1154809
#auth_version = v2.0

To set the proper values, I put the following in /etc/nova/nova.conf

[keystone_authtoken]
auth_plugin = password
auth_url = http://10.10.10.40:35357
username = nova
password = SECRETE
project_name = services
user_domain_name = Default
project_domain_name = Default
#this value is only needed if you do not modify /usr/share/nova/nova-dist.conf
#auth_version=v3

A Big thanks to Jamie Lennox for helping me get this straight.

I made a comparable change for glance. For Cinder, the change needs to be made in /etc/cinder/api-paste.ini, but the values are comparable:

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
auth_plugin = password
auth_url = http://10.10.10.40:35357
username = cinder
password=SECRETE
project_name = services
user_domain_name = Default
project_domain_name = Default

You can restart services using the command openstack-service. To restart Nova, run:

sudo openstack-service restart nova

And run comparable commands for Cinder and Glance. I tested the endpoints using Horizon: for Glance, use the Images page, and for Cinder, the Volumes page. All other pages were Nova controlled. Neutron would obviously be the network administration pages. If you get errors on a page saying “cannot access”, it is a sign that the service is still attempting to do V2 API token verification. Looking in the Keystone access log verified that for me. If you see lines like:

10.10.10.40 - - [09/May/2015:03:20:23 +0000] "GET /v2.0 HTTP/1.1" 404 93 "-" "python-keystoneclient"

You know something is trying to use the V2 API.

May 06, 2015

Explaining Security Lingo

This post aims to clarify certain terms often used in the security community. Let’s start with the easiest one: vulnerability. A vulnerability is a flaw in a selected system that allows an attacker to compromise the security of that particular system. The consequence of such a compromise can impact the confidentiality, integrity, or availability of the attacked system (these three aspects are also the base metrics of the CVSS v2 scoring system that are used to rate vulnerabilities). ISO/IEC 27000, IETF RFC 2828, NIST, and others have very specific definitions of the term vulnerability, each differing slightly. A vulnerability’s attack vector is the actual method of using the discovered flaw to cause harm to the affected software; it can be thought of as the entry point to the system or application. A vulnerability without an attack vector is normally not assigned a CVE number.

When a vulnerability is found, an exploit can be created that makes use of this vulnerability. Exploits can be thought of as a way of utilizing one or more vulnerabilities to compromise the targeted software; they can come in the form of an executable program, or a simple set of commands or instructions. Exploits can be local, executed by a user on a system that they have access to, or remote, executed to target certain vulnerable services that are exposed over the network.

Once an exploit is available for a vulnerability, this presents a threat for the affected software and, ultimately, for the person or business operating the affected software. ISO/IEC 27000 defines a threat as “A potential cause of an incident, that may result in harm of systems and organization”. Assessing threats is a crucial part of the threat management process that should be a part of every company’s IT risk management policy. Microsoft has defined a useful threat assessment model, STRIDE, that is used to assess every threat in several categories: Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege. Each of these categories correlates to a particular security property of the affected software; for example, if a vulnerability allows the attacker to tamper with the system (Tampering), the integrity of that system is compromised. A targeted threat is a type of threat that is specific to a particular application or system; such threats usually involve malware designed to utilize a variety of known vulnerabilities in specific applications that have a large user base, for example, Flash, WordPress, or PHP.

A related term often considered when assessing a threat is a vulnerability window. This is the time from the moment a vulnerability is published, regardless of whether an exploit exists, up to the point when a fix or a workaround is available that can be used to mitigate the vulnerability. If a vulnerability is published along with a fix, then the vulnerability window can also represent the time it takes to patch that particular vulnerability.

A zero-day vulnerability is a subclass of all vulnerabilities that is published while the affected software has no available patch that would mitigate the issue. Similarly, a zero-day exploit is an exploit that uses a vulnerability that has not yet been patched. Edit: Alternatively, the term zero-day can be used to refer to a vulnerability that has not yet been published publicly or semi-publicly (for example, on a closed mailing list). The term zero-day exploit would then refer to an exploit for an undisclosed vulnerability. The two differing definitions for the term zero-day may be influenced by the recent media attention security issues have received. The media, maybe unknowingly, have coined the term zero-day to represent critical issues that are disclosed without being immediately patched. Nevertheless, zero-day as a term is not strictly defined and should be used with care to avoid ambiguity in communication.

Unpatched vulnerabilities can allow malicious users to conduct an attack. Attacking a system or an application is the act of using a vulnerability’s exploit to compromise the security policy of the attacked asset. Attacks can be categorized as either active, which directly affect the integrity or availability of the system, or passive, which compromise the confidentiality of the system without affecting it. An example of an ongoing active attack is a distributed denial of service attack that targets a particular website with the intention of compromising its availability.

The terminology described above is only the tip of the iceberg when it comes to the security world. IETF RFC 2828, for example, consists of 191 pages of definitions and 13 pages of references strictly relevant to IT security. However, knowing the difference between terms such as threat and exploit can be quite crucial when assessing and communicating a vulnerability within a team or a community.

May 01, 2015

Automating Kerberos Authentication

Sometimes you need unattended authentication. Sometimes you are just lazy. Whatever the reason, if a user (human or otherwise) wants to fetch a Ticket Granting Ticket (TGT) from a Kerberos Key Distribution Center (KDC) automatically, the Generic Security Services API (GSSAPI) library shipped with most recent distributions supports it.

Kerberos is based on symmetric cryptography. If a user needs to store a symmetric key in a filesystem, she uses a file format known as a Key table, or keytab for short. Fetching a keytab is not a standard action, but FreeIPA has shipped with a utility to make it easier: ipa-getkeytab

Before I attempt to get a keytab, I want to authenticate to my KDC and get a TGT manually:

$ kinit ayoung@YOUNGLOGIC.NET
Password for ayoung@YOUNGLOGIC.NET: 
[ayoung@ayoung530 tempest (master)]$ klist
Ticket cache: KEYRING:persistent:14370:krb_ccache_H4Ss9cA
Default principal: ayoung@YOUNGLOGIC.NET

Valid starting       Expires              Service principal
05/01/2015 09:07:06  05/02/2015 09:06:55  krbtgt/YOUNGLOGIC.NET@YOUNGLOGIC.NET

To fetch a keytab and store it in the user’s home directory, you can run the following command. I’ve coded it to talk to my younglogic.net KDC, so modify it for yours.

ipa-getkeytab -p $USER@YOUNGLOGIC.NET -k $HOME/client.keytab -s ipa.younglogic.net

You can get your own principal from the klist output:

export KRB_PRINCIPAL=$(klist | awk '/Default principal:/ {print $3}')

If you are running on an ipa-client enrolled machine, much of the info you need is in /etc/ipa/default.conf.

$ cat   /etc/ipa/default.conf 
#File modified by ipa-client-install

[global]
basedn = dc=younglogic,dc=net
realm = YOUNGLOGIC.NET
domain = younglogic.net
server = ipa.younglogic.net
host = rdo.younglogic.net
xmlrpc_uri = https://ipa.younglogic.net/ipa/xml
enable_ra = True

You can convert these values into environment variables with:

 $(awk '/=/ {print "export IPA_" toupper($1)"="$3}' < /etc/ipa/default.conf)

Now a user could manually kinit using that keytab and the following commands:

 
$(awk '/=/ {print "export IPA_" toupper($1)"="$3}' < /etc/ipa/default.conf)
kinit -k -t $HOME/client.keytab $USER@$IPA_REALM

We can skip the kinit step by putting the keytab in a specific location. If you look in the man page for krb5.conf, you can find the following section:

default_client_keytab_name
This relation specifies the name of the default keytab for obtaining client credentials. The default is FILE:/var/kerberos/krb5/user/%{euid}/client.keytab. This relation is subject to parameter expansion

What is %{euid}? It is the numeric user ID for a user. For yourself, the value is set in $EUID. What if you need it for a different user? Use the getent command to query the Name Service Switch (NSS) configured databases for this value:

export AYOUNG_EUID=$(getent passwd ayoung | cut -d: -f3)

You need to create that directory before you can put something in it. You only want the user to be able to read or write in that directory.

sudo mkdir -p /var/kerberos/krb5/user/$EUID
sudo chown $USER:$USER /var/kerberos/krb5/user/$EUID
chmod 700 /var/kerberos/krb5/user/$EUID

Now use that to store the keytab:

 ipa-getkeytab -p $KRB_PRINCIPAL -k   /var/kerberos/krb5/user/$EUID/client.keytab -s $IPA_SERVER

To test out the new keytab, run kdestroy to remove the existing TGTs, then try performing an action that would require a service ticket.

Here I show an initially cleared credential cache that gets automatically populated when I connect to a remote system via ssh.

[ayoung@ayoung530 tempest (master)]$ kdestroy -A
[ayoung@ayoung530 tempest (master)]$ klist -A
[ayoung@ayoung530 tempest (master)]$ ssh -K rdo.younglogic.net
Last login: Fri May  1 16:42:28 2015 from c-1-2-3-4.imadethisup.net
-sh-4.2$ exit
logout
Connection to rdo.younglogic.net closed.
[ayoung@ayoung530 tempest (master)]$ klist -A
Ticket cache: KEYRING:persistent:14370:krb_ccache_WotXvlm
Default principal: ayoung@YOUNGLOGIC.NET

Valid starting       Expires              Service principal
05/01/2015 12:42:46  05/02/2015 12:42:45  host/rdo.younglogic.net@YOUNGLOGIC.NET
05/01/2015 12:42:46  05/02/2015 12:42:45  host/rdo.younglogic.net@
05/01/2015 12:42:45  05/02/2015 12:42:45  krbtgt/YOUNGLOGIC.NET@YOUNGLOGIC.NET

I would not recommend doing this for normal users. But for service users that need automated access to remote services, this is the correct approach.

April 29, 2015

Creating Hierarchical Projects in Keystone

Hierarchical Multitenancy is coming. Look busy.

Until we get CLI support for creating projects with parent relationships, we have to test via curl. This has given me a chance to clean up a few little techniques on using jq and heredocs.

#!/usr/bin/bash -x
. ./keystonerc_admin

TOKEN=$( curl -si  -H "Content-type: application/json"  -d@- $OS_AUTH_URL/auth/tokens <<EOF | awk '/X-Subject-Token/ {print $2}'
{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "$OS_USER_DOMAIN_NAME"
                    },
                    "name": "admin",
                    "password": "$OS_PASSWORD"
                }
            }
        },
        "scope": {
            "project": {
                "domain": {
                    "name": "$OS_PROJECT_DOMAIN_NAME"
                },
                "name": "$OS_PROJECT_NAME"
            }
        }
    }
}
EOF
)

PARENT_PROJECT=$( curl  -H "Content-type: application/json" -H"X-Auth-Token:$TOKEN"  -d@- $OS_AUTH_URL/projects <<EOF |  jq -r '.project  | {id}[]  '
{
    "project": {
        "description": "parent project",
        "domain_id": "default",
        "enabled": true,
        "name": "Parent"
    }
}
EOF
)

echo $PARENT_PROJECT


curl  -H "Content-type: application/json" -H"X-Auth-Token:$TOKEN"  -d@- $OS_AUTH_URL/projects <<EOF 
{
    "project": {
        "description": "demo-project",
        "parent_project_id": "$PARENT_PROJECT",
        "domain_id": "default",
        "enabled": true,
        "name": "child"
    }
}
EOF
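
To double-check the result, the same token can be used to list the projects and pick out the new pair with jq (the filter below is just illustrative):

curl -s -H "X-Auth-Token:$TOKEN" $OS_AUTH_URL/projects | \
  jq '.projects[] | select(.name == "Parent" or .name == "child") | {id, name}'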


Note that this uses V3 of the API. I have the following keystone_adminrc

export OS_USERNAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_PASSWORD=cf8dcb8aae804722
export OS_AUTH_URL=http://192.168.1.80:5000/v3/

export OS_IDENTITY_API_VERSION=3

export OS_REGION_NAME=RegionOne
export PS1='[\u@\h \W(keystone_admin)]\$ '

Container Security: Just The Good Parts

Security is usually a matter of trade-offs. Questions like “Is X secure?” often don’t have direct yes or no answers. A technology can mitigate certain classes of risk even as it exacerbates others.

Containers are just such a recent technology and their security impact is complex. Although some of the common risks of containers are beginning to be understood, many of their upsides are yet to be widely recognized. To emphasize the point, this post will highlight three of the advantages of containers that sysadmins and DevOps can use to make installations more secure.

Example Application

To give this discussion focus, we will consider an example application: a simple imageboard application. This application allows users to create and respond in threads of anonymous image and text content. Original posters can control their posts via “tripcodes” (which are basically per-post passwords). The application consists of the following “stack”:

  • nginx to serve static content, reverse proxy the active content, act as a cache-layer, and handle SSL
  • node.js to do the heavy lifting
  • mariadb to enable persistence

The Base Case

The base-case for comparison is the complete stack being hosted on a single machine (or virtual machine). It is true that this is a simple case, but this is not a straw man. A large portion of the web is served from just such unified instances.

The Containerized Setup

The stack naturally splits into three containers:

  • container X, hosting nginx
  • container J, hosting node.js
  • container M, hosting mariadb

Additionally, three /var locations are created on the host: (1) one for static content (a blog, theming, etc.), (2) one for the actual images, and (3) one for database persistence. The node.js container will have a mount for the image-store, the mariadb container will have a mount for the database, and the nginx container will have mounts for both the image-store and static content.

Advantage #1: Isolated Upgrades

Let’s look at an example patch Tuesday under both setups.

The Base Case

The sysadmin has prepared a second staging instance for testing the latest patches from her distribution. Among the updates is a critical one for SSL that prevents a key-leak from a specially crafted handshake. After applying all updates, she starts her automatic test suite. Everything goes well until the test for tripcodes. It turns out that the node.js code uses the SSL library to hash the tripcodes for storage and the fix either changed the signature or behavior of those methods. This puts the sysadmin in a tight spot. Does she try to disable tripcodes? Hold back the upgrade?

The Contained Case

Here the sysadmin has more work to do. Instead of updating and testing a single staging instance, she will update and test each individual container, promoting them to production on a container-by-container basis. The nginx and mariadb containers succeed and she replaces them in production. Her keys are safe. As with the base case, the tripcode tests don’t succeed. Unlike the base case, the sysadmin has the option of holding back just the node.js container’s SSL library, and the nature of the flaw being key-exposure at handshake means that this is not an emergency requiring her to rush developers for a fix.

The Advantage

Of course, isolated upgrades aren’t unique to containers. node.js provides them itself, in the form of npm. So—depending on code specifics—the base case sysadmin might have been able to hold back the SSL library used for tripcodes. However, containers grant all application frameworks isolated upgrades, regardless of whether they provide them themselves. Further, they easily provide them to bigger portions of the stack.

Containers also simplify isolated upgrades. Technologies like rubygems or python virtualenvs create reliance on yet another curated collection of dependencies. It’s easy for sysadmins to be in a position where they need three or more such curated collections to update before their application is safe from a given vulnerability. Container-driven isolated upgrades let sysadmins lean on single collections, such as Linux distributions. These are much more likely to have—for example—paid support or guaranteed SLA’s. They also unify the dependency management to the underlying distribution’s update mechanism.

Containers can also make existing isolation mechanisms easier to manage. While the above case might have been handled via node.js’s npm mechanism, containers would have allowed the developers to deal with that complexity, simply handing an updated container to the sysadmin.

Of course, isolated upgrades are not always an advantage. In large-use environments the resource savings from shared images/memory may make it worth the additional headaches to move all applications forward in lock-step.

Advantage #2: Containers Simplify Real Isolation

“Containers do not contain.” However, what containers do well is group related processes and create natural (if undefended) trust boundaries. This—it turns out—simplifies the task of providing real containment immensely. SELinux, cgroups, iptables, and kernel capabilities have a—mostly undeserved—reputation of being complicated. Complemented with containers, these technologies become much simpler to leverage.

The Base Case

A sysadmin trying to lock down their installation in the traditional case faces a daunting task. First, they must identify what processes should be allowed to do what. Does node.js as used in this application use /tmp? What kernel capabilities does mariadb need? The need to answer these questions is one of the reasons technologies such as SELinux are considered complicated. They require a deep understanding of the behavior of not just application code, but the application runtime and the underlying OS itself. The tools available to troubleshoot these issues are often limited (e.g. strace).

Even if the sysadmin is able to nail down exactly what processes in her stack need what capabilities (kernel or otherwise), the question of how to actually bind the application to those restrictions is still a complicated one. How will the processes be transitioned to the correct SELinux context? The correct cgroup?

The Contained Case

In contrast, a sysadmin trying to secure a container has four advantages:

  1. It is trivial (and usually automatic) to transition an entire container into a particular SELinux context and/or cgroup (Docker has --security-opt, OpenShift PID-based groups, etc.).
  2. Operating system behavior need not be locked down, only the container/host relationship.
  3. The container is—usually—placed on a virtual network and/or interface (often the container runtime environment even has supplemental lock-down capabilities).
  4. Containers naturally provide for experimentation. You can easily launch a container with a varying set of kernel capabilities.

Most frameworks for launching containers do so with sensible “base” SELinux types. For example, both Docker and systemd-nspawn (when using SELinux under RHEL or Fedora) launch all containers with variations of svirt types based on previous work with libvirt. Additionally, many container launchers also borrow libvirt’s philosophy of giving each launched container unique Multi-Category Security (MCS) labels that can optionally be set by the admin. Combined with read-only mounting and the fact that an admin only needs to worry about container/host interactions, this MCS functionality can go a long way towards restricting an application’s behavior.

For this application, it is straight-forward to:

  • Label the static, image, and database stores with unique MCS labels (e.g. c1, c2, and c3).
  • Launch the nginx container with labels and binding options (i.e. :ro) appropriate for reading only the image and static stores (-v /path:/path:ro and --security-opt=label:level:s0:c1,c2 for Docker), as sketched after this list.
  • Launch the node.js container binding the image store read/write and with a label giving it only access to that store.
  • Launching the mariadb container with only the data persistence store mounted read/write and with a label giving it access only to that store.
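
As a concrete sketch of the nginx piece of that list, using the Docker flags in the form described above (the host paths, image name, and category assignments here are made up for illustration):

# Give each store its own MCS categories (illustrative paths).
chcon -R -l s0:c1 /srv/imageboard/static
chcon -R -l s0:c2 /srv/imageboard/images
chcon -R -l s0:c3 /srv/imageboard/db

# nginx only gets read-only views of the static and image stores.
docker run -d --name nginx \
  --security-opt=label:level:s0:c1,c2 \
  -v /srv/imageboard/static:/srv/static:ro \
  -v /srv/imageboard/images:/srv/images:ro \
  imageboard/nginx

The node.js and mariadb containers would be launched the same way, each with only its own categories and only the mounts it needs read/write.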

Should you need to go beyond what MCS can offer, most container frameworks support launching containers with specific SELinux types. Even when working with derived or original SELinux types, containers make everything easier as you need only worry about the interactions between the container and host.

With containers, there are many tools for restricting intra-container communication. Alternatively, for all container frameworks that give each container a unique IP, iptables can also be applied directly. With iptables—for example—it is easy to restrict:

  • The nginx container from speaking anything but HTTP to the node.js container and HTTPS to the outside world (see the sketch after this list).
  • Block the node.js container from doing anything but speaking HTTP to the nginx container and using the database port of the mariadb container.
  • Block mariadb from doing anything but receiving requests from the node.js container on its database port.
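
A rough iptables sketch of the node.js and mariadb rules above, assuming the containers sit on the host bridge with the made-up addresses below:

NGINX=172.17.0.2; NODE=172.17.0.3; DB=172.17.0.4
# allow established flows, then whitelist per-container traffic
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# node.js: only HTTP to nginx and the database port on mariadb
iptables -A FORWARD -s $NODE -d $NGINX -p tcp --dport 80 -j ACCEPT
iptables -A FORWARD -s $NODE -d $DB -p tcp --dport 3306 -j ACCEPT
iptables -A FORWARD -s $NODE -j DROP
# mariadb: accepts nothing except node.js on its database port
iptables -A FORWARD -d $DB ! -s $NODE -j DROP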

For preventing DDoS or other resource-based attacks, we can use the container launcher’s built-in tools (e.g. Docker’s ulimit options) or cgroups directly. Either way it is easy to—for example—restrict the node.js and mariadb containers to some hard resource limit (40% of RAM, 20% of CPU and so on).
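
With Docker’s own flags, that could look something like this (flag names as available in Docker releases of this era; the values and image names are arbitrary):

docker run -d --name node --memory=512m --cpu-shares=256 imageboard/node
docker run -d --name mariadb --memory=1g --cpu-shares=256 imageboard/mariadb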

Finally, container frameworks combined with unit tests are a great way for finding a restricted set of kernel capabilities with which to run an application layer. Whether the framework encourages starting with a minimal set and building up (systemd-nspawn) or with a larger set and letting you selectively drop (Docker), it’s easy to keep launching containers until you find a restricted—but workable—collection.

The configuration to isolation ratio of the above work is extremely high compared to “manual” SELinux/cgroup/iptables isolation. There is also much less to “go wrong” as it is much easier to understand the container/host relationship and its needs than it is to understand the process/OS relationship. Among other upsides, the above configuration: prevents a compromised nginx from altering any data on the host (including the image-store and database), prevents a compromised mariadb from altering anything other than the database, and—depending on what exact kernel capabilities are absolutely required—may go a long way towards prevention of privilege escalation.

The Advantage

While containers do not allow for any forms of isolation not already possible, in practice they make configuring isolation much simpler. They limit isolation to container/host instead of process/OS. By binding containers to virtual networks or interfaces, they simplify firewall rules. Container implementations often provide sensible SELinux or other protection defaults that can be easily extended.

The trade-off is that containers expose an additional container/container attack-surface that is not as trivial to isolate.

Advantage #3: Containers Have More Limited and Explicit Dependencies

The Base Case

Containers are meant to eliminate “works for me” problems. A common cause of “works for me” problems in traditional installations is hidden dependencies. An example is a software component depending on a common command line utility without a developer knowing it. Besides creating instability over installation types, this is a security issue. A sysadmin cannot protect against a vulnerability in a component they do not know is being used.

The flip side of unknown dependencies, and of much greater concern, is extraneous or cross-over components. Components needed by one portion of the stack can actually make other components not designed with them in mind extremely dangerous. Many privilege escalation flaws involve abusing access to suid programs that, while essential to some applications, are extraneous to others.

The Contained Case

Obviously, container isolation helps prevent component dependency cross-over, but containers also help to minimize extraneous dependencies. Containers are not virtual machines. Containers do not have to boot, they do not have to support interactive usage, they are usually single user, and can be simpler than a full operating system in any number of ways. Thus containers can eschew service launchers, shells, sensitive configuration files, and other cruft that (from an application perspective) only serves as an attack surface.

Truly minimal custom containers will more or less look like just the top few layers of their RPM/Deb/pkg “pyramid” without any of the bottom layers. Even “general” purpose containers are undergoing a healthy “race to the bottom” to have as minimal a starting footprint as possible. The Docker version of RHEL 7, an operating system not exactly famous for minimalism, is itself less than 155 megs uncompressed.

The Advantage

Container isolation means that when a portion of your application stack has a dependency, that dependency’s attack surface is available only to that portion of your application. This is in stark contrast to traditional installations where attack surfaces are always additive. Exploitation almost always involves chaining multiple vulnerabilities, so this advantage may be one of containers’ most powerful.

A common security complaint regarding containers is that in many ways they are comparable to statically linked binaries. The flip side is that this puts pressure on developers and maintainers to minimize the size of these blobs, which minimizes their attack surface. Shellshock is a good example of the kind of vulnerability this mitigates. It is nearly impossible for a traditional installation to not have a highly complex shell, but many containers ship without a shell of any kind.

Beyond containers themselves this pressure has resulted in the rise of the minimal host operating system (e.g. Atomic, CoreOS, RancherOS). This has brought a reduced attack surface (and in the case of Atomic a certain degree of immutability) to the host as well as the container.

Containers Is As Containers Do

Other security advantages of containers include working well in immutable and/or stateless paradigms, good content auditability (especially compared to virtual machines), and—potentially—good verifiability. A single blog post can’t cover all of the upsides of containers, much less the upsides and downsides. Ultimately, a large part of understanding the security impact of containers is coming to terms with the fact that containers are neither degenerate virtual machines nor superior jails. They are a unique technology whose impact needs to be assessed on its own.

April 23, 2015

Fedora Security Team’s 90-day Challenge

Earlier this month the Fedora Security Team started a 90-day challenge to close all critical and important CVEs in Fedora that came out in 2014 and before.  These bugs include packages affected in both Fedora and EPEL repositories.  Since we started the process we’ve made some good progress.

Of the thirty-eight Important CVE bugs, six have been closed, three are on QA, and the rest are open.  The one critical bug, rubygems-activesupport in EPEL, still remains but may be fixed as early as this week.

Want to help?  Please join us in helping make Fedora (and EPEL) a safer place and pitch in to help close these security bugs.


April 22, 2015

Regular expressions and recommended practices

Whenever a security person crosses a vulnerability report, one of the first steps is to ensure that the reported problem is actually a vulnerability. Usually, the issue falls into well known and studied categories and this step is done rather quickly. Occasionally, however, one can come across bugs where this initial triage is a bit more problematic. This blog post is about such an issue, which will ultimately lead us to the concept of “recommended practice”.

What happened?

On July 31st 2014, Maksymilian Arciemowicz of cxsecurity reported that “C++11 [is] insecure by default”, with upstream GCC bugs 61601 and 61582. LLVM/Clang’s libc++ didn’t dodge the bullet either; more details are available in LLVM bug 20291.

Not everybody can be bothered to go through so many links, so here is a quick summary: C++11, a new C++ standard approved in 2011, introduced support for regular expressions. Regular expressions (regexes from here on) are an amazingly powerful processing tool – but one that can become extremely complex to handle correctly. Not only can the regex itself become hideous and hard to understand, but also the way the regex engine deals with it can lead to all sorts of problems. If certain complex regexes are passed to a regex engine, the engine can quickly exceed the available CPU and memory constraints while trying to process the expression, possibly leading to a catastrophic event, which some call ReDoS, a “regular expression denial of service”.

This is exactly what Maksymilian Arciemowicz exploits: he passes specially crafted regexes to the regex engines provided by the C++11 implementations of GCC and Clang, causing them to use a huge amount of CPU resources or even crash (e.g. due to extreme recursion, which will exhaust all the available stack space, leading to a stack-overflow).
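The report targets the C++11 engines specifically, but the failure mode is generic to backtracking regex matchers. The following Python sketch (an illustration of the concept, not taken from the report) makes the exponential blow-up visible with a classic pathological pattern:

import re
import time

# Nested quantifiers over the same character force a backtracking
# engine to try exponentially many ways to split the input before it
# can conclude that no match is possible.
pattern = re.compile(r'^(a+)+$')

for n in (10, 15, 20, 22):
    subject = 'a' * n + 'b'   # can never match, but is slow to reject
    start = time.time()
    pattern.match(subject)
    print('n=%d took %.3f seconds' % (n, time.time() - start))

Each additional character roughly doubles the matching time, which is exactly the behavior an attacker needs for a ReDoS.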

Is it a vulnerability?

CPU exhaustion and crashes are often good indicators for a vulnerability. Additionally, the C++11 standard even suggests error return codes for the exact problems triggered, but the implementations at hand fail to catch these situations. So, this must be a vulnerability, right? Well, this is the point where opinions differ. In order to understand why, it’s necessary to introduce a new concept:

The “recommended practice” concept

“Recommended practice” is essentially a mix of common sense and dos and don’ts. A huge problem is that these practices are informal, so there’s no ultimate guide on the subject, which leaves them open to personal experience and opinion. Nevertheless, the vast majority of the programming community should know about the dangers of regular expressions; dangers just like the issues Maksymilian Arciemowicz reported in GCC/Clang. That said, passing arbitrary, unfiltered regexes from an untrusted source to the regex engine should be considered a recommended-practice case of “don’t do this; it’ll blow up in your face big time”.

To further clear this up: if an application uses a perfectly reasonable, well defined regex and the application crashes because the regex engine choked when processing certain specially crafted input, it’s (most likely) a vulnerability in the regex engine. However, if the application uses a regex thought to be well defined, efficient and trusted, but turns out to e.g. take overly long to process certain specially crafted input, while other, more efficient regexes will do the job just fine, it’s (probably) a vulnerability in the application. But if untrusted regexes are passed to the regex engine without somehow filtering them for sanity first (which is incredibly hard to do for anything but the simplest of regexes, so better to avoid it), it is violating what a lot of people believe to be recommended practice, and thus it is often not considered to be a strict vulnerability in the regex engine.

So, next time you feel inclined to pass regexes verbatim to the engine, you’ll hopefully remember that it’s not a good idea and refrain from doing so. If you have done so in the past, you should probably go ahead and fix it.

April 16, 2015

Creating a new Network for a dual NIC VM

I need a second network for testing a packstack deployment. Here is what I did to create it, and then to boot a new VM connected to both networks.

Once again the tables are too big for the stylesheet I am using, but I don’t want to modify the output. The view source icon gives a more readable view.

The Common client supports creating networks.

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack network create ayoung-private
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| id          | 9f2948fa-77dd-483d-8841-f9461ee50aee |
| name        | ayoung-private                       |
| project_id  | fefb11ea894f43c0ae5c9686d2f49a9d     |
| router_type | Internal                             |
| shared      | False                                |
| state       | UP                                   |
| status      | ACTIVE                               |
| subnets     |                                      |
+-------------+--------------------------------------+
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron  subnet create ayoung-private 192.168.52.0/24 --name ayoung-subnet1
Invalid command u'subnet create ayoung-private 192.168.52.0/24 --name'

But it does not support the other neutron operations…at least not at first glance. We’ll see later if that is the case, but for now, use the neutron client, which seems to support the V3 Keystone API for auth. Create a subnet:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron  subnet-create ayoung-private 192.168.52.0/24 --name ayoung-subnet1
Created a new subnet:
+-------------------+----------------------------------------------------+
| Field             | Value                                              |
+-------------------+----------------------------------------------------+
| allocation_pools  | {"start": "192.168.52.2", "end": "192.168.52.254"} |
| cidr              | 192.168.52.0/24                                    |
| dns_nameservers   |                                                    |
| enable_dhcp       | True                                               |
| gateway_ip        | 192.168.52.1                                       |
| host_routes       |                                                    |
| id                | da738ad8-8469-4aa8-ab91-448bd3878ae6               |
| ip_version        | 4                                                  |
| ipv6_address_mode |                                                    |
| ipv6_ra_mode      |                                                    |
| name              | ayoung-subnet1                                     |
| network_id        | 9f2948fa-77dd-483d-8841-f9461ee50aee               |
| tenant_id         | fefb11ea894f43c0ae5c9686d2f49a9d                   |
+-------------------+----------------------------------------------------+

Create router for the subnet

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron router-create ayoung-private-router
Created a new router:
+-----------------------+--------------------------------------+
| Field                 | Value                                |
+-----------------------+--------------------------------------+
| admin_state_up        | True                                 |
| external_gateway_info |                                      |
| id                    | 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78 |
| name                  | ayoung-private-router                |
| routes                |                                      |
| status                | ACTIVE                               |
| tenant_id             | fefb11ea894f43c0ae5c9686d2f49a9d     |
+-----------------------+--------------------------------------+

Now I need to find the external network and create a router that points to it:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron net-list
+--------------------------------------+------------------------------+-------------------------------------------------------+
| id                                   | name                         | subnets                                               |
+--------------------------------------+------------------------------+-------------------------------------------------------+
| 63258623-1fd5-497c-b62d-e0651e03bdca | idm-v4-default               | 3227f3ea-5230-411c-89eb-b1e51298b4f9 192.168.1.0/24   |
| 9f2948fa-77dd-483d-8841-f9461ee50aee | ayoung-private               | da738ad8-8469-4aa8-ab91-448bd3878ae6 192.168.52.0/24  |
| eb94d7e2-94be-45ee-bea0-22b9b362f04f | external                     | 3a72b7bc-623e-4887-9499-de8ba280cb2f                  |
+--------------------------------------+------------------------------+-------------------------------------------------------+
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron router-gateway-set 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78 eb94d7e2-94be-45ee-bea0-22b9b362f04f
Set gateway for router 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78

The router needs an interface on the subnet.

[ayoung@ayoung530 rdo-federation-setup (openstack)]$  neutron router-interface-add 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78 da738ad8-8469-4aa8-ab91-448bd3878ae6
Added interface 782fdf26-e7c1-4ca7-9ec9-393df62eb11e to router 51ad4cf6-10de-455f-8a8d-ab9dd3c0fd78.

Not sure if I need to create a port, but worth testing out:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ neutron port-create ayoung-private --fixed-ip ip_address=192.168.52.20
Created a new port:
+-----------------------+--------------------------------------------------------------------------------------+
| Field                 | Value                                                                                |
+-----------------------+--------------------------------------------------------------------------------------+
| admin_state_up        | True                                                                                 |
| allowed_address_pairs |                                                                                      |
| binding:vnic_type     | normal                                                                               |
| device_id             |                                                                                      |
| device_owner          |                                                                                      |
| fixed_ips             | {"subnet_id": "da738ad8-8469-4aa8-ab91-448bd3878ae6", "ip_address": "192.168.52.20"} |
| id                    | 80f302db-6c27-42a0-a1a3-45fcfe0b23fe                                                 |
| mac_address           | fa:16:3e:bf:e3:7d                                                                    |
| name                  |                                                                                      |
| network_id            | 9f2948fa-77dd-483d-8841-f9461ee50aee                                                 |
| security_groups       | 6c13abed-81cd-4a50-82fb-4dc98b4f29fd                                                 |
| status                | DOWN                                                                                 |
| tenant_id             | fefb11ea894f43c0ae5c9686d2f49a9d                                                     |
+-----------------------+--------------------------------------------------------------------------------------+

Now to create the VM. I specify the --nic param twice.

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack server create   --flavor m1.medium   --image "CentOS-7-x86_64" --key-name ayoung-pubkey  --security-group default  --nic net-id=63258623-1fd5-497c-b62d-e0651e03bdca  --nic net-id=9f2948fa-77dd-483d-8841-f9461ee50aee     test2nic.cloudlab.freeipa.org
+--------------------------------------+--------------------------------------------------------+
| Field                                | Value                                                  |
+--------------------------------------+--------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                 |
| OS-EXT-AZ:availability_zone          | nova                                                   |
| OS-EXT-STS:power_state               | 0                                                      |
| OS-EXT-STS:task_state                | scheduling                                             |
| OS-EXT-STS:vm_state                  | building                                               |
| OS-SRV-USG:launched_at               | None                                                   |
| OS-SRV-USG:terminated_at             | None                                                   |
| accessIPv4                           |                                                        |
| accessIPv6                           |                                                        |
| addresses                            |                                                        |
| adminPass                            | Exb7Qw3syfDg                                           |
| config_drive                         |                                                        |
| created                              | 2015-04-16T03:35:27Z                                   |
| flavor                               | m1.medium (3)                                          |
| hostId                               |                                                        |
| id                                   | fffef6e0-fcce-4313-af7a-81f9306ef196                   |
| image                                | CentOS-7-x86_64 (38534e64-5d7b-43fa-b59c-aed7a262720d) |
| key_name                             | ayoung-pubkey                                          |
| name                                 | test2nic.cloudlab.freeipa.org                          |
| os-extended-volumes:volumes_attached | []                                                     |
| progress                             | 0                                                      |
| project_id                           | fefb11ea894f43c0ae5c9686d2f49a9d                       |
| properties                           |                                                        |
| security_groups                      | [{u'name': u'default'}]                                |
| status                               | BUILD                                                  |
| updated                              | 2015-04-16T03:35:27Z                                   |
| user_id                              | 64951f595aa444b8a3e3f92091be364d                       |
+--------------------------------------+--------------------------------------------------------+
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack server list
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| ID                                   | Name                                | Status  | Networks                                                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| 820f8563-28ae-43fb-a0ff-d4635bd6dd38 | ecp.cloudlab.freeipa.org            | SHUTOFF | idm-v4-default=192.168.1.77, 10.16.19.28                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+

Set a Floating IP and ssh in:

[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack  ip floating list | grep None | sort -R | head -1
| a5abf332-68dc-46c5-a4f1-188b91f8dbf8 | external | 10.16.18.225 | None           | None                                 |
[ayoung@ayoung530 rdo-federation-setup (openstack)]$ openstack ip floating add  10.16.18.225 test2nic.cloudlab.freeipa.org

echo 10.16.18.225 test2nic.cloudlab.freeipa.org | sudo tee -a /etc/hosts
10.16.18.225 test2nic.cloudlab.freeipa.org
$ ssh centos@test2nic.cloudlab.freeipa.org
The authenticity of host 'test2nic.cloudlab.freeipa.org (10.16.18.225)' can't be established.
ECDSA key fingerprint is e3:dd:1b:d6:30:f1:f5:2f:14:d7:6f:98:d6:c9:08:0c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'test2nic.cloudlab.freeipa.org,10.16.18.225' (ECDSA) to the list of known hosts.
[centos@test2nic ~]$ ifconfig eth1
eth1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
        ether fa:16:3e:ab:14:2e  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

Using the openstack command line interface to create a new server.

I have to create a new virtual machine. I want to use the V3 API when authenticating to Keystone, which means I need to use the common client, as the keystone client is deprecated and only supports the V2.0 Identity API.

To do anything with the client, we need to set some authorization data variables. Create keystonerc with the following and source it:

export OS_AUTH_URL=http://oslab.exmapletree.com:5000/v3
export OS_PROJECT_NAME=Syrup
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_USERNAME=ayoung
export OS_PASSWORD=yeahright

The formatting for the output of the commands is horribly rendered here, but if you click the little white sheet of paper icon that pops up when you float your mouse cursor over the black text, you get a readable table.

Sanity Check: list servers

[ayoung@ayoung530 oslab]$ openstack server list
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| ID                                   | Name                                | Status  | Networks                                                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+
| a5f70f90-7d97-4b79-b0f0-044d8d9b4c77 | centos7.cloudlab.freeipa.org        | ACTIVE  | idm-v4-default=192.168.1.72, 10.16.19.63                                    |
| 35b116e4-fdd2-4580-bb1a-18f1f6428dd5 | mysql.cloudlab.freeipa.org          | ACTIVE  | idm-v4-default=192.168.1.70, 10.16.19.92                                    |
| a5ca7644-d703-44d7-aa95-fd107e18aefd | horizon.cloudlab.freeipa.org        | ACTIVE  | idm-v4-default=192.168.1.67, 10.16.19.24                                    |
| f7aca565-4439-4a2f-9c31-911349ce8943 | ldapqa.cloudlab.freeipa.org         | ACTIVE  | idm-v4-default=192.168.1.66, 10.16.19.100                                   |
| 2b7b5cc1-83c4-45c3-8ca3-cd4ba4b589d3 | federate.cloudlab.freeipa.org       | ACTIVE  | idm-v4-default=192.168.1.61, 10.16.18.6                                     |
| a8649175-fd18-483c-acb7-2933226fd3a6 | horizon.kerb-demo.org               | ACTIVE  | kerb-demo.org=192.168.0.5, 10.16.19.183                                     |
| 38d24fb3-0dd3-4cf0-98d6-12ea22a1d718 | openstack.kerb-demo.org             | ACTIVE  | kerb-demo.org=192.168.0.3, 10.16.19.101                                     |
| ca9a8249-1f09-4b1a-b8d4-850019b7c4e5 | ipa.kerb-demo.org                   | ACTIVE  | kerb-demo.org=192.168.0.2, 10.16.18.218                                     |
| 29d00b3b-5961-424e-b95c-9d90b3ecf9e3 | ipsilon.cloudlab.freeipa.org        | ACTIVE  | idm-v4-default=192.168.1.60, 10.16.18.207                                   |
| 028df8d8-7ce9-4f61-b36f-a080dd7c4fb8 | ipa.cloudlab.freeipa.org            | ACTIVE  | idm-v4-default=192.168.1.59, 10.16.18.31                                    |
+--------------------------------------+-------------------------------------+---------+-----------------------------------------------------------------------------+

I made a pretty significant use of the help output. To show the basic help string

openstack --help

Gives you a list of the options. To see help on a specific command, such as the server create command we are going to work towards executing, run:

openstack help server create

A Server is a resource golem composed by stitching together resources from other services. To create this golem I am going to stitch together:

  1. A flavor
  2. An image
  3. A Security Group
  4. A Private Key
  5. A network
First, to find the flavor:

[ayoung@ayoung530 oslab]$ openstack flavor list
+----+---------------------+------+------+-----------+-------+-----------+
| ID | Name                |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+----+---------------------+------+------+-----------+-------+-----------+
| 1  | m1.tiny             |  512 |    1 |         0 |     1 | True      |
| 2  | m1.small            | 2048 |   20 |         0 |     1 | True      |
| 3  | m1.medium           | 4096 |   40 |         0 |     2 | True      |
| 4  | m1.large            | 8192 |   80 |         0 |     4 | True      |
| 6  | m1.xsmall           | 1024 |   10 |         0 |     1 | True      |
| 7  | m1.small.4gb        | 4096 |   20 |         0 |     1 | True      |
| 8  | m1.small.8gb        | 8192 |   20 |         0 |     1 | True      |
| 9  | oslab.4cpu.20hd.8gb | 8192 |   20 |         0 |     4 | True      |
+----+---------------------+------+------+-----------+-------+-----------+

I think this one should taste like cherry. But, since we don’t have a cherry flavor, I guess I’ll pick m1.small.4gb as that has the 4 GB RAM I need.

To find an image:

[ayoung@ayoung530 oslab]$ openstack image list
+--------------------------------------+----------------------------------------------+
| ID                                   | Name                                         |
+--------------------------------------+----------------------------------------------+
| 415162df-4bec-474f-9f3b-0a79c2ed3848 | Fedora-Cloud-Base-22_Alpha-20150305          |
| b89dc25b-6f62-4001-b979-05ac14f60e9b | rhel-guest-image-7.1-20150224.0              |
| 38534e64-5d7b-43fa-b59c-aed7a262720d | CentOS-7-x86_64                              |
| bc3c35a2-cf96-4589-ad51-a8d499708128 | Fedora-Cloud-Base-20141203-21.x86_64         |
| 9ea16df1-f178-4589-b32b-0e2e32305c61 | FreeBSD 10.1                                 |
| 6ec77e6e-7ad4-4994-937d-91003fa2d6ac | rhel-6.6-latest                              |
| e61ec961-248b-4ee6-8dfa-5d5198690cab | ubuntu-12.04-precise-cloudimg                |
| 54ba6aa9-7d20-4606-baa6-f8e45a80510c | rhel-guest-image-6.6-20141222.0              |
| bee6e762-102f-467e-95a8-4a798cb5ec75 | heat-functional-tests-image                  |
| 812e129c-6bfd-41f5-afba-6817ac6a23e5 | RHEL 6.5 20140829                            |
| f2dfff20-c403-4e53-ae30-947677a223ce | Fedora 21 20141203                           |
| 473e6f30-a3f0-485b-a5e5-3c5a1f7909a5 | RHEL 6.6 20140926                            |
| b12fe824-c98a-4af5-88a6-b1e11a511724 | centos-7-cloud                               |
| 601e162f-87b4-4fc1-a0d3-1c352f3c2988 | fedora-21-atomic                             |
| 12616509-4c4f-47a5-96b1-317a99ef6bf8 | Fedora 21 Beta                               |
| 77dcb29b-3258-4955-8ca4-a5952c157a2b | RHEL6.6                                      |
| 8550a6db-517b-47ea-82f3-ec4fd48e8c09 | centos-7-x86_64                              |
+--------------------------------------+----------------------------------------------+

Although I really want a Fedora Cloud image…I guess I’ll pick fedora-21-atomic. Close enough for Government work.

[ayoung@ayoung530 oslab]$ openstack keypair list
+---------------+-------------------------------------------------+
| Name          | Fingerprint                                     |
+---------------+-------------------------------------------------+
| ayoung-pubkey | 37:81:08:b2:0e:39:78:0e:62:fb:0b:a5:f1:d7:41:fc |
+---------------+-------------------------------------------------+

That decision is tough.

[ayoung@ayoung530 oslab]$ openstack network list
+--------------------------------------+------------------------------+--------------------------------------+
| ID                                   | Name                         | Subnets                              |
+--------------------------------------+------------------------------+--------------------------------------+
| 3b799c78-ca9d-49d0-9838-b2599cc6b8d0 | kerb-demo.org                | c889bb6b-98cd-47b8-8ba0-5f2de4fe74ee |
| 63258623-1fd5-497c-b62d-e0651e03bdca | idm-v4-default               | 3227f3ea-5230-411c-89eb-b1e51298b4f9 |
| 650fc936-cc03-472d-bc32-d56f56116761 | tester1                      |                                      |
| de4300cc-8f71-46d7-bec5-c0a4ad54954d | BROKEN                       | 6c390add-108c-40d5-88af-cb5e784a9d31 |
| eb94d7e2-94be-45ee-bea0-22b9b362f04f | external                     | 3a72b7bc-623e-4887-9499-de8ba280cb2f |
+--------------------------------------+------------------------------+--------------------------------------+

Tempted to use BROKEN, but I shall refrain. I set up idm-v4-default so I know that is good.

[ayoung@ayoung530 oslab]$ openstack security group list
+--------------------------------------+---------+-------------+
| ID                                   | Name    | Description |
+--------------------------------------+---------+-------------+
| 6c13abed-81cd-4a50-82fb-4dc98b4f29fd | default | default     |
+--------------------------------------+---------+-------------+

Another tough call. OK, with all that, we have enough information to create the server:

One note: the --nic param is where you can distinguish which network to use. That param takes a series of key/value params connected by equal signs. I figured that out from the old nova command line parameters, and would have been hopelessly lost if I hadn’t stumbled across the old how to guide.

openstack server create   --flavor m1.medium   --image "fedora-21-atomic" --key-name ayoung-pubkey    --security-group default  --nic net-id=63258623-1fd5-497c-b62d-e0651e03bdca ayoung-test

In order to ssh to the machine, we need to assign it a floating IP address. To find one that is unassigned:

[ayoung@ayoung530 oslab]$ openstack  ip floating list | grep None
| 943b57ea-4e52-4d05-b665-f808a5fbd887 | external | 10.16.18.61  | None           | None                                 |
| a1f5bb26-4e47-4fe7-875e-d967678364a0 | external | 10.16.18.223 | None           | None                                 |
| a419c144-dbfd-4a42-9f5e-880526683ea0 | external | 10.16.18.235 | None           | None                                 |
| a5abf332-68dc-46c5-a4f1-188b91f8dbf8 | external | 10.16.18.225 | None           | None                                 |
| b5c21c4a-3f12-4744-a426-8d073b3be3c8 | external | 10.16.18.70  | None           | None                                 |
| b67edf85-2e54-4ad1-a014-20b7370e38ba | external | 10.16.18.170 | None           | None                                 |
| c43eb490-1910-4adf-91b6-80375904e937 | external | 10.16.18.196 | None           | None                                 |
| c44a4a56-1534-4200-a227-90de85a218eb | external | 10.16.19.28  | None           | None                                 |
| e98774f9-fe6e-4608-a85d-92a5f39ef2c8 | external | 10.16.19.182 | None           | None                                 |
| f2705313-b03f-4537-a2d8-c01ff1baaee1 | external | 10.16.18.203 | None           | None                                 |

I’ll choose one at random:

[ayoung@ayoung530 oslab]$ openstack  ip floating list | grep None | sort -R | head -1
| a419c144-dbfd-4a42-9f5e-880526683ea0 | external | 10.16.18.235 | None           | None                                 |

And Add it to the server:

openstack ip floating add  10.16.18.235  ayoung-test

Test it out:

[ayoung@ayoung530 oslab]$ ssh root@10.16.18.235
The authenticity of host '10.16.18.235 (10.16.18.235)' can't be established.
ECDSA key fingerprint is 2a:cd:5f:37:63:ef:7f:2d:9d:83:fd:85:76:4d:03:3c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.16.18.235' (ECDSA) to the list of known hosts.
Please login as the user "fedora" rather than the user "root".

And log in:

[ayoung@ayoung530 oslab]$ ssh fedora@10.16.18.235
[fedora@ayoung-test ~]$ 

April 15, 2015

JWCrypto a python module to do crypto using JSON

Lately I had the need to use some crypto in a web-like scenario, a.k.a. over HTTP(S), so I set out to look at what could be used.

Pretty quickly it became clear that the JSON Web Encryption standard proposed in the IETF JOSE Working Group would be a good fit, and actually the JSON Web Signature would come in useful too.

Once I was convinced this was the standard to use I tried to find out a python module that implemented it as the project I am going to use this stuff in (FreeIPA ultimately) is python based.

The only implementation I found initially (since then I've found other projects scattered over the web) was this Jose project on GitHub.

After a quick look I was not satisfied by three things:

  • It is not a complete implementation of the specs
  • It uses obsolete python crypto-libraries wrappers
  • It is not Python3 compatible
While the first was not a big problem as I could simply contribute the missing parts, the second is, and the third is a big minus too. I wanted to use the new Python Cryptography library as it has proper interfaces and support for modern crypto, and neatly abstracts away the underlying crypto-library bindings.

So after looking over the specs in detail to see how much work it would entail, I decided to build a python module to implement all relevant specs myself.

The JWCrypto project is the result of a few weeks of work, complete with documentation hosted by ReadTheDocs.

It is an almost complete implementation of the JWK, JWE, JWS and JWT specs and implements most of the algorithms defined in the JWA spec. It has been reviewed internally by a member of the Red Hat Security Team and has an extensive test suite based on the specs and the test vectors included in the JOSE WG Cookbook. It is also both Python2.7 and Python3.3 compatible!
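As a taste of the API, here is a minimal signing example (a sketch based on the documented interface; names and defaults may have changed in later releases):

from jwcrypto import jwk, jwt

# Generate a 256-bit symmetric key and use it to sign a token.
key = jwk.JWK.generate(kty='oct', size=256)

token = jwt.JWT(header={'alg': 'HS256'},
                claims={'sub': 'alice', 'group': 'admins'})
token.make_signed_token(key)
serialized = token.serialize()

# Verification happens on deserialization; a bad signature raises.
received = jwt.JWT(key=key, jwt=serialized)
print(received.claims)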

I had a lot of fun implementing it, so if you find it useful feel free to drop me a note.

April 08, 2015

Don’t judge the risk by the logo

It’s been almost a year since the OpenSSL Heartbleed vulnerability, a flaw which started a trend of the branded vulnerability, changing the way security vulnerabilities affecting open-source software are being reported and perceived. Vulnerabilities are found and fixed all the time, and just because a vulnerability gets a name and a fancy logo doesn’t mean it is of real risk to users.

So let’s take a tour through the last year of vulnerabilities, chronologically, to see what issues got branded and which issues actually mattered for Red Hat customers.

“Heartbleed” (April 2014) CVE-2014-0160

Heartbleed was an issue that affected newer versions of OpenSSL. It was a very easy to exploit flaw, with public exploits released soon after the issue was public. The exploits could be run against vulnerable public web servers resulting in a loss of information from those servers. The type of information that could be recovered varied based on a number of factors, but in some cases could include sensitive information. This flaw was widely exploited against unpatched servers.

For Red Hat Enterprise Linux, only customers running version 6.5 were affected as prior versions shipped earlier versions of OpenSSL that did not contain the flaw.

Apache Struts 1 Class Loader RCE (April 2014) CVE-2014-0114

This flaw allowed attackers to manipulate exposed ClassLoader properties on a vulnerable server, leading to remote code execution. Exploits have been published but they rely on properties that are exposed on Tomcat 8, which is not included in any supported Red Hat products. However, some Red Hat products that ship Struts 1 did expose ClassLoader properties that could potentially be exploited.

Various Red Hat products were affected and updates were made available.

OpenSSL CCS Injection (June 2014) CVE-2014-0224

After Heartbleed, a number of other OpenSSL issues got attention. CCS Injection was a flaw that could allow an attacker to decrypt secure connections. This issue is hard to exploit as it requires a man in the middle attacker who can intercept and alter network traffic in real time, and as such we’re not aware of any active exploitation of this issue.

Most Red Hat Enterprise Linux versions were affected and updates were available.

glibc heap overflow (July 2014) CVE-2014-5119

A flaw was found inside the glibc library where an attacker who is able to make an application call a specific function with a carefully crafted argument could lead to arbitrary code execution. An exploit for 32-bit systems was published (although this exploit would not work as published against Red Hat Enterprise Linux).

Some Red Hat Enterprise Linux versions were affected, in various ways, and updates were available.

JBoss Remoting RCE (July 2014) CVE-2014-3518

A flaw was found in JBoss Remoting where a remote attacker could execute arbitrary code on a vulnerable server. A public exploit is available for this flaw.

Red Hat JBoss products were only affected by this issue if JMX remoting is enabled, which is not the default. Updates were made available.

“Poodle” (October 2014) CVE-2014-3566

Continuing with the interest in OpenSSL vulnerabilities, Poodle was a vulnerability affecting the SSLv3 protocol. Like CCS Injection, this issue is hard to exploit as it requires a man in the middle attack. We’re not aware of active exploitation of this issue.

Most Red Hat Enterprise Linux versions were affected and updates were available.

“ShellShock” (September 2014) CVE-2014-6271

The GNU Bourne Again shell (Bash) is a shell and command language interpreter used as the default shell in Red Hat Enterprise Linux. Flaws were found in Bash that could allow remote code execution in certain situations. The initial patch to correct the issue was not sufficient to block all variants of the flaw, causing distributions to produce more than one update over the course of a few days.

Exploits were written to target particular services. Later, malware circulated to exploit unpatched systems.

Most Red Hat Enterprise Linux versions were affected and updates were available.

RPM flaws (December 2014) CVE-2013-6435, CVE-2014-8118

Two flaws were found in the package manager RPM. Either could allow an attacker to modify signed RPM files in such a way that they would execute code chosen by the attacker during package installation. We know CVE-2013-6435 is exploitable, but we’re not aware of any public exploits for either issue.

Various Red Hat Enterprise Linux releases were affected and updates were available.

“Turla” malware (December 2014)

Reports surfaced of a trojan package targeting Linux, suspected of being part of an “advanced persistent threat” campaign. Our analysis showed that the trojan was not sophisticated, was easy to detect, and was unlikely to be part of such a campaign.

The trojan does not use any vulnerability to infect a system; its introduction onto a system would be via some other mechanism. Therefore it does not have a CVE name and no updates are applicable for this issue.

“Grinch” (December 2014)

An issue was reported which gained media attention, but was actually not a security vulnerability. No updates were applicable for this issue.

“Ghost” (January 2015) CVE-2015-0235

A bug was found affecting certain function calls in the glibc library. A remote attacker that is able to make an application call to an affected function could execute arbitrary code. While a proof of concept exploit is available, not many applications were found to be vulnerable in a way that would allow remote exploitation.

Red Hat Enterprise Linux versions were affected and updates were available.

“Freak” (March 2015) CVE-2015-0204

It was found that OpenSSL clients accepted EXPORT-grade (insecure) keys even when the client had not initially asked for them. This could be exploited using a man-in-the-middle attack, which could downgrade to a weak key, factor it, then decrypt communication between the client and the server. Like Poodle and CCS Injection, this issue is hard to exploit as it requires a man in the middle attack. We’re not aware of active exploitation of this issue.

Red Hat Enterprise Linux versions were affected and updates were available.

Other issues of customer interest

We can also get a rough guide of which issues are getting the most attention by looking at the number of page views on the Red Hat CVE pages. While the top views were for the issues above, also of increased interest were:

  • A kernel flaw (May 2014) CVE-2014-0196, allowing local privilege escalation. A public exploit exists for this issue but does not work as published against Red Hat Enterprise Linux.
  • “BadIRET”, a kernel flaw (December 2014) CVE-2014-9322, allowing local privilege escalation. Details on how to exploit this issue have been discussed, but we’re not aware of any public exploits for this issue.
  • A flaw in BIND (December 2014), CVE-2014-8500. A remote attacker could cause a denial of service against a BIND server being used as a recursive resolver.  Details that could be used to craft an exploit are available but we’re not aware of any public exploits for this issue.
  • Flaws in NTP (December 2014), including CVE-2014-9295. Details that could be used to craft an exploit are available.  These serious issues had a reduced impact on Red Hat Enterprise Linux.
  • A flaw in Samba (February 2015) CVE-2015-0240, where a remote attacker could potentially execute arbitrary code as root. Samba servers are likely to be internal and not exposed to the internet, limiting the attack surface. No exploits that lead to code execution are known to exist, and some analyses have shown that creation of such a working exploit is unlikely.

Conclusion

We’ve shown in this post that for the last year of vulnerabilities affecting Red Hat products the issues that matter and the issues that got branded do have an overlap, but they certainly don’t closely match. Just because an issue gets given a name, logo, and press attention does not mean it’s of increased risk. We’ve also shown there were some vulnerabilities of increased risk that did not get branded.

At Red Hat, our dedicated Product Security team analyse threats and vulnerabilities against all our products every day, and provide relevant advice and updates through the customer portal. Customers can call on this expertise to ensure that they respond quickly to address the issues that matter, while avoiding being caught up in a media whirlwind for those that don’t.

April 05, 2015

On Load Balancers and Kerberos

I've recently witnessed a lot of discussions around using load balancers and FreeIPA on the user's mailing list, and I realized there is a lot of confusion around how to use load balancers when Kerberos is used for authentication.

One of the issues is that Kerberos depends on accurate naming as server names are used to build the Service Principal Name (SPN) used to request tickets from a KDC.

When people introduce a load balancer on a network they usually assign it a new name which is used to redirect all clients to a single box that redirects traffic to multiple hosts behind the balancer.

From a transport point of view this is just fine; the box just handles packets. But from the client point of view all servers now look alike (same name). They have, intentionally, no idea what server they are going to hit.

This is the crux of the problem. When a client wants to authenticate using Kerberos it needs to ask the KDC for a ticket for a specific SPN. The only name available in this case is that of the load balancer, so that name is used to request a ticket.

For example, if we have three HTTP servers in a domain: uno.ipa.dom, due.ipa.dom, tre.ipa.dom; and for some reason we want to load balance them using the name all.ipa.dom then all a client can do is to go to the KDC and ask for a ticket for the SPN named: HTTP/all.ipa.dom@IPA.DOM

Now, once the client actually connects to that IP address and gets redirected to one of the servers by the load balancer, say uno.ipa.dom, it will present that server a ticket that can be utilized only if the server has the key for the SPN named HTTP/all.ipa.dom@IPA.DOM.
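To make the client side concrete, here is a minimal sketch using the python-gssapi bindings (not from the original post); it shows that the ticket request is driven entirely by the name the client connected to, i.e. the balancer's name:

import gssapi

# The client derives the SPN from the name it connected to -- the
# balancer's name -- so every backend behind all.ipa.dom must be able
# to decrypt tickets issued for HTTP/all.ipa.dom@IPA.DOM.
service = gssapi.Name('HTTP@all.ipa.dom',
                      name_type=gssapi.NameType.hostbased_service)
ctx = gssapi.SecurityContext(name=service, usage='initiate')
token = ctx.step()   # the initial, ticket-bearing token sent to the server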

There are a few ways to satisfy this condition depending on what a KDC supports and what is the use case.

Use only one common Service Principal Name

One of the solutions is to create a new Service Principal in the KDC for the name HTTP/all.ipa.dom@IPA.DOM, then generate a keytab and distribute it to all servers. The servers will use no other key, and they will identify themselves with the common name, so if a client tries to contact them using their individual names, authentication will fail: the KDC will not have a principal for the other names, and the services themselves are not configured to use their hostnames, only the common name.

Use one key and multiple SPNs

A slightly friendlier way is to assign aliases to a single principal name, so that clients can contact the servers both with the common name and directly using the server's individual names. This is possible if the KDC can create aliases to the canonical principal name. The SPNs HTTP/uno.ipa.dom, HTTP/due.ipa.dom, HTTP/tre.ipa.dom are created as aliases of HTTP/all.ipa.dom, so when a client asks for a ticket for any of these names the same key is used to generate it.

Use multiple keys, one per name

Another way again is to assign servers multiple keys. For example the server named uno.ipa.dom will be given a keytab with keys for both HTTP/uno.ipa.dom@IPA.DOM and HTTP/all.ipa.dom@IPA.DOM, so that regardless of how the client tries to access it, the KDC will return a ticket using a key the service has access to.

It is important to note that the acceptor, in this case, must not be configured to use a specific SPN or acquire specific credentials before trying to accept a connection if using GSSAPI, otherwise the wrong key may be selected from the keytab and context establishment may fail. If no name is specified then GSSAPI can try all keys in the keytab until one succeeds in decrypting the ticket.
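Assuming the python-gssapi bindings, a minimal acceptor that deliberately does not pin a name might look like this (a sketch, not from the original post):

import gssapi

# Acquire acceptor credentials without naming a specific principal:
# GSSAPI will consider every key in the keytab (KRB5_KTNAME or the
# system default), so whichever SPN the client requested can be used.
server_creds = gssapi.Credentials(usage='accept')
ctx = gssapi.SecurityContext(creds=server_creds, usage='accept')

# In a real server loop, feed each client token into ctx.step() until
# ctx.complete is True; ctx.initiator_name then identifies the client.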

Proxying authentication

One last option is to actually terminate the connection on a single server which then proxies out to the backend servers. In this case only the proxy has a keytab and the backend servers trust the proxy to set appropriate headers to identify the authenticated client principal, or set a shared session cookie that all servers have access to. In this case clients are forbidden from getting access to the backend server directly by firewalling or similar network level segregation.

Choosing a solution

Choosing which option is right depends on many factors. For example, if (some) clients need to be able to authenticate directly to the backend servers using their individual names, then using only one common name, as in the first and fourth options, is clearly not possible. Whether aliases can be used depends on whether the KDC in use supports them.

More complex cases, the FreeIPA Web UI

The FreeIPA Web UI adds more complexity to the aforementioned cases. The Web UI is just a frontend to the underlying LDAP database and relies on constrained delegation to access the LDAP server, so that access control is applied by the LDAP server using the correct user credentials.

The way constrained delegation is implemented requires the server to obtain a TGT using the server keytab. What this means is that only one Service Principal Name can be used in the FreeIPA HTTP server and that name is determined before the client connects. This factor makes it particularly difficult for FreeIPA servers to be load balanced. For the HTTP server the FreeIPA master could theoretically be manually reconfigured to use a single common name and share a keytab, this would allow clients to connect to any FreeIPA server and perform constrained delegation using the common name, however admins wouldn't be able to connect to a specific server and change local settings. Moreover, internal operations and updates may or may not work going forward.

In short, I wouldn't recommend it until the FreeIPA project provides a way to officially access the Web UI using aliases.

A poor man's solution, if you want to offer a single name for ease of access and some sort of load balancing, could be to stand up a server at the common name with a CGI script that redirects clients randomly to one of the IPA servers.
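For illustration only, such a redirector could be as small as this CGI sketch (hypothetical hostnames and UI path):

#!/usr/bin/python
# Pick one IPA server at random and send the browser there.
import random

servers = ['uno.ipa.dom', 'due.ipa.dom', 'tre.ipa.dom']
target = random.choice(servers)

print('Status: 302 Found')
print('Location: https://%s/ipa/ui/' % target)
print('')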

Horizon WebSSO via SSSD

I’ve shown how to set up OpenStack Keystone Federation with SSSD. We know we can set up Horizon with Federation using SAML. Here is how to set up Web Single Sign On (WebSSO) for SSSD and Kerberos.

This is a long one, but I’m trying to include all the steps in one document. Much is a repeat of previous blog posts. However, some details have changed, and I want the explanation here to be consistent.

I’m starting with a RHEL 7.1 VM. I tend to use internal Yum repos for packages, to avoid going across the network for updates, but the general steps should work regardless of update mechanism. This is, once again, using devstack, as the bits are very fresh for the WebSSO code, and I need to work off master for several projects. We’ll work on an automated version for RDO once the packages are up to date.

I have an IPA server up and running. I want to make this VM the IPA client. Since I’m done with Nova managing my VM config values, I’ll first disable cloud-init; there are many ways to do this, but that is outside the scope of this article.

 sudo yum -y update 
 sudo yum -y groupinstall "Development Tools"
 sudo yum -y install ipa-client sssd-dbus sudo mod_lookup_identity mod_auth_kerb
 sudo yum -y erase cloud-init

Now to use network manager CLI to set some basics. To list the connections:

$ nmcli c
NAME         UUID                                  TYPE            DEVICE 
System eth0  5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  802-3-ethernet  eth0

I want this machine to keep its host name, and to keep the DNS server I set:

$ sudo nmcli c edit 5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03

===| nmcli interactive connection editor |===

Editing existing '802-3-ethernet' connection: '5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03'

Type 'help' or '?' for available commands.
Type 'describe [<setting>.<prop>]' for detailed property description.

You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, ipv4, ipv6, dcb
nmcli> goto ipv4
You may edit the following properties: method, dns, dns-search, addresses, gateway, routes, route-metric, ignore-auto-routes, ignore-auto-dns, dhcp-hostname, dhcp-send-hostname, never-default, may-fail, dhcp-client-id
nmcli ipv4> set ignore-auto-dns yes
nmcli ipv4> set dns 192.168.1.59
nmcli ipv4> set dhcp-hostname horizon.cloudlab.freeipa.org
nmcli ipv4> save
Connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) successfully updated.
nmcli ipv4> quit

Set the hostname via

sudo vi  /etc/hostname

And lets see what happens when I reboot.

$ sudo reboot
Connection to horizon.cloudlab.freeipa.org closed by remote host.
Connection to horizon.cloudlab.freeipa.org closed.
[ayoung@ayoung530 python-keystoneclient (review/ayoung/access_info_split)]$ ssh cloud-user@horizon.cloudlab.freeipa.org
Last login: Wed Apr  1 16:06:27 2015 from 10.10.55.194
[cloud-user@horizon ~]$ hostname
horizon.cloudlab.freeipa.org

Let’s see if we can talk to the IPA server:

$ nslookup ipa.cloudlab.freeipa.org
Server:		192.168.1.59
Address:	192.168.1.59#53

Name:	ipa.cloudlab.freeipa.org
Address: 192.168.1.59
$ sudo ipa-client-install
WARNING: ntpd time&date synchronization service will not be configured as
conflicting service (chronyd) is enabled
Use --force-ntpd option to disable it and force configuration of ntpd

Discovery was successful!
Hostname: horizon.cloudlab.freeipa.org
Realm: CLOUDLAB.FREEIPA.ORG
DNS Domain: cloudlab.freeipa.org
IPA Server: ipa.cloudlab.freeipa.org
BaseDN: dc=cloudlab,dc=freeipa,dc=org

Continue to configure the system with these values? [no]: yes

Much elided; suffice it to say it succeeded. On to devstack.

sudo mkdir /opt/stack
sudo chown cloud-user /opt/stack/
cd /opt/stack/
git clone https://git.openstack.org/openstack-dev/devstack
cd devstack

Now, one workaround: edit the file files/rpms/general and comment out the libyaml-devel package. The functionality is provided by a different package, and that package does not exist in RHEL7.

...
which
bc
#libyaml-devel
gettext  # used for compiling message catalogs
net-tools
java-1.7.0-openjdk-headless  # NOPRIME rhel7,f20
java-1.8.0-openjdk-headless  # NOPRIME f21,f22

Here is my local.conf file:

[[local|localrc]]
ADMIN_PASSWORD=FreeIPA4All
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=$ADMIN_PASSWORD
LIBS_FROM_GIT=python-openstackclient,python-keystoneclient

I tried with Django-openstack-auth as well, but it seems devstack does not know to fetch that. I ended up cloning that repo by hand.

Run devstack:

./stack.sh

And wait.

Ok, it is done!

Then edit /etc/sssd/sssd.conf and set sssd to start the info pipe services

[sssd]
services = nss, sudo, pam, ssh, ifp

And, in the same file, let infopipe know it can respond with a subset of the LDAP values.

[ifp]
allowed_uids = apache, root, cloud-user
user_attributes = +givenname, +sn, +uid


Ah, forgot to add a cloud-user user to IPA, as that is what devstack is set to use. I need the IPA admin tools.

$ sudo yum install ipa-admintools
$ kinit ayoung
$ ipa user-add cloud-user --uid=1000
First name: Cloud
Last name: User

And now:

 sudo service sssd restart

Test infopipe:

sudo dbus-send --print-reply --system --dest=org.freedesktop.sssd.infopipe /org/freedesktop/sssd/infopipe org.freedesktop.sssd.infopipe.GetUserGroups string:ayoung

Returns

method return sender=:1.17 -> dest=:1.29 reply_serial=2
   array [
      string "admins"
      string "ipausers"
      string "wheel"
   ]

Now to configure HTTPD. I’m not going to bother with HTTPS for this setup, as it is only proof of concept, and there is a good bit of Horizon to reset if you do HTTPS.

$ sudo yum install mod_lookup_identity mod_auth_kerb

OK…now we start treading new ground. Instead of a whole new Kerberized setup for Keystone, I’m only going to Kerberize the segment protected by Federation. That is

Create a file with the V3 and admin env vars set:

$ cat openrc.v3 
. ./openrc
export OS_AUTH_URL=http://192.168.1.67:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_USERNAME=admin

source that and test:

$ openstack project list
+----------------------------------+--------------------+
| ID                               | Name               |
+----------------------------------+--------------------+
| 07a369b34c6f41948143f6ff75dc81a6 | alt_demo           |
| 0edb00180b3d4676baf5c39325e0639d | demo               |
| 18c3137a9a4e4266adb3b143c0d62ac3 | service            |
| 64378db96ab845dd8346ce0bcff9709d | admin              |
| 9a08ef972d7f4ef9a52190085b6b25d0 | invisible_to_admin |
+----------------------------------+--------------------+

Create the groups and mappings for Federation

Here are the contents of mapping.json:

[
     {
         "local": [
             {
                 "user": {
                     "name": "{0}",
                     "id": "{0}",
                      "domain": {"name": "Default"}
                 }
             }
         ],
         "remote": [
             {
                 "type": "REMOTE_USER"
             }
         ]
     },

     {
         "local": [
             {
                 "groups": "{0}",
                 "domain": {
                     "name": "Default"
                 }
             }
         ],
         "remote": [
             {
                 "type": "REMOTE_USER_GROUPS",
                 "blacklist": []
             }
         ]
     }

 ]

  openstack group create admins
  openstack group create ipausers
  openstack role add  --project demo --group ipausers member
  openstack identity provider create sssd
  openstack mapping create  --rules /home/cloud-user/mapping.json  kerberos_mapping
  openstack federation protocol create --identity-provider sssd --mapping kerberos_mapping kerberos
  openstack identity provider set --remote-id SSSD sssd

Get the Keytab

$ ipa service-add HTTP/horizon.cloudlab.freeipa.org
----------------------------------------------------------------------
Added service "HTTP/horizon.cloudlab.freeipa.org@CLOUDLAB.FREEIPA.ORG"
----------------------------------------------------------------------
  Principal: HTTP/horizon.cloudlab.freeipa.org@CLOUDLAB.FREEIPA.ORG
  Managed by: horizon.cloudlab.freeipa.org
$ ipa-getkeytab -s ipa.cloudlab.freeipa.org -k /tmp/openstack.keytab -p HTTP/horizon.cloudlab.freeipa.org
Keytab successfully retrieved and stored in: /tmp/openstack.keytab
$ sudo mv /tmp/openstack.keytab /etc/httpd/conf
$ sudo chown apache /etc/httpd/conf/openstack.keytab
$ sudo chmod 600 /etc/httpd/conf/openstack.keytab

Enable Kerberos for Keystone. In /etc/httpd/conf.d/keystone.conf

Listen 5000
Listen 35357

#enable modules for Kerberos and getting id from sssd
LoadModule lookup_identity_module modules/mod_lookup_identity.so
LoadModule auth_kerb_module modules/mod_auth_kerb.so

<VirtualHost *:5000>
    WSGIDaemonProcess keystone-public processes=5 threads=1 user=cloud-user display-name=%{GROUP} 
    WSGIProcessGroup keystone-public
    WSGIScriptAlias / /var/www/keystone/main
    WSGIApplicationGroup %{GLOBAL}
    WSGIPassAuthorization On
    <IfVersion >= 2.4>
      ErrorLogFormat "%{cu}t %M"
    </IfVersion>
    ErrorLog /var/log/httpd/keystone.log
    CustomLog /var/log/httpd/keystone_access.log combined


    #This tells WebSSO what IdP to use
    SetEnv IDP_ID SSSD
    #Protect the urls that have kerberos in their path with Kerberos.

    <location ~ "kerberos" >
    AuthType Kerberos
    AuthName "Kerberos Login"
    KrbMethodNegotiate on
    KrbMethodK5Passwd off
    KrbServiceName HTTP
    KrbAuthRealms CLOUDLAB.FREEIPA.ORG
    Krb5KeyTab /etc/httpd/conf/openstack.keytab
    KrbSaveCredentials on
    KrbLocalUserMapping on
    Require valid-user
    #  SSLRequireSSL
     LookupUserAttr mail REMOTE_USER_EMAIL " "
    LookupUserGroups REMOTE_USER_GROUPS ";"
   </location>

Make sure Keystone can handle Kerberos. In /etc/keystone/keystone.conf

[auth]
methods = external,password,token,oauth1,kerberos
kerberos = keystone.auth.plugins.mapped.Mapped

and under federation

[federation]
# I tried this first, and it worked.  It will have issues when sharing WebSSO with other mechanisms
remote_id_attribute = IDP_ID
trusted_dashboard = http://horizon.cloudlab.freeipa.org/auth/websso/
sso_callback_template = /etc/keystone/sso_callback_template.html

#this is supposed to work but does not yet.
[kerberos]
remote_id_attribute=IDP_ID

Once this is set, copy the template file for the WebSSO POST response from the Keystone repo to /etc/keystone:

 cp /opt/stack/keystone/etc/sso_callback_template.html /etc/keystone/
curl   --negotiate -u:   $HOSTNAME:5000/v3/OS-FEDERATION/identity_providers/sssd/protocols/kerberos/auth

Returns

{"token": {"methods": ["kerberos"], "expires_at": "2015-04-02T21:17:51.054150Z", "extras": {}, "user": {"OS-FEDERATION": {"identity_provider": {"id": "sssd"}, "protocol": {"id": "kerberos"}, "groups": [{"id": "482eb4e6a0c64348845773b506d1db77"}, {"id": "6da803796a4540d48a0aff3b3185edad"}, {"id": "f0bf681ae2e84d1580a7ff54ea49bf27"}]}, "domain": {"id": "Federated", "name": "Federated"}, "id": "ayoung", "name": "ayoung"}, "audit_ids": ["J-wAsamnQ5-NHjXRYHSAbA"], "issued_at": "2015-04-02T20:17:51.054182Z"}}

And to test WebSSO

curl   --negotiate -u:   $HOSTNAME:5000/v3/auth/OS-FEDERATION/websso/kerberos?origin=http://horizon.cloudlab.freeipa.org/auth/websso/

Returns

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    
  </head>
  <body>
     
Please wait...
<noscript> </noscript>
<script type="text/javascript"> window.onload = function() { document.forms['sso'].submit(); } </script> </body> </html>

On to Horizon:

Since this is devstack, the config changes go in:
/opt/stack/horizon/openstack_dashboard/local/local_settings.py

I put all my custom settings at the bottom:

WEBSSO_ENABLED = True


WEBSSO_CHOICES = (
        ("credentials", _("Keystone Credentials")),
        ("kerberos", _("Kerberos")),
      )

WEBSSO_INITIAL_CHOICE="kerberos"


COMPRESS_OFFLINE=True

OPENSTACK_KEYSTONE_DEFAULT_ROLE="Member"

OPENSTACK_HOST="horizon.cloudlab.freeipa.org"

OPENSTACK_API_VERSIONS = {
    "identity": 3
}

OPENSTACK_KEYSTONE_URL="http://horizon.cloudlab.freeipa.org:5000/v3"

Around here is where I cloned the django-openstack-auth repo and made it available to the system using:

 cd /opt/stack/
 git clone https://git.openstack.org/openstack/django_openstack_auth
 cd django_openstack_auth
 sudo python setup.py develop
 sudo systemctl restart httpd.service

When you hit Horizon from a browser it should look like this:

Horizon login screen set up for federation, showing Kerberos by default


There was a lot of trial and error in making this work, and the cause of an error is not always clear. Some of the things that tripped me up, both the first time and when trying to replicate it:

  • There was one outstanding bug that needed to be fixed. I patched this inline. The fix has already merged.
  • Getting the HOSTNAME right on the host. Removing cloud-init worked for me, although I’ve been assured there are better ways to do that.
  • The Horizon config needs to use the hostnames, not the IP addresses, for Kerberos to work.
  • If you do the sssd setup before installing Apache HTTPD and the apache user does not yet exist, the sssd daemon won’t restart. However, if you forget to add the apache user to sssd.conf, the web server won’t be able to read from D-Bus, and thus the REMOTE_USER_GROUPS env var won’t be passed to HTTPD. The error message is
    IndexError: tuple index out of range
  • The keytab needs to be owned by the apache user.
  • Horizon needs to both use the V3 API explicitly and use the AUTH_URL that ends in /v3. It might be possible to drop the /v3 and depend on discovery, but leaving the v2.0 on there will certainly break auth.
  • As I had buried in the devstack instructions: edit the file files/rpms/general and comment out the libyaml-devel package. The functionality is provided by a different package, and that package does not exist in RHEL7.

April 01, 2015

JOSE – JSON Object Signing and Encryption

Federated Identity Management has become very widespread in recent years – in addition to enterprise deployments, a lot of popular web services allow users to carry their identity over multiple sites. Social networking sites especially are in a good position to drive federated identity management, as they have both a critical mass of users and the incentive to become identity providers. As users move away from a single device to using multiple portable devices, there is constant pressure to make the federated identity protocols simpler, more user friendly (especially for developers) and easier to implement (on a wide range of devices and platforms).

Unfortunately older technologies are deeply rooted in enterprise environments and are either unsuitable for the Internet (Kerberos) or based on more or less complicated data serialization (e.g. OpenID 2.0 or SAML). Canonicalization, whitespace handling and the representation of binary data are among the challenges that the various serialization formats face.

OpenID Connect

Another approach to serializing structured data in the context of identity management is to use the already widespread and simple JSON in combination with base64url encoding of the data. The advantage in the context of federated identity management is obvious – while being sufficiently universal, both have almost native support in web clients. This level of simplicity and interoperability is very compelling, despite some shortcomings in, for example, bandwidth efficiency.

This approach is taken by the upcoming OpenID Connect standard, a third revision of the OpenID protocol. The protocol describes a method of providing identity-based claims from an identity provider to a relying party, with the end user being authenticated by the identity provider and authorizing the request. The communication between the client, the relying party and the identity provider follows the OAuth 2.0 protocol, just as in the previous version of the protocol, OpenID 2.0. Claims have the format of a JSON key-value hash, and the task of protecting their integrity, and possibly confidentiality, is addressed by the JSON Object Signing and Encryption (JOSE) standard.

JOSE

The standard provides a general approach to signing and encryption of any content, not necessarily in JSON. However, it is deliberately built on JSON and base64url to be easily usable in web applications. Also, while being used in OpenID Connect, it can be used as a building block in other protocols.

JOSE is still an upcoming standard, but final revisions should be available shortly. It consists of several upcoming RFCs:

  • JWA – JSON Web Algorithms, describes cryptographic algorithms used in JOSE
  • JWK – JSON Web Key, describes format and handling of cryptographic keys in JOSE
  • JWS – JSON Web Signature, describes producing and handling signed messages
  • JWE – JSON Web Encryption, describes producing and handling encrypted messages
  • JWT – JSON Web Token, describes representation of claims encoded in JSON and protected by JWS or JWE

JWK

JSON Web Key is a data structure representing a cryptographic key with both the cryptographic data and other attributes, such as key usage.

{ 
  "kty":"EC",
  "crv":"P-256",
  "x":"MKBCTNIcKUSDii11ySs3526iDZ8AiTo7Tu6KPAqv7D4",
  "y":"4Etl6SRW2YiLUrN5vfvVHuhp7x8PxltmWWlbbM4IFyM",
  "use":"enc",
  "kid":"1"
}

The mandatory "kty" key type parameter describes the cryptographic algorithm associated with the key. Depending on the key type, other parameters may be used – as shown in the example, an elliptic curve key contains a "crv" parameter identifying the curve, the "x" and "y" coordinates of the point, an optional "use" parameter to denote the intended usage of the key and "kid" as a key ID. The specification currently describes three key types: "EC" for Elliptic Curve, "RSA" for, well, RSA, and "oct" for an octet sequence denoting a shared symmetric key.

JWS

The JSON Web Signature standard describes the process of creating and validating a data structure representing a signed payload. As an example, take the following string as a payload:

'{"iss":"joe",
 "exp":1300819380,
 "http://example.com/is_root":true}'

Incidentally, this string contains JSON data, but this is not relevant for the signing procedure and it might as well be any data. Before signing, the payload is always converted to base64url encoding:

eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6Ly9leGFtcGxlLm
NvbS9pc19yb290Ijp0cnVlfQ

Additional parameters are associated with each payload. The required parameter is "alg", which denotes the algorithm used for generating the signature (one of the possible values is "none" for unprotected messages). The parameters are included in the final JWS in either the protected or the unprotected header. The data in the protected header is integrity protected and base64url encoded, whereas the unprotected header carries human-readable associated data.

As an example, the protected header will contain the following data:

{"alg":"ES256"}

which in base64url encoding looks like this:

eyJhbGciOiJFUzI1NiJ9

The "ES256" here is the identifier for the ECDSA signature algorithm using the P-256 curve and the SHA-256 digest algorithm.

The unprotected header can contain a key ID parameter:

{"kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"}

The base64url encoded payload and protected header are concatenated with '.' to form the raw data, which is fed to the signature algorithm to produce the final signature.

Finally, the JWS output is serialized using either the JSON or the Compact serialization. Compact serialization is a simple concatenation of the base64url encoded protected header, payload and signature, separated by dots. JSON serialization is a human-readable JSON object, which for the example above would look like this:

{
  "payload": "eyJpc3MiOiJqb2UiLA0KICJleHAiOjEzMDA4MTkzODAsDQogImh0dHA6
              Ly9leGFtcGxlLmNvbS9pc19yb290Ijp0cnVlfQ",
  "protected": "eyJhbGciOiJFUzI1NiJ9",
  "header": {"kid":"e9bc097a-ce51-4036-9562-d2ade882db0d"},
  "signature": "DtEhU3ljbEg8L38VWAfUAqOyKAM6-Xx-F4GawxaepmXFCgfTjDxw5djxLa8IS
                lSApmWQxfKTUJqPP3-Kg6NU1Q"
}

This process for generating a signature is pretty straightforward, yet it still supports some advanced use cases, such as multiple signatures with separate headers.
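
To make the mechanics concrete, here is a minimal sketch of producing a compact-serialized JWS using nothing but the Python standard library. It signs with HS256 (HMAC-SHA256) rather than the ES256 of the example above, simply to avoid an external ECDSA dependency, and the key and payload are made up for illustration.

# Sketch: build a compact-serialized JWS (HS256) with the standard library.
import base64
import hashlib
import hmac
import json

def b64url(data):
    # base64url without padding, as required by JOSE
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

key = b"a-shared-secret-key"
protected_header = {"alg": "HS256"}
payload = {"iss": "joe", "exp": 1300819380, "http://example.com/is_root": True}

encoded_header = b64url(json.dumps(protected_header).encode("utf-8"))
encoded_payload = b64url(json.dumps(payload).encode("utf-8"))

# The signing input is the dot-separated encoded header and payload
signing_input = (encoded_header + "." + encoded_payload).encode("ascii")
signature = hmac.new(key, signing_input, hashlib.sha256).digest()

# Compact serialization: header '.' payload '.' signature
print(encoded_header + "." + encoded_payload + "." + b64url(signature))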

JWE

JSON Web Encryption follows the same logic as JWS with a few differences:

  • by default, a new content encryption key (CEK) should be generated for each message. This key is used to encrypt the plaintext and is attached to the final message. The recipient's public key or a shared key is used only to encrypt the CEK (unless direct encryption is used, see below).
  • only AEAD (Authenticated Encryption with Associated Data) algorithms are defined in the standard, so users do not have to think about how to combine JWE with JWS.

Just as with JWS, the header data of a JWE object can be transmitted in an integrity protected, unprotected or per-recipient unprotected header. The final JSON serialized output then has the following structure:

{
  "protected": "<integrity-protected header contents>",
  "unprotected": <non-integrity-protected header contents>,
  "recipients": [
    {"header": <per-recipient unprotected header 1 contents>,
     "encrypted_key": "<encrypted key 1 contents>"},
     ...
    {"header": <per-recipient unprotected header N contents>,
     "encrypted_key": "<encrypted key N contents>"}],
  "aad":"<additional authenticated data contents>",
  "iv":"<initialization vector contents>",
  "ciphertext":"<ciphertext contents>",
  "tag":"<authentication tag contents>"
}

The CEK is encrypted for each recipient separately, using different algorithms. This gives us the ability to encrypt a message to recipients with different kinds of keys, e.g. an RSA key, a shared symmetric key and an EC key.

The two algorithms used need to be specified as header parameters. The "alg" parameter specifies the algorithm used to protect the CEK, while the "enc" parameter specifies the algorithm used to encrypt the plaintext using the CEK as key. Needless to say, "alg" can have the value "dir", which marks direct usage of the shared key instead of a CEK.

As an example, assume we have the RSA public key of the first recipient and share a symmetric key with the second recipient. The "alg" parameter for the first recipient will have the value "RSA1_5", denoting the RSAES-PKCS1-v1_5 algorithm, and "A128KW", denoting AES 128 Key Wrap, for the second recipient, along with key IDs:

{"alg":"RSA1_5","kid":"2011-04-29"}

and

{"alg":"A128KW","kid":"7"}

These algorithms will be used to encrypt the content encryption key (CEK) for each of the recipients. After the CEK is generated, we use it to encrypt the plaintext with AES 128 in CBC mode with HMAC SHA-256 for integrity:

{"enc":"A128CBC-HS256"}

We can protect this information by putting it into a protected header, which, when base64url encoded, will look like this:

eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0

This data will be fed as associated data to the AEAD encryption algorithm and will therefore be protected by the final authentication tag.

Putting this all together, the resulting JWE object will look like this:

{
  "protected": "eyJlbmMiOiJBMTI4Q0JDLUhTMjU2In0",
  "recipients":[
    {"header": {"alg":"RSA1_5","kid":"2011-04-29"},
     "encrypted_key":
       "UGhIOguC7IuEvf_NPVaXsGMoLOmwvc1GyqlIKOK1nN94nHPoltGRhWhw7Zx0-
        kFm1NJn8LE9XShH59_i8J0PH5ZZyNfGy2xGdULU7sHNF6Gp2vPLgNZ__deLKx
        GHZ7PcHALUzoOegEI-8E66jX2E4zyJKx-YxzZIItRzC5hlRirb6Y5Cl_p-ko3
        YvkkysZIFNPccxRU7qve1WYPxqbb2Yw8kZqa2rMWI5ng8OtvzlV7elprCbuPh
        cCdZ6XDP0_F8rkXds2vE4X-ncOIM8hAYHHi29NX0mcKiRaD0-D-ljQTP-cFPg
        wCp6X-nZZd9OHBv-B3oWh2TbqmScqXMR4gp_A"},
    {"header": {"alg":"A128KW","kid":"7"},
     "encrypted_key":
        "6KB707dM9YTIgHtLvtgWQ8mKwboJW3of9locizkDTHzBC2IlrT1oOQ"}],
  "iv": "AxY8DCtDaGlsbGljb3RoZQ",
  "ciphertext": "KDlTtXchhZTGufMYmOYGS4HffxPSUrfmqCHXaI9wOGY",
  "tag": "Mz-VPPyU4RlcuYv1IwIvzw"
}
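
As a rough illustration of the direct-encryption ("dir") case, the sketch below uses the AESGCM primitive from the third-party cryptography package (an assumed dependency, not something the text above prescribes). The base64url encoded protected header is passed as the associated data, so it ends up covered by the authentication tag exactly as described above; this is a sketch of the idea, not a complete JWE implementation.

# Sketch: a "dir" + A256GCM JWE-like object using the cryptography package.
import base64
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

key = AESGCM.generate_key(bit_length=256)   # shared key used directly as CEK
protected_header = {"alg": "dir", "enc": "A256GCM"}
plaintext = b"Live long and prosper."

encoded_header = b64url(json.dumps(protected_header).encode("utf-8"))
iv = os.urandom(12)   # 96-bit IV, as expected by A256GCM

# The encoded protected header is the additional authenticated data, so any
# tampering with it is detected when the tag is verified.
ct_and_tag = AESGCM(key).encrypt(iv, plaintext, encoded_header.encode("ascii"))
ciphertext, tag = ct_and_tag[:-16], ct_and_tag[-16:]

jwe = {
    "protected": encoded_header,
    "iv": b64url(iv),
    "ciphertext": b64url(ciphertext),
    "tag": b64url(tag),
}
print(json.dumps(jwe, indent=2))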

JWA

JSON Web Algorithms defines the algorithms and their identifiers to be used in JWS and JWE. The three parameters that specify algorithms are "alg" for JWS, and "alg" and "enc" for JWE.

"enc": 
    A128CBC-HS256, A192CBC-HS384, A256CBC-HS512 (AES in CBC with HMAC), 
    A128GCM, A192GCM, A256GCM

"alg" for JWS: 
    HS256, HS384, HS512 (HMAC with SHA), 
    RS256, RS384, RS512 (RSASSA-PKCS-v1_5 with SHA), 
    ES256, ES384, ES512 (ECDSA with SHA), 
    PS256, PS384, PS512 (RSASSA-PSS with SHA for digest and MGF1)

"alg" for JWE: 
    RSA1_5, RSA-OAEP, RSA-OAEP-256, 
    A128KW, A192KW, A256KW (AES Keywrap), 
    dir (direct encryption), 
    ECDH-ES (EC Diffie Hellman Ephemeral+Static key agreement), 
    ECDH-ES+A128KW, ECDH-ES+A192KW, ECDH-ES+A256KW (with AES Keywrap), 
    A128GCMKW, A192GCMKW, A256GCMKW (AES in GCM Keywrap), 
    PBES2-HS256+A128KW, PBES2-HS384+A192KW, PBES2-HS512+A256KW 
    (PBES2 with HMAC SHA and AES keywrap)

At first look the wealth of choice for "alg" in JWE is balanced by just two families of options for "enc". Thanks to "enc" and "alg" being separate, the algorithms suitable for encrypting a cryptographic key and for encrypting content can be defined separately. The AES Key Wrap scheme defined in RFC 3394 is the preferred way to protect a cryptographic key. The scheme uses a fixed value of IV, which is checked after decryption and provides integrity protection without making the encrypted key longer (by adding an IV and authentication tag). But here's a catch – while A128KW refers to the AES Key Wrap algorithm as defined in RFC 3394, the word "keywrap" in A128GCMKW is used in a more general sense as a synonym for encryption, so it denotes simple encryption of the key with AES in GCM mode.
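
For a feel of what the A128KW family does, the cryptography package (again an assumed dependency, not something required by the standard text) exposes RFC 3394 key wrap directly. Note that the wrapped key is only 8 bytes longer than the CEK, which is the space advantage mentioned above; the keys here are random and purely illustrative.

# Sketch of RFC 3394 AES Key Wrap, the primitive behind A128KW/A192KW/A256KW.
import os

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap

kek = os.urandom(16)   # 128-bit key-encryption key shared with the recipient
cek = os.urandom(32)   # 256-bit content encryption key to be protected

wrapped = aes_key_wrap(kek, cek, default_backend())
print(len(cek), len(wrapped))   # 32 40 - only 8 bytes of overhead

# Unwrapping checks the built-in integrity value and raises an exception
# if the wrapped key was tampered with.
assert aes_key_unwrap(kek, wrapped, default_backend()) == cek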

JWT

While the previous parts of JOSE provide general-purpose cryptographic primitives for arbitrary data, the JSON Web Token standard is more closely tied to OpenID Connect. A JWT object is simply a JSON hash with claims that is either signed with JWS or encrypted with JWE and serialized using compact serialization. Beware of a terminological quirk – when a JWT is used as the plaintext in a JWE or JWS, it is referred to as a nested JWT (rather than signed or encrypted).

The JWT standard defines claims – name/value pairs asserting information about a subject. The claims include:

  • “iss” to identify issuer of the claim
  • “sub” identifying subject of JWT
  • “aud” (audience) identifying intended recipients
  • “exp” to mark expiration time of JWT
  • “nbf” (not before) to mark time before which JWT must be rejected
  • “iat” (issued at) to mark time when JWT was created
  • “jti” (JWT ID) as unique identifier for JWT

While the standard mandates the values of the claims, all of them are optional to use in a valid JWT. This means applications can use any structure for a JWT that is not intended for public use, while for public JWTs the set of claims is defined and name collisions are prevented.
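
As a quick illustration of these claims in practice, here is a small sketch using the third-party PyJWT library (my choice for the example, not something the standard mandates); the secret and claim values are made up.

# Sketch: issue and verify an HS256-signed JWT carrying the standard claims.
import datetime

import jwt  # pip install PyJWT

secret = "a-shared-secret"
claims = {
    "iss": "https://idp.example.com",   # issuer
    "sub": "ayoung",                    # subject
    "aud": "https://rp.example.com",    # intended audience
    "iat": datetime.datetime.utcnow(),  # issued at
    "exp": datetime.datetime.utcnow() + datetime.timedelta(minutes=5),
}

# Produces a compact-serialized, HS256-signed JWT
token = jwt.encode(claims, secret, algorithm="HS256")
print(token)

# Decoding verifies the signature, "exp"/"nbf" and, when asked for, "aud"
decoded = jwt.decode(token, secret, algorithms=["HS256"],
                     audience="https://rp.example.com")
print(decoded["sub"])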

The good

The fact that JOSE combines JSON and base64url encoding, making it simple and web friendly, is a clear win. Although we will definitely see JOSE adopted in the web environment first, it does have the ambition to become a more general purpose standard.

The design promotes secure choices, e.g. the use of a unique CEK per message, which makes users default to secure configurations while still giving them the option to use less secure methods ("dir" for encryption, "none" in JWS). Being a new standard, the authors did seize the opportunity to define only secure algorithms. This is certainly good, but as advances in cryptography weaken the algorithms, the ability to deprecate algorithms (with backwards compatibility always being an issue) will become more important in the future.

The bad

Despite the effort to keep the standard simple, some complexities inevitably slipped through. One obvious example comes from the different serializations. The JOSE standards support two serializations – JSON and Compact. JSON serialization is human readable and gives users more freedom, as it supports advanced features like multiple recipients. However, since a single recipient is the much more common case, the standard also defines a variant called flattened JSON serialization. In this variant the parameters from nested fields (like "recipients") are moved directly to the top-level JSON object, making multiple recipients with flattened serialization impossible.

The compact serialization is intended, according to the standard, for "space constrained environments". The result of the serialization is a dot-separated concatenation of the base64url encoded segments of the original JSON object. For example, for JWS the serialization is constructed as follows:

BASE64URL(UTF8(JWS Protected Header)) || '.' ||
BASE64URL(JWS Payload) || '.' ||
BASE64URL(JWS Signature)

Astute readers will notice that compact serialization further restricts the available features of JWS, e.g. it is no longer possible to include an unprotected header. The space saving comes from dropping the keys that denote the parts of the JSON object ("payload", "signature" etc.). On the other hand, base64url encoding expands the length of cleartext data, so in an extreme example a compact-serialized JWS might actually be longer than the JSON-serialized one, if the header contains enough data (to compensate for the inefficiency of specifying keys in the object) and could otherwise have been stored as an unprotected header. Also of importance is the number of dot-separated sections, since their number is the only method of differentiating between a compact-serialized JWE and JWS. Other proposed extensions, such as Key Managed JSON Web Signature (KMJWS), must take this into account.

The standard also still contains several ambiguities, e.g. JWK defines a JWK Set and states that "The member names within a JWK Set MUST be unique;" without specifying what a member name actually is.

The ugly

The JOSE standard is already incorporated in the OpenID Connect standard. As the standards evolved side by side, OpenID Connect is based on older revisions of JOSE. More importantly, section 15.6.1, Pre-Final IETF Specifications, of OpenID Connect states:

“Implementers should be aware that this specification uses several IETF specifications that are not yet final specifications … While every effort will be made to prevent breaking changes to these specifications, should they occur, OpenID Connect implementations should continue to use the specifically referenced draft versions above in preference to the final versions …”

Compatibility issues are always the bane of a cryptographic standard, and this decision to prefer pre-final revisions of the JOSE standard might force implementations to make some hard decisions.

Future

The JOSE standard seems to be quickly approaching its final revisions and we will most probably see more of it on the web. Implementations for most of the popular languages are in place, and we will see whether the decision to award the Special European Identity Award for Best Innovation for Security in the API Economy to JOSE will also stand the test of time.

March 31, 2015

Report on IoT (Internet of Things) Security

IoT (Internet of Things) devices have – and in many cases have earned! – a rather poor reputation for security. It is easy to find numerous examples of security issues in various IoT gateways and devices.

So I was expecting the worst when I had the opportunity to talk to a number of IoT vendors and to attend the IoT Day at EclipseCon. Instead, I was pleasantly surprised to discover that considerable attention is being paid to security!

  • Frameworks, infrastructure, and lessons from the mobile phone space are being applied to IoT. The mobile environment isn’t perfect, but has made considerable progress over the last few years. This is actually a pretty good starting point.
  • Code signing is being emphasized. This means that the vendor has purchased a code signing certificate from a known Certificate Authority and used it to sign their application. This ensures that the code has not been corrupted or tampered with and provides some assurance that it is coming from a known source. Not an absolute guarantee, as the Certificate Authorities aren’t perfect, but a good step.
  • Certificate based identity management, based on X.509 certificates, is increasingly popular. This provides a strong mechanism to identify systems and encrypt their communications.
  • Oauth based authentication and authorization is becoming more widely used.
  • Encrypted communications are strongly recommended. The Internet of Things should run on https!
  • Encrypted storage is recommended.

Julian Vermillard of Sierra Wireless gave a presentation at EclipseCon on 5 Elements of IoT Security. His points included:

  • Secure your hardware. Use secure storage and secure communications. Firmware and application updates should be signed.
  • “You can’t secure what you can’t update.”
    • Upgrades must be absolutely bulletproof – you can never “brick” a device!
    • Need rollback capabilities for all updates. An update may fail for many reasons, and you may need to revert to an earlier version of the code. For example, an update might not work with other software in your system.
  • Secure your communications
    • Recommends using Perfect Forward Secrecy.
    • Use public key cryptography:
      • X.509 certificates (see above discussions on X.509). Make sure you address certificate revocation.
      • Pre-Shared Keys. This is often easier to implement but weaker than a full Public Key X.509 infrastructure.
      • Whatever approach you take, make sure you can handle regular secret rotation or key rotation.
    • For low end devices look at TLS Minimal. I’m not familiar with this; it appears to be an IETF Draft.

Julian also recommended keeping server security in mind – the security of the backend service the IoT device or gateway is talking to is as important as device level security!

The challenge now is to get actual IoT manufacturers and software developers to build robust security into their devices. For industrial devices, where there is a high cost for security failures, we may be able to do this.

For consumer IoT devices you will have to vote with your wallet. If secure IoT devices sell better than insecure ones, manufacturers will provide security. If cost and time to market are everything, we will get insecure devices.


March 26, 2015

OpenStack keeps resetting my hostname

No matter what I changed, something kept setting the hostname on my vm to federate.cloudlab.freeipa.org.novalocal. Even forcing the /etc/hostname file to be uneditable did not prevent this change. Hunting this down took far too long, and here is the result of my journey.

Old Approach

A few releases ago, I had a shell script for spinning up new virtual machines that dealt with dhclient resetting values by putting overrides into /etc/dhclient.conf.  Finding this file was a moving target.  First it moved into

/etc/dhcp/dhclient.conf.

Then to a file inside

/etc/dhcp/dhclient.d

And so on.  The change I wanted to make was to do two things:

  1.  Set the hostname explicitly and keep it that way
  2. Use my own DNS server, not the DHCP-managed one

Recently, I started working on a RHEL 7.1 system running on our local cloud.  No matter what I did, I could not fix the host name.  Here are some  of the things I tried:

  1. Setting the value in /etc/hostname
  2. running hostnamectl set-hostname federate.cloudlab.freeipa.org
  3. Using nmcli to set the properties for the connection's IPv4 configuration
  4. Explicitly Setting it in /etc/sysconfig/network-scripts/ifcfg-eth0
  5. Setting the value in /etc/hostname and making hostname immutable with chattr +i /etc/hostname

Finally, Dan Williams (dcbw) suggested I look in the journal to see what was going on with the host name.  I ran journalctl -b and did a grep for hostname.  Everything looked right until…

Mar 26 14:01:10 federate.cloudlab.freeipa.org cloud-init[1914]: [CLOUDINIT] stages.py[DEBUG]: Running module set_hostname (<module 'cloudinit.config.cc_set_hostname' from '/usr/lib/python2.7/site-packages/cloudinit...

cloud-init?

But…I thought that was only supposed to be run when the VM was first created? So, regardless of the intention, it was no longer helping me.

yum erase cloud-init

And now the hostname that I set in /etc/hostname survives a reboot. I’ll post more when I figure out why cloud-init is still running after initialization.

For discussion: Orphaned package in Fedora

The Fedora Security Team (FST) has uncovered an interesting problem.  Many packages in Fedora aren’t being actively maintained, meaning they are unofficially orphaned.  This is likely not a problem since at least some of these packages will happily sit there and be well behaved.  The ones we worry about are the ones that pick up CVEs along the way, leaving the door open to unscrupulous behaviour.

The FST has been plugging away at trying to help maintainers update their packages when security flaws are known to exist.  So far we’ve almost hit the 250 bug level.  Unfortunately we’ve had to force a policy that still isn’t perfect.  What do you do with a package that is no longer supported and has a known vulnerability in it?  Unless you can recruit someone to adopt the package, the only responsible choice you have is to retire the package and remove it from the repositories.

This, of course, leads to other problems, specifically that someone has that package installed and they don’t know that the package is no longer supported, nor do they know it contains a security vulnerability.  This morning, during the FST meeting, we discussed the problem a bit and I had an idea that I’ll share here in hopes of starting a discussion.

The Idea

Create a file containing all the packages that have been retired from a repository and perhaps a short reason for why this package has been retired.  Then have yum/dnf consume this information regularly and notify the user/admin when a package that is installed is added to this list.  This allows the system admin to become aware of the unsupported nature of the package and allows them to make a decision as to whether or not to keep the package on the system.

Okay, discuss…


A change in thinking…

When I entered the information security world in late 2001 I received training on communications technologies that included a significant interest in confidentiality.  Obviously the rest of the trifecta, integrity and availability, were also important but maintaining communications security was king.

Now, almost fifteen years later, I’m still focused on the trifecta with confidentiality coming out with a strong lead.  But my goals have changed.  While confidentiality is an important piece of the puzzle, for privacy and other reasons, I feel it should no longer be king with my work and writing.

Over the coming weeks I plan to focus on the availability of data.  And not just whether or not a file is on a server somewhere but diving into the heart of the availability problem.  File format standards, flexibility of the data to be used with accessibility tools, ability to translate the words into other languages to ease sharing, and the ability to move the information to other forms of media to improve access are all topics I want to cover.

I’m largely writing this as a reminder of ideas I want to research and discuss but I hope this gets other people thinking about their own works.  If you have a great idea don’t you want to make it easier for other people to consume your thoughts and be able to build on them?  Unfortunately the solution isn’t simple and I suspect much will be written over time about the topic.  Hopefully we’ll have a solution soon before that StarWriter file you have stored on a 5.25″ floppy drive is no longer readable.


March 25, 2015

Troubleshooting Keystone in a New Install

Recently heard complaints:

'I’ve done a deployment, and every time I try to log in to the dashboard, I get “An error occurred authenticating. Please try again later.” Somewhat surprisingly, the only log that I’m noticing showing anything of note is the Apache error log, which reports ‘Login failed for user “admin”‘. I’ve bumped keystone — where I’d assume the error is happening — to DEBUG, but it’s showing exactly zero activity. How do I go about debugging this?'

'Trying to enable LDAP with OpenStack/keystone in the Juno release. All the horizon users return the error “You are not authorized for any projects.” Similarly, all the OpenStack services are reported not to be authorized.'

What is supposed to happen:

  1.  You log in to Horizon using admin and the correct password
  2. Horizon passes that to Keystone in a token request
  3. Keystone uses that information to create a token. If the user has a default project set, the token is scoped to the default project
  4. The token is returned to Horizon

Let’s take a deeper look at step 3.
In order to perform an operation on a resource in a project, a user needs to be assigned a role in that project. So the failure could happen at a couple of steps.

  1. The user does not exist in the identity backend
  2. The user has the wrong password
  3. The user has no role assignments
  4. The user has a default project assigned, but does not have a role assignment for that project

The Keystone configuration file

Most deployments run with Keystone reading its configuration values from /etc/keystone/keystone.conf. It is an ini file, with section headers.

In Juno and Icehouse, the storage is split into two pieces: Identity and Assignment. Identity holds users and groups. Assignment holds roles, role assignments, projects and domains. Let’s start with the simplest scenario.

Identity in SQL, Assignments in SQL:

This is what you get from devstack if you make no customizations. To confirm that you are running this way, look in your keystone.conf file for the sections that start with

[identity]

and

[assignment]

and look for the value of driver. In a devstack deployment that I just ran, I have

[identity]
driver = keystone.identity.backends.sql.Identity

Which confirms I am running with the SQL driver for identity, and

[assignment]
driver = keystone.assignment.backends.sql.Assignment

Which confirms I am running with the SQL driver for Assignment.

First steps

For Devstack, I get my environment variables set using

. openrc

and this will set:

$OS_AUTH_URL $OS_NO_CACHE $OS_TENANT_NAME
$OS_CACERT $OS_PASSWORD $OS_USERNAME
$OS_IDENTITY_API_VERSION $OS_REGION_NAME $OS_VOLUME_API_VERSION

echo $OS_USERNAME
demo

To change to the admin user:

$ export OS_USERNAME=admin
$ export OS_PASSWORD=FreeIPA4All

While we are trying to get people to move to the common CLI, older deployments may only have the keystone CLI to work with. I’m going to start with that.

$ keystone --debug token-get
DEBUG:keystoneclient.auth.identity.v2:Making authentication request to http://192.168.1.58:5000/v2.0/tokens
INFO:requests.packages.urllib3.connectionpool:Starting new HTTP connection (1): 192.168.1.58
DEBUG:requests.packages.urllib3.connectionpool:"POST /v2.0/tokens HTTP/1.1" 200 3783
+-----------+----------------------------------+
| Property | Value |
+-----------+----------------------------------+
| expires | 2015-03-25T16:03:25Z |
| id | ec7c2d1f07c5414499c3cbaf7c59d4be |
| tenant_id | 69ff732083a64a1a8e34fc4d2ea178dd |
| user_id | 042b50edf70f484dab1f14e893a73ea8 |
+-----------+----------------------------------+

OK, what happens when I do keystone token-get? The CLI uses the information I provide to try to get a token:

$ echo $OS_AUTH_URL
http://192.168.1.58:5000/v2.0

OK…It is going to go to a V2 specific URL. And, to confirm:

$ echo $OS_IDENTITY_API_VERSION

2.0

We are using Version 2.0
The username, password and tenant used are

$ echo $OS_USERNAME
admin
$ echo $OS_PASSWORD
FreeIPA4All
$ echo $OS_TENANT_NAME
demo

Let’s assume that running keystone token-get fails for you. Let’s try to isolate the issue to the role assignments by getting an unscoped token:

$ unset OS_TENANT_NAME
$ echo $OS_TENANT_NAME

That should return a blank line. Now:

$ keystone token-get
+----------+----------------------------------+
| Property | Value |
+----------+----------------------------------+
| expires | 2015-03-25T16:14:28Z |
| id | 2a3ce489422342f2b6616016cb43ebc2 |
| user_id | 042b50edf70f484dab1f14e893a73ea8 |
+----------+----------------------------------+

If this fails, it could be one of a few things:

  1. User does not exist
  2. Password is wrong
  3. User has a default tenant that is invalid
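
One quick way to rule out the first two causes is to take the CLI out of the picture and make the same unscoped v2.0 token request directly. Below is a minimal sketch using python-requests (an extra dependency, not part of the deployment described here) that reads the credentials from the variables set by openrc; a 401 points at the username or password, while a 200 means the credentials are fine and the problem is elsewhere.

# Sketch: request an unscoped Keystone v2.0 token without the keystone CLI.
import json
import os

import requests

auth_url = os.environ["OS_AUTH_URL"]   # e.g. http://192.168.1.58:5000/v2.0
body = {
    "auth": {
        "passwordCredentials": {
            "username": os.environ["OS_USERNAME"],
            "password": os.environ["OS_PASSWORD"],
        }
        # no "tenantName" here, so the returned token is unscoped
    }
}

resp = requests.post(auth_url + "/tokens",
                     data=json.dumps(body),
                     headers={"Content-Type": "application/json"})
print(resp.status_code)
print(json.dumps(resp.json(), indent=2))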

How can we check?

Using Admin Token

Bootstrapping the Keystone install requires putting users in the database before there are any users defined. Most installers take advantage of an alternate mechanism called the ADMIN_TOKEN or SERVICE_TOKEN. To see the value for this, look in the keystone.conf section

[DEFAULT]

for a value like this:

#admin_token = ADMIN

Note that devstack follows the best practice of disabling the admin token by commenting it out. This token is very powerful and should be disabled in normal usage, but it is very handy for fixing broken systems. To enable it, uncomment the value and restart Keystone.

Using the Common CLI

The keystone command line has been deprecated with an eye toward using the openstack client. Since you might be deploying an old version of Openstack that has different library dependencies, you might not be able to install the latest version on your server, but you can (and should) run an updated version on your workstation which will then be capable of talking to older versions of keystone.
To perform operations using the common CLI you need to pass the endpoint and admin_token as command line parameters.

The os-url needs to be the publicly routed URL to the admin interface. The firewall port for that URL needs to be open.

$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ user list
+----------------------------------+----------+
| ID | Name |
+----------------------------------+----------+
| 042b50edf70f484dab1f14e893a73ea8 | admin |
| eb0d4dc081f442dd85573740cfbecfae | demo |
+----------------------------------+----------+
$ openstack --os-token ADMIN --os-url http://127.0.0.1:35357/v2.0/ role list
+----------------------------------+-----------------+
| ID | Name |
+----------------------------------+-----------------+
| 1f069342be2348ed894ea686706446f2 | admin |
| 2bf27e756ff34024a5a9bae269410f44 | service |
| dc4e9608b6e64ee1a918030f23397ae1 | Member |
+----------------------------------+-----------------+
$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ project list
+----------------------------------+--------------------+
| ID | Name |
+----------------------------------+--------------------+
| 69ff732083a64a1a8e34fc4d2ea178dd | demo |
| 7030f12f6cb4443cbab8f0d040ff023b | admin |
+----------------------------------+--------------------+

Now, to check to see if the admin user has a role on the admin project:

$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ user role list --project admin admin

+----------------------------------+-------+---------+-------+
| ID | Name | Project | User |
+----------------------------------+-------+---------+-------+
| 1f069342be2348ed894ea686706446f2 | admin | admin | admin |
+----------------------------------+-------+---------+-------+

If this returns nothing, you probably have found the root of your problem. Add the assignment with:

$ openstack --os-token ADMIN --os-url http://192.168.1.58:35357/v2.0/ role add --project admin --user admin admin
+-------+----------------------------------+
| Field | Value |
+-------+----------------------------------+
| id | 1f069342be2348ed894ea686706446f2 |
| name | admin |
+-------+----------------------------------+

Not using IPv6? Are you sure?

World IPv6 Launch logo (CC-BY World IPv6 Launch)

Internet Protocol version 6 (IPv6) has been around for many years and was first supported in Red Hat Enterprise Linux 6 in 2010.  Designed to provide, among other things, additional address space on the ever-growing Internet, IPv6 has only recently become a priority for ISPs and businesses.

On February 3, 2011, ICANN announced that the available pool of unallocated IPv4 addresses had been completely emptied and urged network operators and server owners to implement IPv6 if they had not already done so.  Unfortunately, many networks still do not support IPv6, and many system and network administrators don’t understand the security risks associated with not having some sort of IPv6 control within their network setup, even if IPv6 is not supported.  The common thought that you don’t have to worry about IPv6 because it isn’t supported on your network is a false one.

The Threat

On many operating systems, Red Hat Enterprise Linux and Fedora included, IPv6 is preferred over IPv4.  A DNS lookup will search first for an IPv6 address and then an IPv4 address.  A system requesting a DHCP allocation will, by default, attempt to obtain both addresses as well.  When a network does not support IPv6 it leaves open the possibility of rogue IPv6 DHCP and DNS servers coming online to redirect traffic either around current network restrictions or through a specific choke point where traffic can be inspected, or both.  Basically, if you aren’t offering up IPv6 within your network, someone else could.

Just like on an IPv4 network, monitoring IPv6 on the internal network is crucial for security, especially if you don’t have IPv6 rolled out.  Without proper monitoring, an attacker, or a poorly configured server, could start providing a pathway out of your network, bypassing all established safety mechanisms meant to keep your data under control.

Implementing IPv6

There are several methods for protecting systems and networks from attacks revolving around IPv6.  The simplest, and most preferred, method is to simply start using IPv6.  It becomes much more difficult for rogue DNS and DHCP servers to be set up on a functioning IPv6 network.  Implementing IPv6 isn’t particularly difficult either.

Unfortunately IPv6 isn’t all that simple to implement either.  As UNC‘s Dr. Joni Julian spoke about in her SouthEast LinuxFest presentation on IPv6 Security, many of the tools administrators use to manage network connections have been rewritten, and thus renamed, to support IPv6.  This adds to the confusion when other tools, such as iptables, require different rules to be written to support IPv6.  Carnegie Mellon University’s CERT addresses many different facets of implementing IPv6, including ip6tables rules.  There are many resources available to help system and network administrators set up IPv6 on their systems and networks, and by doing so their networks will automatically be available to the IPv6-only networks of the near future.

Blocking and Disabling IPv6

If setting up IPv6 isn’t possible the next best thing is disabling, blocking, and monitoring for IPv6 on the network.  This means disabling IPv6 in the network stack and blocking IPv6 in ip6tables.

# Set DROP as default policy to INPUT, OUTPUT, and FORWARD chains.
ip6tables -P INPUT DROP
ip6tables -P OUTPUT DROP
ip6tables -P FORWARD DROP

# Set DROP as a rule to INPUT and OUTPUT chains.
ip6tables -I INPUT -p all -j DROP
ip6tables -I OUTPUT -p all -j DROP

Because it can never be known that every system on a network will be properly locked down, monitoring for IPv6 packets on the network is important.  Many IDSs can be configured to alert on such activity, but configuration is key.

A few final words

IPv6 doesn’t have to be scary but if you want to maintain a secure network a certain amount of respect is required.  With proper monitoring IPv6 can be an easily manageable “threat”.  Of course the best way to mitigate the risks is to embrace IPv6.  Rolling it out and using it prevents many of the risks already discussed and it could already be an availability issue if serving up information over the Internet is important.

March 23, 2015

Firewalld rule for Minecraft Server

My sons play Minecraft.  I recently decided to let them play head to head on the same server.  Aside from the financial aspect (I had to buy a second account), it was fairly straightforward running the server.  The one thing that tripped me up was a firewall rule that prevented a remote client machine from connecting to the server.  The fix was pretty simple.

When running the server, the log showed:

[23:58:50] [Server thread/INFO]: Starting Minecraft server on *:25565

And so I knew that my firewalld configuration would block it. Killing firewalld and flushing the iptables rules confirmed it:

sudo systemctl stop firewalld.service
sudo iptables -F

But I don’t want to run without a firewall.

I want to open port 25565. To do so, I need to figure out what zone holds the firewall rule blocking it, and add a rule that opens this port.

$ firewall-cmd --get-active-zones
public
  interfaces: em1 tun0 virbr0 virbr1 virbr1-nic

Simple enough; I only have one zone (Fedora 21 default setup)

Since I only want to open this port when I fire up the game, I would probably be better off with a sudo rule embedded into the game startup script that opens the port dynamically, but I can do this by hand.

sudo firewall-cmd  --zone=public --add-port=25565/tcp

and then closes it upon shutdown:

sudo firewall-cmd  --zone=public --remove-port=25565/tcp

If I was setting up a machine to be a dedicated server, I would want this port to always be opened.

$ sudo firewall-cmd --permanent --zone=public --add-port=25565/tcp
success

Did that work?

$ sudo firewall-cmd --zone=public --query-port=25565/tcp
no

Not yet. So far I’ve only written the port into the permanent configuration; the running firewall hasn’t picked the change up yet. I want this applied and persisted.

$ sudo firewall-cmd --reload 
success
$ sudo firewall-cmd --zone=public --query-port=25565/tcp
yes

Now it is open and will be kept open. How? It gets written into the firewalld config file. If you run

 sudo less /etc/firewalld/zones/public.xml

In there you should see a line that contains:

  port protocol="tcp" port="25565"

If you decide to disable the server and want to close the port:

$ sudo firewall-cmd --permanent --zone=public --remove-port=25565/tcp
success
$ sudo firewall-cmd --reload 
success
$ sudo firewall-cmd --zone=public --query-port=25565/tcp
no

What if we want to name this port? We know that the client must look for port 25565 even if it isn’t in /etc/services. We can name this port “minecraft”, at least for firewalld purposes. Create this file:

sudo vi /etc/firewalld/services/minecraft.xml
<?xml version="1.0" encoding="utf-8"?>
<service>
  <short>minecraft</short>
  <description>Port used to allow remote connections to a Minecraft server running on this machine.</description>
  <port protocol="tcp" port="25565"/>
</service>

Now, instead of the above commands:

To query:

 sudo firewall-cmd --zone=public --query-service=minecraft

To Enable:

sudo firewall-cmd --zone=public --add-service=minecraft

And to Disable

sudo firewall-cmd  --zone=public --remove-service=minecraft

And use the --permanent flag and --reload if you want to make these changes survive a reboot.

March 18, 2015

CWE Vulnerability Assessment Report 2014

Last year has been over for almost three months and we have been busy completing the CWE statistics for our vulnerabilities. The biggest change from the year before is the scale of the data – the CWE report for 2013 was based on 37 classified vulnerabilities, whereas last year we classified 617 vulnerabilities in our bugzilla. Out of them 61 were closed with resolution NOTABUG, which means they were either not security issues or did not affect Red Hat products. These still include vulnerabilities which affect Fedora or EPEL packages only – narrowing this down to vulnerabilities affecting at least one supported Red Hat product, we end up with 479.

The graph below shows the Top 10 weaknesses in Red Hat software. Note the total sum is bigger than the overall number of vulnerabilities, as one vulnerability may be the result of multiple weaknesses. The most common case is CWE-190 Integer Overflow or Wraparound causing out-of-bounds buffer access problems.

The top spot is taken by cross site scripting with 36 vulnerabilities last year. However, closer examination reveals that despite the count, it was not very widespread. In fact, two packages had many more XSS flaws than the average: phpMyAdmin with 13 and OpenStack (Horizon and Swift) with 7. The standard recommendation to the developers would immediately be to use one of the modern web frameworks, whether it be Ruby on Rails, Django or others.

The second place is occupied by Out-of-bounds Read. Again, the distribution of vulnerabilities is not flat among packages, with xorg-x11-server having 9 and chromium-browser 5 vulnerabilities of this type last year. All of the xorg-x11-server vulnerabilities come from a single security advisory released on 2014-12-09, which fixed flaws reported by Ilja van Sprundel. The results of his security research of X, which led to the discovery of the flaws, were presented at CCC in 2013. His presentation is a great introduction to X security problems and is still available.

From the above we could hypothesize that the statistics are dominated by a smaller set of very vulnerable packages, or that certain packages are prone to certain kinds of weaknesses. The graph below shows the number of vulnerabilities that affected each of the packages – vulnerabilities which did not affect versions of packages we ship are excluded.

The median value of vulnerabilities per package is 1; however, not all packages are equal. Looking at the top 20, all of the packages contain large codebases, some of which are a separate product of an upstream vendor. We should not make the mistake of misinterpreting this graph as the Top 20 most vulnerable projects, as it would be fairer to compare apples with apples, e.g. kernel (a package) with OpenStack (which we ship as a product). A more honest interpretation would be to see it as a list of packages that increase the attack surface of the system the most when installed.

If we look at statistics per product, Red Hat Enterprise Linux dominates just by including a vast number of packages. The distribution of weaknesses is therefore very close to the overall one shown on the first graph above. However, if we look at the top 5 weaknesses in RHEL 5, 6 and 7, we can see a statistically significant drop in the number of use-after-free vulnerabilities.

The root cause of this has been traced back to our source code analysis group and the mass scans performed on the Fedora versions prior to the RHEL 7 rebase. These scans were performed using a couple of source code analysis tools, including Coverity and cppcheck, and the warnings were addressed as normal bugs. This explanation is also supported by the decreasing number of use-after-frees found in Fedora versions 17 to 19, which served as the basis for RHEL 7. Interestingly, other weaknesses like buffer access problems and overflows are unaffected, which is probably a combination of a) the inherent difficulty of their detection via code analysis and b) the large number of false positives, making the developers less inclined to address these types of warnings.

The two most common weaknesses in OpenShift Enterprise are Information Exposure and Cross Site Scripting. A closer look tells a different story – 5 out of 6 information exposure vulnerabilities were found in Jenkins, shipped as part of the OpenShift product. In fact, a surprising 21 out of 60 vulnerabilities that affected the OpenShift product were present in Jenkins. On the other hand, just 9 vulnerabilities were found in core OpenShift components.

Interestingly, the distribution of vulnerable components in OpenStack is flatter, with no component standing out. CWE-400 Uncontrolled Resource Consumption ('Resource Exhaustion') is the most common weakness, and all of those vulnerabilities affect core OpenStack components. The number of vulnerabilities in Keystone related to session expiration (4) is also surprising, as we haven't seen many vulnerabilities of that type in other packages last year.

Other products and components also tend to have their specific weaknesses: external entity expansion for Java/JBoss based products, out-of-bounds reads in FreeType, use after free in Mozilla, etc. Overall the depth of the data is much bigger and provides new possibilities for proactive research. Having more precise data for the feedback loop, allowing us to both evaluate past measures and propose future ones, is the next step towards more efficient proactive security. Unfortunately, the time it takes for any countermeasure to make a dent in the statistics is measured in releases, so this data will become much more interesting as it changes over time.

March 12, 2015

Postfix Encryption

I’ve been tinkering with the encryption options in Postfix for a while.  Encryption between clients and their SMTP server and between SMTP servers is necessary to protect the to, from, and subject fields, along with the rest of the header, of an email.  The body of the message is also protected but it’s always better to utilize PGP or S/MIME cryptography to provide end-to-end protection; encryption between clients and SMTP servers doesn’t provide this.

As rolled out now, encryption between SMTP servers is opportunistic encryption and is generally not required.  While doing a review of my mail log I seem to be receiving most personal mail via some encrypted circuit while much of the mail coming out of listservs, like Yahoo! Groups, is not negotiating encryption on connect.  I’ve also noticed that some email providers actually run their incoming email through an external service, I suspect for spam control, before accepting the message into their servers.  Some of these spam services don’t support encryption making it difficult to protect mail in transit.

Postfix documentation is pretty decent.  The project seems to document most settings but sometimes they don’t actually put the entire picture together.  Encryption is one of those things where a complete picture is difficult to put together just by looking at a single page of documentation.

Postfix’s documentation on TLS is fairly complete.  What it misses on that page, forward secrecy, must be found elsewhere.  Until last night, I had missed that last page; I have now fixed my configuration to include what I consider acceptable settings.

Here’s what I’ve got:

main.cf

### TLS
# enable opportunistic TLS support in the SMTP server
smtpd_tls_security_level = may
smtpd_tls_eecdh_grade = ultra
tls_eecdh_strong_curve = prime256v1
tls_eecdh_ultra_curve = secp384r1
smtpd_tls_loglevel = 1
smtpd_tls_cert_file = /etc/pki/tls/certs/mail.crt
smtpd_tls_key_file = /etc/pki/tls/private/mail.key
smtpd_tls_CAfile = /etc/pki/tls/certs/mail-bundle.crt
smtpd_tls_session_cache_timeout = 3600s
smtpd_tls_session_cache_database = btree:${queue_directory}/smtpd_scache
smtpd_tls_received_header = yes
smtpd_tls_ask_ccert = yes
tls_random_source = dev:/dev/urandom
#TLS Client
smtp_tls_security_level = may
smtp_tls_eecdh_grade = ultra
smtp_tls_loglevel = 1
smtp_tls_cert_file = /etc/pki/tls/certs/mail.crt
smtp_tls_key_file = /etc/pki/tls/private/mail.key
smtp_tls_CAfile = /etc/pki/tls/certs/mail-bundle.crt

master.cf

submission inet n       -       -       -       -       smtpd
-o smtpd_tls_security_level=encrypt
-o smtpd_sasl_auth_enable=yes
-o smtpd_sasl_type=dovecot
-o smtpd_sasl_path=private/auth
-o smtpd_sasl_security_options=noanonymous

Those familiar with setting up TLS in Apache will notice a few differences here.  We haven’t defined ciphers or SSL protocols.  This is because this is opportunistic encryption.  We’re just happy if encryption happens, even using EXPORT ciphers, since the alternative is plaintext.  In a more controlled setting you could define the ciphers and protocols and enforce their use.  Until encryption becomes the norm on the Internet (and why shouldn’t it be?) I’ll have to stick with just begging for encrypted connections.

It should also be noted that client-to-SMTP server connections are forced to be encrypted in master.cf as seen in the submission portion.  This was a quick and dirty way of forcing encryption on the client side while allowing opportunistic encryption on the public (port 25) side.

ECC keys can also be used with Postfix, which forces good ciphers and protocols, but most email servers have RSA keys established, so problems could arise from that.  Dual keys can always be used to take advantage of both ECC and RSA.

Just as SSLLabs is for testing your web server’s encryption settings, CheckTLS is for checking your SMTP encryption settings.  These tools are free and should be part of your regular security checks of your infrastructure.


USB Killer (or maybe it’s a killer via USB?)

A co-worker passed this along to me and I felt this was worthy of further dissemination.

http://kukuruku.co/hub/diy/usb-killer

And this, my friends, is why you shouldn’t just plug in random, unknown USB devices.


Emilio’s Craftsmanship

My Saxophone is back from the workshop of Emilio Lyons.  It is a pleasure to play on it.  I would say “like new” but for two things.  First, the horn was twenty years old when I got it, so I never played it new.  Second, Emilio has customized the feel of the horn enough that I suspect it never played like this.  What did he do?

Back when I dropped off my tenor, I pointed out issues with both of the thumb rests.  The lower thumb rest was a hard plastic hook with too sharp a curve for my finger.  He showed me then the replacement he would use.  Here it is installed.

image

It is much smoother.  I love the feel of the brass and the wider contact platform.  This is one benefit of working with someone with such long experience; he has older replacement parts hoarded,  some of which are quite unique.  He kept my old thumb piece in case someone else needed it in the future.

The other thumb rest is very near the access hole to one of the rods.  I always found it irritating to my thumb.  Here is what he added to make it comfortable.

image

The black disk was the original.  He added the triangular shaped piece to the left. Note how he still made the screw for the rod accessible.

Right hand finger adjustment.

image

Left hand key adjustments.  Compared to a stock setup, this makes the notes much easier and faster to play.

image
All new pads.  Coverage is fantastic.  Low notes come out as cleanly as I’ve ever got them to sound.

image

Here are the replacements for the corks that prevent metal-on-metal contact during the key movements.  He replaced all the corks with rubber.  Emilio is very materials focused, and had been looking for an improvement over cork for a long time.  He found it in the little nubs that come on the bottom side of a carpet.  They are almost silent when tapped on the horn body.

image

Neck cork.  Very precise for my mouthpiece.

image

Tightened the neck.  You can’t see from the pictures,  but I can feel it.

image

Serial number for recorded posterity.

image

He replaced pretty much everything designed to be replaced except for the springs,  those are still the original.

The horn.

image

Emilio does not play saxophone himself.  To test it out, he had another customer, a very good professional tenor player, try it out: “He liked that horn a lot.”

 

Trade marks.

image

 

 

March 11, 2015

CWE update

In the past Red Hat Product Security assigned weakness IDs only to vulnerabilities that met certain criteria, more precisely, only vulnerabilities with a CVSS score higher than 7. Since the number of incoming vulnerabilities was high, this filtering allowed us to focus on the vulnerabilities that matter most. However, it also made the statistics incomplete, missing low and moderate vulnerabilities.

In the previous year we started assigning weakness IDs to almost all vulnerabilities, greatly increasing the quantity of data used to generate statistics. This was a big commitment time-wise, but resulted in 13 times more vulnerabilities with assigned weakness IDs in 2014 than the year before. There are a few exceptions – for some vulnerabilities there is not enough information available to decide the types of weaknesses. These almost always come from big upstream vendors. For this reason bugs in MySQL or OpenJDK do not have weaknesses assigned and are excluded from the CWE statistics. Apart from the exceptions mentioned, there are always at least references to the commits that fix a vulnerability available, so it is possible to assign correct weakness data to vulnerabilities in any open source project.

Part of using Common Weakness Enumeration (CWE) at Red Hat is the CWE coverage – a subset of weaknesses that we use to classify vulnerabilities. As anyone who has scrolled through the CWE list will notice, there are a lot of weaknesses that are very similar or that describe the same issue at varying levels of detail. This means different people can assign different weaknesses to the same vulnerability, a very undesirable outcome. It can also skew the resulting statistics, as vulnerabilities of the same nature may be described by different weaknesses. Keeping a fixed coverage counters both effects. The coverage should contain weaknesses at a similar level of detail (Weakness Base) and should not contain multiple overlapping weaknesses. However, a vulnerability may not fit any of the weaknesses in our coverage, and for this reason the coverage is regularly updated.

In the past, maintenance of the CWE coverage was tied to the release of new CWE revisions by MITRE. Since we started assigning weakness IDs to a much larger number of vulnerabilities, we also identified weaknesses missing from the coverage more quickly. The coverage has therefore been updated, and the changes are now reflected in the statistics. The current revision of Red Hat's CWE coverage can be found on the Customer Portal.

Apart from adding missing weaknesses, we also removed a number of unused or unsuitable ones. The first version of the coverage was based on the CWE Cross-Section, maintained as a view by MITRE, which represents a subset of weaknesses at the abstraction level most useful for general audiences. While this was a good starting point, it quickly became evident that the Cross-Section has numerous deficiencies. Some of the most common weaknesses are not included, for example CWE-611 Improper Restriction of XML External Entity Reference (‘XXE’), which ranked as the 10th most common weakness in our statistics for 2014. On the other hand, we did not include a considerable number of weaknesses that are not relevant to open source, for example CWE-546 Suspicious Comment. After these changes the current revision of the coverage has little in common with the CWE Cross-Section, but it represents the structure of weaknesses typically found in open source projects well.

Last but not least, all CWE-related data are kept public, and statistics (even for our internal use) are generated only from publicly available data. The weakness ID is stored in the whiteboard of a vulnerability in Bugzilla. This is a rather cryptic format and requires tooling to get the statistics into a form that can be processed. Therefore, we are currently investigating the best way to make the statistics available online for a wider audience.

March 08, 2015

CERN cares about information security… what about you?

As a security engineer it’s usually difficult for me to endure many of the dumb things companies do.  It’s quite sad when a company that prides itself on creating solutions for building internal systems to protect customer data starts pushing its own data out to Google and other “solution” providers.  It’s as if they don’t believe in their own products and think that a contract will actually protect their data.

So it’s quite refreshing when you run across a group that actually gets information security.  Recently, I ran across the information security bulletins at CERN (particle physics is another interest of mine) and was excited to find a group that gets it.  They provide internal, secure solutions for getting their work done without relying on outside providers such as Google, Apple, Microsoft, Amazon, and Dropbox cloud services (I wish more of the internal solutions were FOSS but…).  In fact, CERN feels externally-hosted solutions are a bad idea for both business and personal use.  I concur.

Here is a sample of their infosec bulletins:

What about you?  Do you care about the security of your information?


March 07, 2015

Keystone Federation via mod_lookup_identity redux

Last year I wrote a proof-of-concept for Federation via mod_lookup_identity. Some of the details have changed since then, and I wanted to do a formal one based on the code that will ship for Kilo. This was based on a devstack deployment.

UPDATE: Looks like I fooled myself: this only maps the first group. There is a patch outstanding that allows for lists of groups, and that is required to really make this work right.


The configuration of SSSD and mod_lookup_identity stayed the same, although the sssd-dbus RPM is already installed in F21.

Here is my devstack /opt/stack/devstack/local.conf

[[local|localrc]]
ADMIN_PASSWORD=<not gonna tell you>
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
SERVICE_TOKEN=password
USE_SSL=True

ENABLED_SERVICES="$ENABLED_SERVICES,-tempest,-h-api,-h-eng,-h-api-cfn,-h-api-cw"

Tempest didn’t like SSL. That is a recurring problem, and something we need to fix by making SSL the default.
I disabled Heat, too. Nothing against Heat, but I needed to speed up the install, and that was the easiest to leave off.

I’m getting a token with a request that looks like this:

#!/usr/bin/bash

OS_AUTH_URL=https://devstack.ayoung530.younglogic.net/keystone/sss


curl -v  \
-k \
-H "Content-Type:application/json" \
--negotiate -u : \
--cacert ca.crt  \
-d  '{ "auth": { "identity": { "methods": ["kerberos"], "kerberos":{"identity_provider":"sssd", "protocol":"sssd_kerberos"}}, "scope": { "unscoped": { } } } }' \
-X POST $OS_AUTH_URL/v3/auth/tokens

This is due to using the following in the keystone.conf:

[auth]
methods = external,password,token,kerberos

kerberos =  keystone.auth.plugins.mapped.Mapped

This implies that we will want to be able to put Federation data into the kerberos auth plugin for the client.

The trickiest part was getting the mapping right.  I’ve included the mapping below.

To set up the call, I used the openstack client. After sourcing openrc:

export OS_AUTH_URL=https://192.168.122.182:5000/v3
export OS_USERNAME=admin

openstack --os-identity-api-version=3 group create admins
openstack --os-identity-api-version=3 group create ipausers
openstack --os-identity-api-version=3    identity provider create sssd
openstack --os-identity-api-version=3   mapping create  --rules /home/ayoung/kerberos_mapping_edited.json  kerberos_mapping
openstack --os-identity-api-version=3 federation protocol create --identity-provider sssd --mapping kerberos_mapping sssd_kerberos

Here is the mapping from kerberos_mapping_edited.json:
       [
            {
                "local": [
                    {
                        "user": {
                            "name": "{0}",
                            "id": "{0}"
                        }
                    }
                ],
                "remote": [
                    {
                        "type": "REMOTE_USER"
                    }
                ]
            },

            {
                "local": [
                    {
                        "group": {
                            "name": "{0}",
                            "domain": {"name": "Default"}
                        }
                    }
                ],
                "remote": [
                    {
                        "type": "REMOTE_USER_GROUPS"
                    }
                ]
            }

        ]

My config for HTTPD Keystone looks like this:

LoadModule lookup_identity_module modules/mod_lookup_identity.so

WSGIDaemonProcess keystone-sss processes=5 threads=1 user=ayoung display-name=%{GROUP}
WSGIProcessGroup keystone-sss
WSGIScriptAlias /keystone/sss  /var/www/keystone/admin


WSGIApplicationGroup %{GLOBAL}
WSGIPassAuthorization On
ErrorLogFormat "%{cu}t %M"
ErrorLog /var/log/httpd/keystone.log
CustomLog /var/log/httpd/keystone_access.log combined
SSLEngine On
SSLCertificateFile /opt/stack/data/CA/int-ca/devstack-cert.crt
SSLCertificateKeyFile /opt/stack/data/CA/int-ca/private/devstack-cert.key


<Location /keystone/sss>
  AuthType Kerberos
  AuthName "Kerberos Login"
  KrbMethodNegotiate on
  KrbMethodK5Passwd off
  KrbServiceName HTTP
  KrbAuthRealms AYOUNG530.YOUNGLOGIC.NET
  Krb5KeyTab /etc/httpd/conf/openstack.keytab
  KrbSaveCredentials on
  KrbLocalUserMapping on
  Require valid-user
  SSLRequireSSL
  LookupUserAttr mail REMOTE_USER_EMAIL " "
  LookupUserGroups REMOTE_USER_GROUPS ";"
</Location>

I had to pre-create all groups from the mapping due to https://bugs.launchpad.net/keystone/+bug/1429334
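Once the protocol is created, the federation objects can be double-checked with the same client. This is just a quick sanity check, assuming the commands above succeeded and that your python-openstackclient build includes the federation commands:

openstack --os-identity-api-version=3 identity provider list
openstack --os-identity-api-version=3 mapping list
openstack --os-identity-api-version=3 federation protocol list --identity-provider sssd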

March 04, 2015

Convince Nova to Use the V3 version of the API

In a recent post I showed how to set up LDAP in a domain other than the default. It turns out that Nova does not accept the resulting tokens out of the box; by default, Nova uses the V2 version of the Keystone API only. This is easy to fix.

The first indication that something was wrong was that Horizon threw up a warning

Cannot Fetch Usage Information.

It turns out that all operations against Nova were failing. The default for the auth_token middleware should be to perform discovery to see which version of the Keystone API is supported. However, Nova seems to have a configuration override that pins the value to the V2.0 API. Looking in /etc/nova/nova.conf

I saw:

#auth_version=V2.0

Setting this to

auth_version=

And restarting all of the services fixed the problem.
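For context, auth_version is an option of the auth_token middleware and lives in the [keystone_authtoken] section of /etc/nova/nova.conf; leaving it empty (or removing the override entirely) lets the middleware discover the supported Keystone API version on its own. A sketch of the relevant fragment, with all other settings elided:

[keystone_authtoken]
# auth_version=V2.0 would pin Nova to the Keystone V2 API; leave it unset
# (or empty, as below) so the middleware can discover the version
auth_version=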

Factoring RSA export keys – FREAK (CVE-2015-0204)

This week’s issue with OpenSSL export ciphersuites has been discussed in the press as “Freak” and “Smack”. These are addressed by CVE-2015-0204, and updates for affected Red Hat products were released in January.

Historically, the United States and several other countries tried to control the export or use of strong cryptographic primitives. For example, any company that exported cryptographic products from the United States needed to comply with certain key size limits. For RSA encryption, the maximum allowed key size was 512 bits and for symmetric encryption (DES at that time) it was 40 bits.

The U.S. government eventually lifted this policy and allowed cryptographic primitives with bigger key sizes to be exported. However, these export ciphersuites did not really go away and remained in a lot of codebases (including OpenSSL), probably for backward compatibility purposes.

It was considered safe to keep these export ciphersuites lying around for multiple purposes.

  1. Even if your webserver supports export ciphersuites, most modern browsers will not offer that as a part of initial handshake because they want to establish a session with strong cryptography.
  2. Even if you use export cipher suites, you still need to factor the 512 bit RSA key or brute-force the 40-bit DES key. Though doable in today’s cloud/GPU infrastructure, it is pointless to do this for a single session.

However, this results in a security flaw, which affects various cryptographic libraries, including OpenSSL. OpenSSL clients would accept RSA export-grade keys even when the client did not ask for export-grade RSA. This could further lead to an active man-in-the-middle attack, allowing decryption and alteration of the TLS session in the following way (a quick way to check a server for export ciphersuites is sketched after this list):

  • An OpenSSL client contacts a TLS server and asks for a standard RSA key (non-export).
  • A MITM intercepts this request and asks the server for an export-grade RSA key.
  • Once the server replies, the MITM attacker forwards this export-grade RSA key to the client. The client has a bug (as described above) that allows the export-grade key to be accepted.
  • In the meantime, the MITM attacker factors this key and is able to decrypt all possible data exchange between the server and the client.
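One quick way to check whether a server is still willing to use export-grade ciphersuites is to force an OpenSSL client to offer only those suites. A rough sketch (example.com is a placeholder, and newer OpenSSL builds may have export ciphers compiled out entirely):

# List the export-grade ciphersuites this OpenSSL build still knows about
openssl ciphers -v 'EXPORT'

# Offer only export ciphersuites; if the handshake succeeds, the server
# still accepts export-grade keys
openssl s_client -connect example.com:443 -cipher 'EXPORT' < /dev/null

If the s_client handshake fails, the server is refusing export ciphersuites, which is what you want.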

This issue was reported to OpenSSL in October 2014, fixed in public in OpenSSL in January 2015, and shipped in Red Hat Enterprise Linux 6 and 7 two weeks later via RHSA-2015-0066. This issue has also been addressed in Fedora 20 and Fedora 21.

Red Hat Product Security initially classified this as having low security impact, but after more details about the issue and the possible attack scenarios have become clear, we re-classified it as a moderate-impact security issue.

Additional information on mitigating this vulnerability can be found on the Red Hat Customer Portal.

[Updated 17th March 2015: the original article stated this issue was fixed in OpenSSL in October 2014, however the fix was not public until January 2015.  We have updated the article to clarify this].

February 26, 2015

Three Types of Tokens

One of the most annoying administrative issues in Keystone is the MySQL backend to the token database filling up. While we have a flush script, it needs to be scheduled via cron. Here is a short overview of the types of tokens, why the backend is necessary, and what is being done to mitigate the problem.

DRAMATIS PERSONAE:

Amanda: The company’s OpenStack system admin

Manny: The IT manager.

ACT 1, SCENE 1: A small conference room. Manny has called a meeting with Amanda.

Manny: Hey Amanda, what are these Keystone tokens and why are they causing so many problems?

Amanda: Keystone tokens are an opaque blob used to allow caching of an authentication event and some subset of the authorization data associated with the user.

Manny: OK… back up. What does that mean?

Amanda: Authentication means that you prove that you are who you claim to be. For the most of OpenStack’s history, this has meant handing over a symmetric secret.

Manny: And a symmetric secret is …?

Amanda: A password.

Manny: OK, got it. I hand in my password to prove that I am me. What is the authorization data?

Amanda: In OpenStack, it is the username and the user’s roles.

Manny: All their roles?

Amanda: No, only for the scope of the token. A token can be scoped to a project, or to a domain, but in our setup only I ever need a domain-scoped token.

Manny: The domain is how I select between the customer list and our employees out of our LDAP server, right?

Amanda: Yep. There is another domain just for admin tasks, too. It has the service users for Nova and so on.

Manny: OK, so I get a token, and I can see all this stuff?

Amanda: Sort of. For most of the operations we do, you use the “openstack” command. That is the common command line, and it hides the fact that it is getting a token for most operations. But you can actually use a web tool called curl to go directly to the Keystone server and request a token. I do that for debugging sometimes. If you do that, you see the body of the token data in the response. But that is different from being able to read the token itself. The token is actually only 32 characters long. It is what is known as a UUID.

Manny (slowly): UUID? Universally Unique Identifier. Right?

Amanda: Right. It’s based on a long random number generated by the operating system. UUIDs are how most of OpenStack generates remote identifiers for VMs, images, volumes and so on.

Manny: Then the token doesn’t really hold all that data?

Amanda: It doesn’t. The token is just a…well, a token.

Manny: Like we used to have for the toll machines on route 93. Till we all got Easy pass!

Amanda: Yeah. Those tokens showed that you had paid for the trip. For OpenStack, a token is a remote reference to a subset of your user data. If you pass a token to Nova, it still has to go back to Keystone to validate the token. When it validates the token, it gets the data. However, our OpenStack deployment is so small, Nova and Keystone are on the same machine. Going back to Keystone does not require a “real” network round trip.

Manny: So now that we are planning on going to the multi-host setup, validating a token will require a network round trip?

Amanda: Actually, when we move to the multi-site, we are going to switch over to a different form of token that does not require a network round trip. And that is where the pain starts.

Manny: These are the PKI tokens you were talking about in the meeting?

Amanda: Yeah.

Manny: OK, I remember the term PKI was Public Key…something.

Amanda: The I is for infrastructure, but you remembered the important part.

Manny: Two keys, Public versus private: you encode with one and decode with the other.

Amanda: Yes. In this case, it is the token data that is encoded with private key, and decode with the public key.

Manny: I thought that made it huge. Do you really encode all the data?

Amanda: No, just a signature of the data. A Hash. This is called message signing, and it is used in a lot of places, basically to validate that the message is both unchanged and that it comes from the person you think it comes from.

Manny: OK, so…what is the pain.

Amanda: Two things. One, the tokens are bigger, much bigger, than a UUID. They have all of the validation data in them, including the service catalog. And our service catalog is growing in the multi-site deployment, so we’ve been warned that the tokens might get so big that they cause problems.

Manny: Let’s come back to that. What is the other problem?

Amanda: OK… since a token is remotely validated, there is the possibility that something has changed on Keystone, and the token is no longer valid. With our current system, Keystone knows this immediately and just dumps the token. So when Nova comes to validate it, it’s no longer valid and the user has to get another token. With remote validation, Nova has to periodically request a list of revoked tokens.

Manny: So either way Keystone needs to store data. What is the problem?

Amanda: Well, today we store our tokens in Memcached. It’s a simple key-value store, it’s local to the Keystone instance, and it just dumps old data that hasn’t been used in a while. With revocations, if you dump old data, you might lose the fact that a token was revoked.

Manny: Effectively un-revoking that token?

Amanda: Yep.

Manny: OK…so how do we deal with this?

Amanda: We have to move from storing tokens in Memcached to MySQL. According to the docs and upstream discussions, this can work, but you have to be careful to schedule a job to clean up the old tokens, or you can fill up the token database. Some of the larger sites have to run this job very frequently.

Manny: It’s a major source of pain?

Amanda: It can be. We don’t think we’ll be at that scale at the multisite launch, but it might happen as we grow.

Manny: OK, back to the token size thing, then. How do we deal with that?

Amanda: OK, when we go multi-site, we are going to have one of everything at each site: Keystone, Nova, Neutron, Glance. We have some jobs to synchronize the most essential things like the Glance images and the customer database, but the rest is going to be kept fairly separate. Each will be tagged as a region.

Manny: So the service catalog is going to be galactic, but will be sharded out by Keystone server?

Amanda: Sort of. We are going to actually make it possible to have the complete service catalog in each keystone server, but there is an option in Keystone to specify a subset of the catalog for a given project. So when you get a token, the service catalog will be scoped down to the project in question. We’ve done some estimates of size and we’ll be able to squeak by.

Manny: So, what about the multi-site contracts? Where a company can send their VMs to either a local or remote Nova?

Amanda: For now they will be separate projects. But for the future plans, where we are going to need to be able to put them in the same project, we are stuck.

Manny: Ugh. We can’t be the only people with this problem.

Amanda: Some people are moving back to UUID tokens, but there are issues both with replication of the token database and also with cross site network traffic. But there is some upstream work that sounds promising to mitigate that.

Manny: The lightweight thing?

Amanda: Yeah, lightweight tokens. It’s backing off the remotely validated aspect of Keystone tokens, but doesn’t need to store the tokens themselves. They use a scheme called authenticated encryption, which puts a minimal amount of info into the token, enough to recreate the whole authorization data. But only the Keystone server can expand that data. Then, all that needs to be persisted is the revocations.

Manny: Still?

Amanda: Yeah, and there are all the same issues there with flushing of data, but the scale of the data is much smaller. Password changes and removing roles from users are the ones we expect to see the most. We still need a cron job to flush those.

Manny: No silver bullet, eh? Still, how will that work for multi-site?

Amanda: Since the token is validated by cryptography, the different sites will need to synchronize the keys. There was a project called Kite that was part of Keystone, and then it wasn’t, and then it was again, but it is actually designed to solve this problem. So all of the Keystone servers will share their keys to validate tokens locally.

Manny: We’ll still need to synchronize the revocation data?

Amanda: No silver bullet.

Manny: Do we really need the revocation data? What if we just … didn’t revoke. Made the tokens short lived.

Amanda: It’s been proposed. The problem is that a lot of the workflows were built around the idea of long-lived tokens. Tokens went from being valid for 24 hours to 1 hour by default, and that broke some things. Some people have had to crank the time back up again. We think we might be able to get away with shorter tokens, but we need to test and see what it breaks.

Manny: Yeah, I could see HA having a problem with that… wait, 24 hours… how does Heat do what it needs to? It can restart a machine a month afterwards. Do we just hand over the passwords to Heat?

Amanda: Heh, it used to. But Heat uses a delegation mechanism called trusts. A user creates a trust, and that effectively says that Heat can do something on the user’s behalf, but Heat has to get its own token first. It first proves that it is Heat, and then it uses the trust to get a token on the user’s behalf.

Manny: So…trusts should be used everywhere?

Amanda: Something like trusts, but more lightweight. Trusts are deliberate delegation mechanisms, and are set up on a per-user basis. To really scale, it would have to be something where the admin sets up the delegation agreement as a template. If that were the case, then these long-lived workflows would not need to use the same token.

Manny: And we could get rid of the revocation events. OK, that is time, and I have a customer meeting. Thanks.

Amanda: No problem.

EXIT
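For reference, the cleanup job Amanda mentions is typically just the keystone-manage token_flush command scheduled from cron. A minimal sketch, assuming a keystone system user and the default log location (paths and frequency will vary by deployment):

# /etc/cron.d/keystone-token-flush: flush expired tokens every hour
0 * * * * keystone /usr/bin/keystone-manage token_flush >> /var/log/keystone/token-flush.log 2>&1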

The Sax Doctor
Distinctive ring that holds the bell of a Selmer Paris Saxophone to the main body of the horn.

Ring of a Selmer Paris Saxophone

Dropped my sax off at Emilio Lyons’ house and workshop. My folks bought it for me from him at Rayburn Music in Boston back when I was a high school freshman. I still remember him pointing to the sticker on it that indicated “This is my work.”

As someone who loves both the saxophone and working with my hands, I have to admit I was looking forward to meeting him. I was even a little nervous. He has a great reputation. Was he going to chastise me for the state of my horn? It hadn’t been serviced in…way too long. I was a little worried that the lack of changing the oil on the rods would have worn down some of the metal connections.

I spent the best forty minutes in his workshop as he explained what the horn needed: an overhaul, which meant pulling all the pads off and replacing them. I expected this.

I showed him how the bottom thumb rest didn’t fit my hand right…it was the stock Selmer piece, and it had always cut into my thumb a little. He had another from a different, older horn, that was brass. He shaped it with a hammer and .. it felt good. Very good. He gave me that piece and kept mine, in case it would work for someone else.

There was another minor issue with the left thumb catching a rough spot near the thumb rest and he covered it with some epoxy. Not magic, but a magic touch none-the-less.

To say he told me it needed an overhaul doesn’t do it justice. He explained how he would do it, step by step, especially the task of setting the pads. I know from elsewhere that this is real artistry, and takes years of experience to get right. He talked about taking the springs off and leaving the pads resting on the cups. I asked why he took the springs off.

It’s about touch. You shouldn’t work hard to cover the holes of the saxophone; it should be light. Emilio understands how the top saxophonists in the world interact with their horns. He talked about advising Sonny Rollins to use a light touch, telling him how he would like playing the horn better. Sonny tried for a week and then called back to apologize: he just couldn’t play that way. We talked about how other people played: Joe Lovano and George Garzone, heavy; David Sanborn, very light.

The corks and felts all need to be replaced. He has a new technique I had heard about, using little black rubber nubs. He showed me how quiet they were. “I never liked the base.” I think he meant the base set of padding that came with the Selmers.

He assured me that the metal was fine. This is a good horn. A good number.

He quoted me the price for the overhaul. I told him it was less than other people had quoted me.

He didn’t take any contact info; he told me to contact him. I get my horn back in two weeks. I’ll make do with playing my beat-up student alto horn and EWI.

I’m really looking forward to getting back my Selmer Mark VI with the overhaul from Emilio. Will it play like new? I don’t know, the horn was at least 6 years old by the time I was born, and 20 when I first played it.

But I suspect it will play better than new.

February 25, 2015

Common Criteria

What is Common Criteria?

Common Criteria (CC) is an international standard (ISO/IEC 15408) for certifying computer security software. Using Protection Profiles, computer systems can be secured to certain levels that meet requirements laid out by the Common Criteria. Established by governments, the Common Criteria treaty agreement has been signed by 26 countries, and each country recognizes the others’ certifications.

In the U.S., Common Criteria is handled by the National Information Assurance Partnership (NIAP). Other countries have their own CC authorities. Each authority certifies CC labs, which do the actual work of evaluating products. Once certified by the authority, based on the evidence from the lab and the vendor, that certification is recognized globally.

A certification is given a particular assurance level which, roughly speaking, represents the strength of the certification: confidence is higher in an EAL4 certification than in an EAL2 one. Attention is usually given to the assurance level rather than to what, specifically, is being assured, which is defined by the protection profiles.
A CC certification represents a very specific set of software and hardware configurations. Software versions and hardware models and versions are important, as differences will break the certification.

How does the Common Criteria work?

The Common Criteria authority in each country creates a set of expectations for particular kinds of software: operating systems, firewalls, and so on. Those expectations are called Protection Profiles. Vendors, like Red Hat, then work with a third-party lab to document how we meet the Protection Profile. A Target of Evaluation (TOE) is created, which is all the specific hardware and software that’s being evaluated. Months are then spent in the lab getting the package ready for submission. This state is known as “in evaluation”.
Once the package is complete, it is submitted to the relevant authority. Once the authority reviews and approves the package, the product becomes “Common Criteria certified” for that target.

Why does Red Hat need or want Common Criteria?

Common Criteria is mandatory for software used within the US Government and other countries’ government systems. Other industries in the United States may also require Common Criteria. Because it is important to our customers, Red Hat spends the time and energy to meet these standards.

What Red Hat products are Common Criteria certified?

Currently, Red Hat Enterprise Linux (RHEL) 5.x and 6.x meet Common Criteria in various versions. Also, Red Hat’s JBoss Enterprise Application Platform 5 is certified in various versions. It should be noted that while Fedora and CentOS operating systems are related to RHEL, they are not CC certified. The Common Criteria Portal provides information on what specific versions of a product are certified and to what level. Red Hat also provides a listing of all certifications and accreditation of our products.

Are minor releases of RHEL certified?

When a minor release, a bug fix, or a security issue arises, most customers will want to patch their systems to remain secure against the latest threats. Technically, this means falling out of compliance. For most systems, the agency’s Certifying Authority (CA) requires these updates as a matter of basic security measures. It is already understood that this breaks CC.

Connecting Common Criteria evaluation to a specific minor version is difficult, at best, for a couple of reasons:

First, the certifications will never line up with a particular minor version exactly. A RHEL minor version is, after all, just a convenient waypoint for what is actually a constant stream of updates. The CC target, for example, began with RHEL 6.2, but the evaluated configuration will inevitably end up having packages updated from their 6.2 versions. In the case of FIPS, the certifications aren’t tied to a RHEL version at all, but to the version of the certified package. So OpenSSH server version 5.3p1-70.el6 is certified, no matter which minor RHEL version you happen to be using.

This leads to the second reason. Customers have, in the past, forced programs to stay on hopelessly outdated and unpatched systems only because they want to see /etc/redhat-release match the CC documentation exactly. Policies like this ignore the possibility that a certified package could exist in RHEL 6.2, 6.3, 6.4, etc., and the likelihood that subsequent security patches may have been made to the certified package. So we’re discouraging customers from saying “you must use version X.” After all, that’s not how CC was designed to work. We think CC should be treated as a starting point or baseline on which a program can conduct a sensible patching and errata strategy.

Can I use a product if it’s “in evaluation”?

Under NSTISSP #11, government customers must prefer products that have been certified using a US-approved protection profile. Failing that, you can use something certified under another profile. Failing that, you must ensure that the product is in evaluation.

Red Hat has successfully completed many Common Criteria processes so “in evaluation” is less uncertain than it might sound. When a product is “in evaluation”, the confidence level is high that certification will be awarded. We work with our customers and their CAs and DAAs to help provide information they need to make a decision on C&A packages that are up for review.

I’m worried about the timing of the certification. I need to deploy today!

Red Hat makes it as easy as possible for customers to use the version of Red Hat Enterprise Linux that they’re comfortable with. A subscription lets you use any version of the product as long as you have a current subscription. So you can buy a subscription today, deploy a currently certified version, and move to a more recent version once it’s certified–at no additional cost.

Why can’t I find your certification on the NIAP website?

Red Hat Enterprise Linux 6 was certified by BSI under OS Protection Profile at EAL4+. This is equivalent to certifying under NIAP under the Common Criteria mutual recognition treaties. More information on mutual recognition can be found on the CCRA web site. That site includes a list of the member countries that recognize one another’s evaluations.

How can I keep my CC-configured system patched?

A security plugin for the yum update tool allows customers to only install patches that are security fixes. This allows a system to be updated for security issues while not allowing bug fixes or enhancements to be installed. This makes for a more stable system that also meets security update requirements.

To install the security plugin, from a root-authenticated prompt:

# yum install yum-plugin-security
# yum updateinfo
# yum update --security

Once security updates have been added to the system, the CC-evaluated configuration has changed and the system is no longer certified.  This is the recommended way of building a system: starting with CC and then patching in accordance with DISA regulations. Consulting the CA and DAA during the system’s C&A process will help establish guidelines and expectations.

You didn’t answer all my questions. Where do I go for more help?

Red Hat Support is available anytime a customer, or potential customer, has a question about a Red Hat product.

Additional Reading

February 24, 2015

What Can We Do About Superfish?

Perhaps the greatest question about Superfish is what can we do about it. The first response is to throw technology at it.

The challenge here is that the technology used by Superfish has legitimate uses:

  • The core Superfish application is interesting – using image analysis to deconstruct a product image and search for similar products is actually quite ingenious! I have no reservations about this if it is an application a user consciously selects and installs and deliberately uses.
  • Changing the html data returned by a web site has many uses – for example, ad blocking and script blocking tools change the web site. Even deleting tracking cookies can be considered changing the web site! Having said that, changing the contents of a web site is a very slippery slope. And I have real problems with inserting ads in a web site or changing the content of the web site without making it extremely clear this is occurring.
  • Reading the data being exchanged with other sites is needed for firewalls and other security products.
  • Creating your own certificates is a part of many applications. However, I can’t think of many cases where it is appropriate to install a root certificate – this is powerful and dangerous.
  • Even decrypting and re-encrypting web traffic has its place in proxies, especially in corporate environments.

The real problem with Superfish is how the combination of things comes together and is used. There is also the quality of implementation – many reports indicate poor implementation practices, such as a single insecure password for the entire root certificate infrastructure. It doesn’t matter what encryption algorithm you are using if your master password is the name of your company!

Attempting a straight technology fix will lead to “throwing the baby out with the bath water” for several valuable technologies. And a technical fix for this specific case won’t stop the next one.

The underlying issue is how these technologies are implemented and used. Attempting to fix this through technology is doomed to failure and will likely make things worse.

Yes, there is a place for technology improvements. We should be using DNSSEC to make sure DNS information is valid. Stronger ways of validating certificate authenticity would be valuable – someone suggested DANE in one of the comments. DANE involves including the SSL certificate in the DNS records for a domain. In combination with DNSSEC it gives you higher confidence that you are talking to the site you think you are, using the right SSL certificate. The issue here is that it requires companies to include this information in their DNS records.
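For the curious, a DANE record is just a TLSA record in DNS and can be looked up today with any DNSSEC-aware resolver. A quick sketch (example.com is a placeholder, and most sites do not yet publish TLSA records):

# Query the TLSA record DANE uses for HTTPS on port 443
dig +dnssec _443._tcp.example.com TLSA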

The underlying questions involve trust and law as well as technology. To function, you need to be able to trust people – in this case Lenovo – to do the right thing. It is clear that many people feel that Lenovo has violated their trust. It is appropriate to hold Lenovo responsible for this.

The other avenue is legal. We have laws to regulate behavior and to hold people and companies responsible for their actions. Violating these regulations, regardless of the technology used, can and should be addressed through the legal system.

At the end of the day, the key issues are trust, transparency, choice, and following the law. When someone violates these they should expect to be held accountable and to pay a price in the market.


February 23, 2015

Samba vulnerability (CVE-2015-0240)

Samba is the most commonly used Windows interoperability suite of programs, used by Linux and Unix systems. It uses the SMB/CIFS protocol to provide secure, stable, and fast file and print services. It can also seamlessly integrate with Active Directory environments and can function as a domain controller as well as a domain member (a legacy NT4-style domain controller is supported, but the Active Directory domain controller feature of Samba 4 is not supported yet).

CVE-2015-0240 is a security flaw in the smbd file server daemon. It can be exploited by a malicious Samba client, by sending specially-crafted packets to the Samba server. No authentication is required to exploit this flaw. It can result in remotely controlled execution of arbitrary code as root.

We believe code execution is possible but we’ve not yet seen any working reproducers that would allow this.

This flaw arises because an uninitialized pointer is passed to the TALLOC_FREE() function. It can be exploited by calling the ServerPasswordSet RPC API on the NetLogon endpoint, using a NULL session over IPC.
Note: The code snippets shown below are from samba-3.6 as shipped with Red Hat Enterprise Linux 6. (All versions of samba >= 3.5 are affected by this flaw.)
In the _netr_ServerPasswordSet() function, creds is defined as a pointer to a structure. It is not initialized.

1203 NTSTATUS _netr_ServerPasswordSet(struct pipes_struct *p,   
1204                          struct netr_ServerPasswordSet *r)   
1205 {   
1206         NTSTATUS status = NT_STATUS_OK;   
1207         int i;  
1208         struct netlogon_creds_CredentialState *creds;

Later, the netr_creds_server_step_check() function is called with a pointer to creds:

1213  status = netr_creds_server_step_check(p, p->mem_ctx,   
1214                               r->in.computer_name,   
1215                               r->in.credential,   
1216                               r->out.return_authenticator,   
1217                               &creds);

If the netr_creds_server_step_check() function fails, it returns and creds is still not initialized. Later in the _netr_ServerPasswordSet() function, creds is freed using the TALLOC_FREE() function, which results in an uninitialized-pointer free flaw.
It may be possible to control the value of creds by sending a number of specially-crafted packets. An attacker could then use the destructor pointer called by TALLOC_FREE() to execute arbitrary code.

As mentioned above, this flaw can only be triggered if netr_creds_server_step_check() fails. This is dependent on the version of Samba used.

In Samba 4.1 and above, this crash can only be triggered after setting “server schannel = yes” in the server configuration. This is due to commit adbe6cba005a2060b0f641e91b500574f4637a36, which introduces NULL initialization into the most common code path. It is still possible to trigger an early return with a memory allocation failure, but that is less likely to occur, so this issue is more difficult to exploit. The Red Hat Product Security team has rated this flaw as having important impact on Red Hat Enterprise Linux 7.

In older versions of Samba (samba-3.6 as shipped with Red Hat Enterprise Linux 5 and 6, and samba-4.0 as shipped with Red Hat Enterprise Linux 6) the above-mentioned commit does not exist. An attacker could call the _netr_ServerPasswordSet() function with a NULLed buffer, which could trigger this flaw. Red Hat Product Security has rated this flaw as having critical impact on all other versions of the samba package shipped by Red Hat.

Lastly, the version of Samba 4.0 shipped with Red Hat Enterprise Linux 6.2 EUS is based on an alpha release of Samba 4, which lacked the password change functionality and thus the vulnerability. The same is true for the version of Samba 3.0 shipped with Red Hat Enterprise Linux 4 and 5.

Red Hat has issued security advisories to fix this flaw and instructions for applying the fix are available on the knowledgebase.  This flaw is also fixed in Fedora 20 and Fedora 21.
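To get a rough idea of your own exposure, you can check which samba packages are installed and what the effective value of “server schannel” is. This is only a quick sketch; the advisories above remain the authoritative guidance:

# Which samba packages (and versions) are installed?
rpm -qa 'samba*'

# Show the effective "server schannel" setting from the running configuration
testparm -sv 2>/dev/null | grep -i 'server schannel'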

February 21, 2015

Superfish – Man-in-the-Middle Adware

Superfish has been getting a lot of attention – the Forbes article is one of the better overviews.

Instead of jumping in and covering the details of Superfish, let’s look at how it might work in the real world.

Let’s say that you are looking for a watch and you visit Fred’s Fine Watches. Every time you want to look at a watch, someone grabs the key to the cabinet from Fred, uses a magic key creator to create a new key, opens the cabinet, grabs the watch from Fred, studies the watch, looks for “similar” watches, and jams advertising fliers for these other watches in your face – right in the middle of Fred’s Fine Watches! Even worse, they leave the key in the lock, raising the possibility that others could use it. Further, if you decide to buy a watch from Fred, they grab your credit card, read it, and then hand it to Fred.

After leaving Fred’s Fine Watches you visit your bank. You stop by your doctor’s office. You visit the DMV for a drivers license renewal. And, since this article is written in February, you visit your accountant about taxes. Someone now has all this information. They claim they aren’t doing anything with it, but there is no particular reason to trust them.

How does all this work? Superfish is a man-in-the-middle attack that destroys the protection offered by SSL (Secure Sockets Layer). It consists of three basic components: the Superfish adware program, a new SSL Root Certificate inserted into the Windows Certificate Store, and a Certificate Authority program that can issue new certificates.

SSL serves two purposes: encryption and authentication. SSL works by using a certificate that includes a public encryption key that is used to negotiate a unique encryption key for each session. This encryption key is then used to uniquely encrypt all traffic for that session. There are two types of SSL certificates: public and private.

Public certificates are signed. This means that they can be verified by your browser as having been created from another certificate – you have at least some assurance of where the certificate came from. That certificate can then be verified as having been created from another certificate. This can continue indefinitely until you reach the top of the certificate tree, where you have a master or root certificate. These root certificates can’t be directly verified and must be trusted.

Root certificates are connected to Internet domains. For example, Google has the google.com root certificate, and is the only one who can create a signed certificate for mail.google.com, maps.google.com, etc.

Bills Browser Certificates, Inc., can only create signed certificates for billsbrowsercertificates.com. The details are a bit more complex, but this is the general idea – signed certificates can be traced back to a root certificate. If the owner of that root certificate is cautious, you can have a reasonable level of trust that the certificate is what it claims to be.

Your browser or OS comes with a (relatively small) list of root certificates that are considered trusted. Considerable effort goes into managing these root certificates and ensuring that they are good. Creation of new signed certificates based on these root certificates is tightly controlled by whoever owns the root certificate.

Certificate signing is a rather advanced topic. Let’s summarize it by saying that the mathematics behind certificate signing is sound, that implementations may be strong or weak, and that there are ways of over-riding the implementations.

Private certificates are unsigned. They are the same as public certificates, work in exactly the same way, but can’t be verified like public certificates can. Private certificates are widely used and are a vital part of communications infrastructure.

According to reports, Lenovo added a new Superfish root certificate to the Microsoft Certificate Store on certain systems. This means that Superfish is trusted by the system. Since Superfish created this certificate, they had all the information they needed to create new signed certificates, which they did by including a certificate authority program that creates new certificates signed by the Superfish root certificate – on your system, while you are browsing. These certificates are completely normal, and there is nothing unusual about them – except the way they were created.

Again, according to reports, Superfish hijacked web sessions. Marc Rogers shows an example where Superfish has created a new SSL certificate for Bank of America. The way it works is that Superfish uses this certificate to communicate with the browser and the user. The user sees an https connection to Bank of America, with no warnings – there is, in fact, a secure encrypted session in place. Unfortunately, this connection is to Superfish. Superfish then uses the real Bank of America SSL certificate to communicate with Bank of America. This is a perfectly normal session, and BOA has no idea that anything is going on.

To recap, the user enters their bank ID and password to log in to the BOA site. This information is encrypted – and sent to Superfish. Superfish decrypts the information and then re-encrypts it to send to BOA using the real BOA SSL certificate. Going the other way, Superfish receives information from BOA, decrypts it, reads it, re-encrypts it with the Superfish BOA certificate, and sends it back to you.

Superfish apparently creates a new SSL certificate for each site you visit. The only reason that all this works is that they were able to add a new root certificate to the certificate store – without this master certificate in the trusted certificate store they would not be able to create new trusted certificates.
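One way to spot this kind of interception from the command line is to look at who actually issued the certificate your machine is being shown. A sketch using the Bank of America example from above (any HTTPS site works); an unexpected issuer such as Superfish would be a red flag:

openssl s_client -connect www.bankofamerica.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -issuer -subject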

Superfish can also change the web page you receive – this is the real purpose of Superfish. In normal operation Superfish will modify the web page coming back from the web site you are visiting by inserting new ads. Think about it – you have no idea of what the original web site sent, only what Superfish has decided to show you!

Superfish is sitting in the middle of all your web sessions. It reads everything you send, sends arbitrary information to an external server (necessary for the image analysis it claims to perform, but can be used for anything), forges encryption, and changes the results you get back.

The real threat of Superfish is that it contains multiple attack vectors and, by virtue of the root certificate, has been granted high privileges. Further, the private key Superfish is using for their root certificate has been discovered, meaning that other third parties can create new signed certificates using the Superfish root certificate. There is no way to do secure browsing on a system with Superfish installed. And no way to trust the results of any browsing you do, secure or not.


February 19, 2015

RC4 prohibited

Originally posted on securitypitfalls:

After nearly half a year of work, the Internet Engineering Task Force (IETF) Request for Comments (RFC) 7465 is published.

What it does, in a nutshell, is disallow the use of any kind of RC4 ciphersuites, in effect making all servers or clients that use them non-standards-compliant.

View original
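As a quick local illustration of what the RFC forbids, you can list the RC4 suites your OpenSSL build still knows about and build a cipher string that excludes them (a sketch; suite names vary between OpenSSL versions):

# Which RC4 ciphersuites does this build support?
openssl ciphers -v 'RC4'

# A cipher string that drops RC4 entirely
openssl ciphers -v 'DEFAULT:!RC4'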


February 17, 2015

SCAP Workbench

SCAP Workbench allows you to select SCAP benchmarks (content) to use, tailor an SCAP scan, run an SCAP scan on a local or remote system, and to view the results of a scan. The SCAP Workbench page notes:

The main goal of this application is to lower the initial barrier of using SCAP. Therefore, the scope is very narrow – scap-workbench only scans a single machine and only with XCCDF/SDS (no direct OVAL evaluation). The assumption is that this is enough for users who want to scan a few machines; users with a huge amount of machines to scan will just use scap-workbench to test or hand-tune their content before deploying it with more advanced (and harder to use) tools like spacewalk.

SCAP Workbench is designed to hide the complexity of the SCAP tools and CLI. I can vouch for the ease of use of SCAP Workbench – I’ve been using it to run SCAP and find it the easiest and most flexible way to perform SCAP scans.

SCAP Workbench is an excellent tool for tailoring SCAP benchmarks. SCAP Workbench allows you to select which Benchmark to use, and then displays a list of all the rules in the Benchmark, allowing you to select which rules to evaluate.

SCAP Workbench Tailoring
In addition, SCAP Workbench allows you to modify values in the Benchmark. In the screenshot above you see a list of rules. The Set Password Expiration Parameters rule is selected and has been expanded so that we can see the various components of this rule. We have selected the minimum password length rule, and can see the details of this rule on the right side of the window.

We see the title of this rule, the unique identifier for the rule, and the type of this rule. Since this is an xccdf:Value rule, it has an explicit value that will be checked. Since this rule is checking the minimum password length, the minimum password length must be set to this value or larger.

We see that the minimum password length in the Benchmark is 12. We can change this to another value, such as 8 characters. If we change the minimum password length check, the change will be saved in the Tailoring File – the Benchmark is not modified.

After selecting the SCAP rules you wish to evaluate and modifying values as needed, you run the scan by clicking the Scan button. The SCAP scan runs and the results are displayed in SCAP Workbench. You can also see the full SCAP report by clicking the Show Report button, or save the full report by clicking Save Results.
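For reference, a tailoring file saved from SCAP Workbench can also be fed to the oscap command-line scanner later on. A minimal sketch with hypothetical file and profile names (your datastream, profile ID, and tailoring file will differ):

oscap xccdf eval \
    --tailoring-file my-tailoring.xml \
    --profile xccdf_org.example_profile_custom \
    --results results.xml --report report.html \
    ssg-rhel7-ds.xml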


February 14, 2015

Adding an LDAP backed domain to a Packstack install

I’ve been meaning to put all the steps together to do this for a while:

Got an IPA server running on CentOS 7.
Got a Packstack all-in-one install on CentOS 7. I registered this host as a FreeIPA client, though that is not strictly required.

Converted the Horizon install to be domain aware by editing /etc/openstack-dashboard/local_settings

OPENSTACK_API_VERSIONS = {
    "identity": 3
}
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = 'Default'

And restarting HTTPD.

sudo yum install python-openstackclient

The keystonerc_admin file is set for V2.0 of the identity API. To make it work with the v3 API, cp keystonerc_admin keystonerc_admin_v3 and edit:

export OS_AUTH_URL=http://10.10.10.25:5000/v3/
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default

And confirm:

$ openstack domain list
+----------------------------------+------------+---------+----------------------------------------------------------------------+
| ID                               | Name       | Enabled | Description                                                          |
+----------------------------------+------------+---------+----------------------------------------------------------------------+
| default                          | Default    | True    | Owns users and tenants (i.e. projects) available on Identity API v2. |
+----------------------------------+------------+---------+----------------------------------------------------------------------+

Add a domain for the LDAP backed domain like this:

$ openstack domain create YOUNGLOGIC
+---------+----------------------------------+
| Field   | Value                            |
+---------+----------------------------------+
| enabled | True                             |
| id      | a9569e236912496f9f001e73fc978baa |
| name    | YOUNGLOGIC                       |
+---------+----------------------------------+

To enable domain-specific backends, edit /etc/keystone/keystone.conf like this:

[identity]
domain_specific_drivers_enabled=true
domain_config_dir=/etc/keystone/domains

Right now this domain is backed by SQL for both Identity and Assignment. To convert it to LDAP for Identity, create a file in /etc/keystone/domains. The directory and file need to be owned by the keystone user:

Here is my LDAP specific file /etc/keystone/domains/keystone.YOUNGLOGIC.conf

# The domain-specific configuration file for the YOUNGLOGIC domain

[ldap]
url=ldap://ipa.younglogic.net
user_tree_dn=cn=users,cn=accounts,dc=younglogic,dc=net
user_id_attribute=uid
user_name_attribute=uid
group_tree_dn=cn=groups,cn=accounts,dc=younglogic,dc=net


[identity]
driver = keystone.identity.backends.ldap.Identity

Restart Keystone. Juno RDO has Keystone running in Eventlet still, so use:

sudo systemctl restart openstack-keystone.service

Now, grant the admin user an admin role on the new domain:

openstack role add --domain YOUNGLOGIC --user  admin admin
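At this point, a quick way to confirm that the LDAP backend is actually wired up is to list the users in the new domain. This assumes the keystonerc_admin_v3 environment from above and a python-openstackclient new enough to support the --domain filter:

openstack user list --domain YOUNGLOGIC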

Like most things, I did my initial test using curl, with the following request body saved as token-request-edmund.json:

{
    "auth": {
        "identity": {
            "methods": [
                "password"
            ],
            "password": {
                "user": {
                    "domain": {
                        "name": "YOUNGLOGIC"
                    },
                    "name": "edmund",
                    "password": "nottellingyou"
                }
            }
        }
    }
}
curl -si -d @token-request-edmund.json -H "Content-type: application/json" $OS_AUTH_URL/auth/tokens

That requests an unscoped token. It has the side effect of populating the id_mapping entries for the user and group IDs of the user that connects.

You can then assign roles to the user like this:

openstack role add --project demo --user  9417d7b6e7d53d719106b192251e7b9560577b9c1709463a19feffdd30ea7513 _member_

I have to admit I cheated: I looked at the database:

$   echo "select * from id_mapping;"  |  mysql keystone  --user keystone_admin --password=stillnottelling 
public_id	domain_id	local_id	entity_type
7c3448d7dc5f51861444a7f974bc63c77a2685a057d8545341a1fbcafd754b96	a9569e236912496f9f001e73fc978baa	ayoung	user
9417d7b6e7d53d719106b192251e7b9560577b9c1709463a19feffdd30ea7513	a9569e236912496f9f001e73fc978baa	edmund	user
862caa65329a761556ded5e6317f3f0cbfcab839f01340b334bdd2be4e54f1c4	a9569e236912496f9f001e73fc978baa	ipausers	group
b14ecd0d21ffb1485261ffce9c95b2cf8fec3d8194bfcda4257bb0ac74f80b0e	a9569e236912496f9f001e73fc978baa	peter	user
4e2abec872c7559279612abd2834ba1a3919aad9c01035cb948a91241a831830	a9569e236912496f9f001e73fc978baa	wheel	group

But with that knowledge I can do:

openstack role add --project demo --group  862caa65329a761556ded5e6317f3f0cbfcab839f01340b334bdd2be4e54f1c4 _member_

And now every user in the ipausers group gets put in the demo project. This works upon first login to Horizon.

UPDATE: See this post for a necessary configuration change.

February 12, 2015

Debugging OpenStack with rpdb

OpenStack has many different code bases.  Figuring out how to run them in a debugger can be maddening, especially if you are trying to deal with Eventlet and threading issues.  Adding HTTPD into the mix, as we did for Keystone, makes it even trickier.  Here’s how I’ve been handling things using the remote python debugger (rpdb).

rpdb is a remote python debugging tool.  To use it, you edit the python source and add the line

import  rpdb;  rpdb.set_trace()

and execute your program.  The first time that code gets hit, the program will stop on that line, and then open up a socket. You can connect to the socket with:

telnet localhost 4444

You can replace localhost with a ip address or the hostname.

In order to use this from within the unit tests run from tox on the Keystone client, for example, you first need to get rpdb into the venv:

. .tox/py27/bin/activate
pip install rpdb
deactivate

Note that if you put the rpdb line into code that is executed by multiple tests, the second and subsequent times tests hit that line of code, rpdb will report an error binding to the socket because the address is already in use.

I use emacs, and to run the code such that it matches up with the source in my local git repository, I use:

Meta-X pdb

and then I run pdb like this:

telnet localhost 4444

and gud is happy to treat it like a local pdb session.

February 04, 2015

Life-cycle of a Security Vulnerability

Security vulnerabilities, like most things, go through a life cycle from discovery to installation of a fix on an affected system. Red Hat devotes many hours a day to combing through code, researching vulnerabilities, working with the community, and testing fixes–often before customers even know a problem exists.

Discovery

When a vulnerability is discovered, Red Hat engineers go to work verifying the vulnerability and rating it to determine its overall impact on a system. This is a critically important step, as misidentifying the risk could lead to a partial fix and leave systems vulnerable to a variation of the original problem. The rating also allows prioritization of fixes, so that the issues posing the greatest risk to customers are handled first and issues of low or minimal risk are not passed on to customers, who would also need to invest time validating new packages for their environment.

Research

Many times a vulnerability is discovered outside of Red Hat’s domain. This means that the vulnerability must be researched and reproduced in-house to fully understand the risk involved. Sometimes reproducing a vulnerability leads to discovering other vulnerabilities which need fixes or re-engineering.

Notification

When a vulnerability has been discovered, Red Hat works with upstream developers to develop and ship a patch that fixes the problem. A CVE assignment will be made that records the vulnerability and links the problem with the fix among all applicable implementations. Sometimes the vulnerability is embedded in other software and that host software would acquire the CVE. This CVE is also used by other vendors or projects that ship the same package with the same code—CVEs assigned to software Red Hat ships are not necessarily Red Hat specific.

Patch development

One of the most difficult parts of the process is the development of the fix. This fix must remedy the vulnerability completely while not introducing any other problems along the way. Red Hat reviews all patches to verify that they fix the underlying vulnerability and also checks for regressions. Sometimes Red Hat must come up with its own patches to fix a vulnerability. When this happens, we fix not only our shipped software, but also provide the fix back upstream for possible inclusion into the master software repository. In other cases, the upstream patch is not applicable because the version of the software we ship is older; in these cases Red Hat has to backport the patch to the version we do ship. This allows us to limit changes exclusively to those required to fix the flaw, without introducing possible regressions or API/ABI changes that could have an impact on our customers.

Quality assurance

As important as patch development, Red Hat’s QE teams validate the vulnerability fix and also check for regressions. This step can take a significant amount of time and effort depending on the package, but any potential delays introduced by the quality assurance effort are worth it, as they significantly reduce the risk that the security fix is incomplete or introduces other regressions or incompatibilities. Red Hat takes the delivery of security fixes seriously and we want to ensure that we get it right the first time, as the overhead of re-delivering a fix, not to mention the additional effort by customers to re-validate a secondary fix, can be costly.

Documentation

To make understanding flaws easier, Red Hat spends time to document what the flaw is and what it can do. This documentation is used to describe flaws in the errata that is released and in our public CVE pages. Having descriptions of issues that are easier to understand than developer comments in patches is important to customers who want to know what the flaw is and what it can do. This allows customers to properly assess the impact of the issue to their own environment. A single flaw may have much different exposure and impact to different customers and different environments, and properly-described issues allow customers to make appropriate decisions on how, when, and if the fix will be deployed in their own environment.

Patch shipment

Once a fix has made it through the engineering and verification processes, it is time to send it to the customers. At the same time the fixes are made available in the repositories, a Red Hat Security Advisory (RHSA) is published and customers are notified using the rhsa-announce list. The RHSA will provide information on the vulnerability and will point to errata that more thoroughly explain the fix.

Customers will begin to see updates available on their system almost immediately.

Follow-on support

Sometimes questions arise when security vulnerabilities are made public. Red Hat customers have access to our technical support team, which helps support all Red Hat products. Not only can they answer questions, but they can also help customers apply fixes.

Conclusion

Handling security issues properly is a complex process that involves a number of people and steps. Ensuring these steps are dealt with correctly and all issues are properly prioritized is one of the things Red Hat provides with each subscription. The level of expertise required to properly handle security issues can be quite high. Red Hat has a team of talented individuals who worry about these things so you don’t have to.

January 29, 2015

The Travelling Saxophone

The Saxophone is a harsh mistress. She demands attention every day. A musician friend once quoted to me: “Skip a day and you know. Skip two days and your friends know. Skip three days and everyone knows.” That quote keeps me practising nightly.

Playing Sax by the Seine

By the Seine: Photo by Jamie Lennox

My work on OpenStack has me travelling a bit more than I have had to for other software projects. While companies have been willing to send me to conferences in the past, only OpenStack has had me travelling four times a year: two for the Summit and two for mid-cycle meetups of the Keystone team. Keeping on a practice schedule while travelling is tough, sometimes impossible. But the nature of the places where I am visiting makes me want to bring along my horn and play there.

The Kilo OpenStack summit was in Paris in November. The thought of playing in Paris took residence in my imagination and wouldn’t leave. I brought the horn along, but had trouble finding a place and a time to play. The third night, I decided that I would skip the scheduled fun and go play in the middle of the Arc de Triomphe, a couple blocks away from my hotel. There is a walkway under the traffic circle with stairs that lead up to the plaza. However, a couple of police stationed at the foot of the stairs made me wonder if playing there would be an issue, and I continued on. As I approached the far end of the walkway, I heard an accordion.

The accordion player spoke no English. I spoke less French. However, his manner indicated he was overjoyed to let me play along with him.

I shut my case to indicate that tips would still be going in his box. I was certainly not playing for the money.

He struck up a tune, and I followed along, improvising. It worked. He next said the single word “Tango” and I kick-started one off with a growl. Another tune, and then he suggested “La Vie en Rose” and I shrugged. He seemed astounded that I didn’t really know the tune. This would be the equivalent of being in New Orleans and not knowing “When the Saints Go Marching In.” I faked it, but I think his enthusiasm waned, and I packed up afterwards and headed back to the hotel.

I got one other chance to play on that trip. Saturday, prior to heading to the airport, Jamie Lennox and I toured a portion of the city, near the Eiffel tower. Again, I wasn’t playing for the money, and I didn’t want to gather crowds. So we headed down to the banks of the Seine and I played near a bridge, enjoying the acoustics of the stone.

The Keystone midcycle happened in January, and I brought my Sax again. This time, I played each night, usually in the courtyard of the hotel or down along the Riverwalk. The Keystone gang joined me one night, after dinner, and it was gratifying to play for people I knew. On the walk back to the hotel, Dolph and Dave Stanek (maybe others) were overly interested in their cell phones. It turned out they were setting up www.opensax.com.

Playing by the Riverwalk

Playing by the Riverwalk: Photo by Dolph Matthews

January 28, 2015

Security improvements in Red Hat Enterprise Linux 7

Each new release of Red Hat® Enterprise Linux® is not only built on top of the previous version, but a large number of its components incorporate development from the Fedora distribution. For Red Hat Enterprise Linux 7, most components are aligned with Fedora 19, with select components coming from Fedora 20. This means that users benefit from new development in Fedora, such as firewalld, which is described below. While preparing the next release of Red Hat Enterprise Linux, we review components for their readiness for an enterprise-class distribution. We also make sure that we address known vulnerabilities before the initial release, and we review new components to check that they meet our standards regarding security and general suitability for enterprise use.

One of the first things that happens is a review of the material going into a new version of Red Hat Enterprise Linux. Each release includes new packages that Red Hat has never shipped before and anything that has never been shipped in a Red Hat product receives a security review. We look for various problems – from security bugs in the actual software to packaging issues. It’s even possible that some packages won’t make the cut if they prove to have issues that cannot be resolved in a manner we decide is acceptable. It’s also possible that a package was once included as a dependency or feature that is no longer planned for the release. Rather than leave those in the release, we do our best to remove the unneeded packages as they could result in security problems later down the road.

Previously fixed security issues are also reviewed to ensure nothing has been missed since the last version. While uncommon, it is possible that a security fix didn’t make it upstream, or was somehow dropped from a package at some point during the move between major releases. We spend time reviewing these to ensure nothing important was missed that could create problems later.

Red Hat Product Security also adds several new security features in order to better protect the system.

Before its 2011 revision, the C++ language definition was ambiguous as to what should happen if an integer overflow occurs during the size computation of an array allocation. The C++ compiler in Red Hat Enterprise Linux 7 will perform a size check (and throw std::bad_alloc on failure) if the size (in bytes) of the allocated array exceeds the width of a register, even in C++98 mode. This change affects the code generated by the compiler–it is not a library-level correction. Consequently, we compiled all of Red Hat Enterprise Linux 7 with a compiler version that performs this additional check.

When we compiled Red Hat Enterprise Linux 7, we also tuned the compiler to add “stack protector” instrumentation to additional functions. The GCC compiler in Red Hat Enterprise Linux 6 used heuristics to determine whether a function warrants “stack protector” instrumentation. In contrast, the compiler in Red Hat Enterprise Linux 7 uses precise rules that add the instrumentation to only those functions that need it. This allowed us to instrument additional functions with minimal performance impact, extending this probabilistic defense against stack-based buffer overflows to an even larger part of the code base.
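
For your own builds, the option behind this behaviour is, to the best of my knowledge, GCC’s -fstack-protector-strong, while the older heuristic behaviour corresponds to plain -fstack-protector. A minimal sketch:

$ gcc -O2 -fstack-protector-strong -o myprog myprog.c   # the precise-rules mode described above
$ gcc -O2 -fstack-protector -o myprog myprog.c          # the older heuristic mode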

Red Hat Enterprise Linux 7 also includes firewalld. firewalld allows for centralized firewall management using high-level concepts, such as zones. It also extends spoofing protection based on reverse path filters to IPv6, where previous Red Hat Enterprise Linux versions only applied anti-spoofing filter rules to IPv4 network traffic.
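
A quick illustration of the zone model with the firewall-cmd client (a sketch, not a complete configuration):

$ firewall-cmd --get-active-zones                              # which zones cover which interfaces
$ firewall-cmd --zone=public --add-service=https --permanent   # allow HTTPS in the public zone only
$ firewall-cmd --reload                                        # apply the permanent change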

Every version of Red Hat Enterprise Linux is the result of countless hours of work from many individuals. Above we highlighted a few of the efforts that the Red Hat Product Security team assisted with in the release of Red Hat Enterprise Linux 7. We also worked with a number of other individuals to see these changes become reality. Our job doesn’t stop there, though. Once Red Hat Enterprise Linux 7 was released, we immediately began tracking new security issues and deciding how to fix them. We’ll further explain that process in an upcoming blog post about fixing security issues in Red Hat Enterprise Linux 7.

January 20, 2015

Running SCAP Scans

OpenSCAP can be run from the command line, but there are easier ways to do it.
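
For reference, a command-line run looks roughly like this; the profile ID and content path are placeholders that depend on which SCAP content you have installed:

$ oscap xccdf eval --profile <profile-id> \
      --results results.xml --report report.html \
      /path/to/scap-content.xml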

OpenSCAP support has been integrated into Red Hat Satellite and into the Spacewalk open source management platform.

Red Hat Satellite can push SCAP content to managed systems, schedule and run SCAP audit scans, and retrieve the resulting reports, which are then accessible through the Red Hat Satellite Audit tab.

If you are going to be using SCAP in production, especially on large numbers of systems, you should really be using a management framework like Red Hat Satellite or Spacewalk.

For development, testing, tuning SCAP benchmarks, and small scale use, the SCAP Workbench is a friendly and flexible tool. We will cover this in more detail in the next post.


January 12, 2015

Security Tests – SCAP Content

While the SCAP technologies are interesting, they have limited value without security content – the actual set of security tests run by SCAP. Fortunately there is a good set of content available that can be used as a starting point.

The US Government has released a set of SCAP content that covers the baseline security required – the United States Government Configuration Baseline (USGCB), which contains the security configuration baselines for a variety of computer products which are widely deployed across federal agencies. USGCB content covers Internet Explorer, Windows, and Red Hat Enterprise Linux Desktop 5.

Also from the US Government is the Department of Defense STIG or Security Technical Implementation Guides. A specific example of this would be downloadable SCAP Content for RHEL 6, the Red Hat 6 STIG Benchmark, Version 1, Release 4.

A number of vendors include SCAP content in their products. This is often a sample or an example – it is enough to get you started, but does not provide a comprehensive security scan.

While the available SCAP content is a good start, most organizations will have additional needs. This can be addressed in two ways: by tailoring existing SCAP content and by writing new SCAP content.

Tailoring SCAP content involves choosing which SCAP rules will be evaluated and changing parameters.

An example of changing parameters is minimum password length. The default value might be 12 characters. You can change this in a tailoring file, perhaps to 8 characters or to 16 characters for a highly secure environment.

A common way to use SCAP is to have a large SCAP benchmark (content) that is used on all systems, and to select which rules will be evaluated for each scan. This selection can be changed for each system and each run. You do this by providing the SCAP benchmark along with a SCAP tailoring file when you run the SCAP scanner.
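
With the oscap scanner, for example, that combination looks something like the following sketch (file names and the profile ID are placeholders):

$ oscap xccdf eval --profile <profile-id> \
      --tailoring-file my-tailoring.xml \
      --results results.xml --report report.html \
      benchmark.xml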

Writing new SCAP content can be a daunting task. SCAP is a rich enterprise framework – in other words, it is complex and convoluted… If you are going to be writing SCAP content (and you really should), I suggest starting with Security Automation Essentials, getting very familiar with the various websites we’ve mentioned, studying the existing SCAP content, and being prepared for a significant learning curve.