The Simplificator blog

You are reading articles by Simplificator, a Swiss-based custom software development agency. Here we write about the problems we solve and how we work together.

Returning Enumerator unless block_given?

· Pascal Betz

The Enumerable module gives you methods to search, iterate, traverse and sort the elements of a collection. All you need to do is implement each and include the mixin.

class NameList
  include Enumerable

  def initialize(*names)
    @names = names
  end

  def each
    @names.each { |name| yield(name) }
  end
end

list = NameList.new('Kaiser Chiefs', 'Muse', 'Beck')

list.each_with_index do |name, index|
  puts "#{index + 1}: #{name}"
end

# => 1: Kaiser Chiefs
# => 2: Muse
# => 3: Beck

So by defining each and including Enumerable we get an impressive list of methods that we can call on our NameList instance. But having all those methods added does not feel right: usually you will use only one or two of them. Why clutter the interface with 50+ methods that you'll never use? There is an easy solution for this:

class NameList
  def initialize(*names)
    @names = names
  end

  def each
    return @names.to_enum(:each) unless block_given?
    @names.each { |name| yield(name) }
  end
end

list = NameList.new('Kaiser Chiefs', 'Muse', 'Beck')

list.each do |name|
  puts name
end
# => Kaiser Chiefs
# => Muse
# => Beck

list.each.each_with_index do |name, index|
  puts "#{index + 1}: #{name}"
end
# => 1: Kaiser Chiefs
# => 2: Muse
# => 3: Beck

Note that each now returns an Enumerator, on which you can call each_with_index (or any of the other methods), unless a block is given. So you can even call it like this:

puts list.each.to_a.size # => 3

By returning an Enumerator when no block is given, one can chain enumerator methods. Ever wanted to do an each_with_index on a hash? There you go:

points = {mushroom: 10, coin: 12, flower: 4}

points.each.each_with_index do |key_value_pair, index|
  puts "#{index + 1}: #{key_value_pair}"
end

# => 1: [:mushroom, 10]
# => 2: [:coin, 12]
# => 3: [:flower, 4]

ActionPack Variants

· Pascal Betz

Fabian has a good write-up on ActionPack Variants, the Rails mechanism for dealing with device-dependent views on the server side: want to know more?
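To give a rough idea of how variants work: they are typically set in a before_action and then picked up by the template lookup. A minimal sketch, assuming Rails 4.1 or newer (the user-agent check is made up):

class ApplicationController < ActionController::Base
  before_action :set_variant

  private

  # Sets a request variant based on the user agent; the template lookup
  # will then prefer e.g. show.html+phone.erb over show.html.erb.
  def set_variant
    request.variant = :phone if request.user_agent =~ /iPhone|Android.+Mobile/
  end
end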

Ruby and the double splat operator

· Pascal Betz

If you have been programming Ruby for a while, then you have seen the splat operator. It can be used to define methods that accept a variable-length argument list, like so:

def single_splat(an_argument, *rest)
  puts "#{rest.size} additional argument(s)"
end

single_splat('howdy')
#=> 0 additional argument(s)

single_splat('howdy', :foo)
#=> 1 additional argument(s)

single_splat('howdy', :foo, :bar, :baz)
#=> 3 additional argument(s)

But it can also be used to "unwrap" values from an array and pass them as individual arguments:

def unwrapped(a, b, c)
  puts "#{a} / #{b} / #{c}"
end

data = [1, 2, 3]

unwrapped(*data)
# => 1 / 2 / 3

unwrapped(data)
#=> wrong number of arguments (1 for 3) (ArgumentError)

But behold: you can also use the splat operator to coerce values into arrays:

coerced = *"Hello World"
p coerced
# => ["Hello World"]

coerced = *nil
p coerced
# => []

coerced = *[1, 2, 3]
p coerced
# => [1, 2, 3]

And of course to deconstruct arrays:

data = [1, 2, 3, 4]

first, *rest = data
puts "#{first}, #{rest}"
# => 1, [2, 3, 4]

*list, last = data
puts "#{list}, #{last}"
# => [1, 2, 3], 4

first, *center, last = data
puts "#{first}, #{center}, #{last}"
# => 1, [2, 3], 4

first, second, *center, third, fourth = data
puts "#{first}, #{second}, #{center}, #{third}, #{fourth}"
# => 1, 2, [], 3, 4

But now back to the double splat operator. It was added in Ruby 2.0 and behaves similarly to the single splat operator, but for hashes in argument lists:

def double_splat(**hash)
  p hash
end

double_splat()
# => {}

double_splat(a: 1)
# => {:a => 1}

double_splat(a: 1, b: 2)
# => {:a => 1, :b => 2}

double_splat('a non hash argument')
# => `double_splat': wrong number of arguments (1 for 0) (ArgumentError)
# (The message for the case where I pass in a non-hash argument is not very helpful, I'd say.)

"What!" I can hear you shout. Where is the difference to a standard argument. In the use case as shown above it is pretty much the same. But you would be able to pass in nil values or non hash values, so more checks would be required:

def standard_argument(hash = {})
  puts hash
end

standard_argument()
# => {}

standard_argument(nil)
# =>

Now if we move this to a more realistic use case, consider a method taking a variable list of arguments AND some options:

def extracted_options(*names, **options)
  puts "#{names} / #{options}"
end

extracted_options()
# => [] / {}

extracted_options('pascal', 'lukas', color: '#123456', offset: 3, other_option: :foo)
# => ["pascal", "lukas"] / {:color=>"#123456", :offset=>3, :other_option=>:foo}

Ruby on Rails developers might know this pattern already. It is used in various parts of the framework. It is so common that the functionality has been defined in Array#extract_options!
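To illustrate, here is roughly how the same method looks when written with ActiveSupport's Array#extract_options! instead of the double splat (a small sketch, not code from the framework):

require 'active_support/core_ext/array/extract_options'

def extracted_options(*args)
  # Pops the trailing hash off the argument list, or returns {} if there is none.
  options = args.extract_options!
  puts "#{args} / #{options}"
end

extracted_options('pascal', 'lukas', color: '#123456')
# => ["pascal", "lukas"] / {:color=>"#123456"}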

Handling errors in Ruby on Rails

· Pascal Betz

Rails offers multiple ways to deal with exceptions, and depending on what you want to achieve you can pick one of these solutions. Let me walk you through the possibilities.

begin/rescue block

begin/rescue blocks are the standard Ruby mechanism for dealing with exceptions. It might look like this:

begin
  do_something
rescue
  handle_exception
end

This works nicely for exceptions that might happen in your code. But what if you want to rescue every occurrence of a specific exception, say a NoPermissionError which might be raised by your security layer? Clearly you do not want to add a begin/rescue block to all your actions just to render an error message, right?

Around filter

An around filter can be used to catch all exceptions of a given class. Honestly, I haven't used an around filter for this; the idea came to my mind while writing this blog post.

class ApplicationController < ActionController::Base
  around_action :handle_exceptions

  private

  def handle_exceptions
    begin
      yield
    rescue NoPermissionError
      redirect_to 'permission_error'
    end
  end
end

rescue_from

rescue_from gives you the same possibilities as the around filter. It's just shorter and easier to read, and if the framework offers a convenient way, why not use it? There are multiple ways to define a handler for an exception; for a short and sweet handler I prefer the block syntax:

class ApplicationController < ActionController::Base
  rescue_from 'NoPermissionError' do |exception|
    redirect_to 'permission_error'
  end
end
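For longer handlers you can also point rescue_from at a method using the with: option. A sketch (handle_no_permission is a made-up name):

class ApplicationController < ActionController::Base
  rescue_from NoPermissionError, with: :handle_no_permission

  private

  # rescue_from passes the raised exception to the handler method.
  def handle_no_permission(exception)
    redirect_to 'permission_error'
  end
end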

exceptions_app

There is an additional feature (added in Rails 3.2) that allows you to handle exceptions: you can specify an exceptions_app which is used to handle errors. You can use your own Rails app for this:

config.exceptions_app = self.routes

If you do so, then your routing must be configured to match error codes like so:

match '/404', to: 'exceptions#handle_404'
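In newer Rails versions match also requires the via: option, and you would typically cover the other error codes as well. A sketch, with controller and action names following the example above:

# config/routes.rb
match '/404', to: 'exceptions#handle_404', via: :all
match '/422', to: 'exceptions#handle_422', via: :all
match '/500', to: 'exceptions#handle_500', via: :all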

Alternatively you can specify a lambda which receives the whole Rack env:

config.exceptions_app = lambda do |env|
  # do something
end

Do you wonder how you can call an arbitrary action when you have the env? It's pretty easy:

action = ExceptionsController.action(:render_error)
action.call(env)

In either case you want to set the following configuration for exceptions_app to be used:

Rails.application.config.consider_all_requests_local = false
Rails.application.config.action_dispatch.show_exceptions = true

But where is the exception, you ask? It is stored in the Rack env:

env['action_dispatch.exception']

And as a bonus: here is how you can determine an appropriate status code for an exception:

wrapper = ActionDispatch::ExceptionWrapper.new(env, exception)
wrapper.status_code

There is more information you can extract using the exception wrapper; it's best to look it up in the API documentation.
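Putting these pieces together, an error-rendering action could look roughly like this. This is a sketch, not the exact code from our gem; the ExceptionsController and its error template are assumptions, and the ExceptionWrapper call follows the usage shown above:

class ExceptionsController < ActionController::Base
  def render_error
    # The exception raised during the original request is stored in the Rack env.
    exception = request.env['action_dispatch.exception']
    wrapper = ActionDispatch::ExceptionWrapper.new(request.env, exception)

    # Render an error template with the status code matching the exception.
    render 'error', status: wrapper.status_code
  end
end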

Most of this code can be seen in action in our infrastructure gem, which we use to add error pages to the apps we build.

Our new laptop sleeves arrived

· Pascal Betz

Simplificator laptop sleeves

Our new laptop sleeves arrived. Every employee picked two of the five values that Simplificator stands for. Now we have nice and colorful sleeves and they convey our message.

Values

· Pascal Betz

It's always great to visit a customer and see the magnets with our five values in their offices.

How we sped up our model spec to run 12 times faster

· Thomas Ritter

We are using cancancan as an authorization gem for one of our applications. To make sure that our authorization rules are correct, we unit-tested the Ability object. In the beginning, the test was quite fast, but the more rules we added, the longer it took to run the whole model test. When we analyzed what was slowing down our test, we saw that quite some time was actually spent persisting our models to the database with factory_girl as part of the test setup. It took a bit more than 60 seconds to run the whole ability spec, which is far too much for a model test.

Let's look at an excerpt of our ability and its spec:

# ability.rb

def acceptance_modes
  can [:read], AcceptanceMode
  if @user.admin?
    can [:create, :update], AcceptanceMode
    can :destroy, AcceptanceMode do |acceptance_mode|
      acceptance_mode.policies.empty?
    end
  end
end


# ability_spec.rb

describe Ability do

  let!(:admin_user) { create(:admin_user) }
  subject!(:ability) { Ability.new(admin_user) }

  context 'acceptance mode' do

    let!(:acceptance_mode) { create(:acceptance_mode) }

    before(:each) do
      create(:policy, :acceptance_mode => acceptance_mode)
    end

    [:read, :create, :update].each do |action|
      it { should be_able_to(action, acceptance_mode) }
    end

    it { should_not be_able_to(:destroy, acceptance_mode) }

  end
end


# ability_matcher.rb

module AbilityHelper
  extend RSpec::Matchers::DSL

  matcher :be_able_to do |action, object|
    match do |ability|
      ability.can?(action, object)
    end

    description do
      "be able to #{action} -- #{object.class.name}"
    end

    failure_message do |ability|
      "expected #{ability.class.name} to be able to #{action} -- #{object.class.name}"
    end

    failure_message_when_negated do |ability|
      "expected #{ability.class.name} NOT to be able to #{action} -- #{object.class.name}"
    end
  end
end

RSpec.configure do |config|
  config.include AbilityHelper
end

We first set up a user -- in this case it's an admin user -- and then initialize our ability object with this user. We further have a model called AcceptanceMode, which offers the usual CRUD operations. An acceptance mode has many policies. If any policy is attached to an acceptance mode, we don't want to allow it to be deleted.

Note that a lot of models are created, meaning they are persisted to the database. In this excerpt, we have 4 test cases. Each of these test cases needs to create the admin user, the acceptance mode and a policy. This is a lot of persisted models, even more so if you realize that this excerpt is not the full acceptance mode spec, and the acceptance mode spec is only a small fraction of the whole ability spec. Other models are even more complex and require more tests for other dependencies.

But is this really necessary? Do we really need to persist the models or could we work with in-memory versions of these?

Let's take a look at this modified spec:

describe Ability do

  let(:stub_policy) { Policy.new }
  let!(:admin_user) { build(:admin_user) }
  subject!(:ability) { Ability.new(admin_user) }

  context 'acceptance mode' do

    let(:acceptance_mode) { build(:acceptance_mode, :policies => [stub_policy]) }

    [:read, :create, :update].each do |action|
      it { should be_able_to(action, acceptance_mode) }
    end

    it { should_not be_able_to(:destroy, acceptance_mode) }

  end
end

Note that all the create calls are replaced with build. We actually don't need the models to be persisted to the database. The ability mainly checks if the user has admin rights (with admin?), which can be tested with an in-memory version of a user. Further, the acceptance mode can be built with an array that contains an in-memory stub policy. If you look closely at the Ability implementation, you will see that that's not even necessary. Any object could reside in the array and the spec would still pass. But we decided to use an in-memory policy nonetheless.
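For readers less familiar with factory_girl, the difference between the two calls boils down to this (a small sketch, assuming the :admin_user factory from the spec above):

user = build(:admin_user)    # instantiated in memory, not saved
user.persisted?              # => false

user = create(:admin_user)   # instantiated and persisted to the database
user.persisted?              # => true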

With this approach, no model is persisted to the database. All models are in-memory but still collaborate the same way as they would if they had been loaded from the database first. However, no time is wasted on the database. The whole ability spec's run time was reduced from 60 seconds to 5 seconds, simply by not persisting models to the database in the test setup.

As an aside: there is a lot of discussion around the topic of factories and fixtures. Fixtures load a fixed set of data into the database at the start of the test suite, which avoids this kind of problem entirely.

That's it. We hope you can revisit some of your slow unit tests and try to use in-memory models, or avoid persisting your models for the next unit test you write!

Interesting read on "the web app myth"

· Simplificator

From an article written by Christian Heilmann on the "Web Application Myth".

Apps are software products with human interfaces. Web sites are that, too. The WYSIWYG dream of the 90s was telling people all they need to do is buy DreamWeaver and they’ll be able to build and be the same success as Amazon. This was nonsense.

Dare to question

· Lukas Eppler

We had a problem.

Let me call it Project X. We were six months behind. Requirements creep had resulted in enormous methods, bloated controllers, test coverage below the belt and still no clear plan for finishing. We had worked a year on the thing, it had been close to finished for months, but it wasn't coming together. We had a problem.

Dare to question

Wasn’t it cool in the old days, when we were the wizards, the magicians - when it was enough that we could create a simple calculating form, or a script that saved someone two days of busywork per week? They trusted us when we said it was going to take three weeks to implement. When you understood how to “fix a computer” by finding the loose cable connector on the keyboard. When running a defragmentation tool made your uncle feel like he had bought a brand new machine.

It’s no longer like that. Writing software is not so magical anymore, it’s a craft. We know what we do, and we’re appreciated for it. But things have to get done. The customer is king again. We’re constantly struggling in the space between what the customer wants and what we know is the right thing. We learned a while ago that wearing a hoodie and carrying a sticker-infested laptop to a board meeting doesn’t automatically raise their respect for us. We learned to listen. We learned to learn each customer’s language, to better understand, to better craft what’s needed.

On the other hand, we still feel like wizards. We know what works, and don't want to waste our precious time with dull decoration. We want our effort limited to a minimum, working on the ambitious adventure, the principal puzzle, the real riddle. The cool stuff. Let’s write the simplest thing that can possibly work. You want more? You Ain’t Gonna Need It (YAGNI™). Because an apparently simple request might lead to days of unforeseen work, which might even go unpaid because its complexity never got onto any offer.

So we grew an instinct to say no, to approach a request with a certain defensive attitude. A feature has to pass a threshold first: Is it really needed?

But then, the customer actually pays for what we do, so saying no doesn’t fly well with them. We apparently need a different attitude.

We had a routine importing data which needed almost a day to run, and one wrongly formatted element in the source data would knock the process out. We added and tweaked, only to find the next edge case… we dearly wanted to exclude those edge cases, but many were still essential.

What was going wrong? What was the problem behind the problem?

Complexity is not value. But neither is simplicity as such. We are trained to write what’s needed, on budget and on time. Those constraints are natural. We coders have experienced many situations where broken business models resulted in hopeless strategies, which turned into convoluted requirements. Sometimes we call it “design by committee”, where the results of a brainstorming session are translated into demands full of contradictions, wishful thinking and pie-in-the-sky ideas. After the session, several people “flesh out” the requirements, and the input of all participants is gathered, but never questioned.

Now try to write good code with that. We try to manage upwards, filtering out what should never have made it into the requirements.

Hence, the first draft of our company values had the line “dare to say no”.

“Dare to say no” at least tames the devil of blindly implementing what’s requested, only to find the contradictions at the very end, where ideas meet reality, when bugs show up stemming from the bad design decisions above. Code is honest, code is pure. There is no handwaving, no “maybes” in code, no “mostly” or “generally” - come with unfinished ideas and you will be mercilessly punished. The wall of logic can’t be broken with sheer will; you’ll be crushed between requirements and feasibility.

But saying no doesn’t give you good code.

And Project X wasn’t finished. We saw it ourselves. We had something which worked, and somehow fulfilled requirements. But it didn’t feel right. It felt buggy and convoluted. It looked the part… We needed a reboot.

Reboot

“Dare to say no” apparently needed a reboot too. We worked on that line. And we found out what we meant by it. We wanted to be able to work on all levels of software to find the right solutions. We needed to be able to address the first decisions - those which led to the requirements causing trouble.

Mind you, this happens anyway - at the latest when broken code goes into production. At this point, even the people who brainstormed the ideas will see the contradictions, because they’re now glaringly obvious. Only now do the important questions get asked. Can’t we get to that knowledge earlier?

We can. It requires courage to show the contradictions, the unfinished thoughts. It requires tact and skill to identify the core requirements which clash, and talk about them. It requires a lot of guts to ask fundamental questions.

Invigorated, we addressed Project X with new energy. We started with tidying up the code. Where weird requirements held us up, we went back to the customer and asked why they wanted a certain feature, why it had to be like that. The pruning and culling resulted in a much more streamlined user experience, clean code, and somewhat to our surprise, a greatly improved relationship with the customer.

Our value became “dare to question.” Ask why, understand the answer - or ask why again. Get to the bottom of it. Find the need behind the need. Throw away what’s not necessary, make it clean - with the full understanding of the requirement.

The project is live now. We have more work coming.

Maybe we can still be wizards. We just have to learn the new magic.

Dare to question.

Filter Rails SQL log in production

· Thomas Ritter

In order to debug a problem that only occurred in production, we recently wanted to tweak our Rails SQL log to show only the access to a specific table.

Here's what we did to accomplish this. We created a file config/initializers/filter_sql_log.rb with this content:

if Rails.env.production?
  module ActiveRecord
    class LogSubscriber
      alias :old_sql :sql

      def sql(event)
        if event.payload[:sql].include? 'users'
          old_sql(event)
        end
      end
    end
  end
end

This monkey-patches the ActiveRecord::LogSubscriber class and only delegates to the old logging method if the SQL statement includes the string "users".

By default, SQL logging is deactivated in the Rails production environment, because SQL statements are logged at the :debug level while production uses a higher log level. Therefore we needed to change config/environments/production.rb like this:

config.log_level = :debug