In association with heise online

Reversible Migrations

Aaron Patterson (@tenderlove) made Rails 3.1 a lot smarter by implementing reversible migrations. Instead of relying solely on self.up and self.down to manage your migrations, Patterson has added the #change method, which allows Rails to "figure out" how to reverse a given migration on its own.

For example, consider the following Rails 3.0 standard migration, compared with one using #change:

# Rails 3.0
class CreateUsers < ActiveRecord::Migration
  def self.up
    create_table :users do |t|
      t.string :name
      t.string :phone
      # ... and so on ...
    end
  end

  def self.down
    drop_table :users
  end
end

# Rails 3.1
class CreateUsers < ActiveRecord::Migration
  def change
    create_table :users do |t|
      t.string :name
      t.string :phone
      # ... and so on ...
    end
  end
end

When running a rollback, Rails will "figure out" how to reverse the 3.1 version of the migration on its own. It simply applies the inverse of whatever was called for in the #change method.

Unfortunately, this particular improvement isn't perfect. In testing, some simple operations, such as changing a column's definition after the table was created (for example, switching from an int(4) to an int(8) in PostgreSQL), couldn't be reversed: Rails has no way of knowing what the column should be changed back to.

In such cases, the standard #up and #down methods can still be used. A minor difference from previous versions: in Rails 3.1, #up and #down are instance methods rather than class methods. In other words, simply define up and down, not self.up and self.down. For example:

class CreateUsers < ActiveRecord::Migration
  def up # no "self."
    create_table :users do |t|
      # ...
    end
  end

  def down # no "self."
    drop_table :users
  end
end
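Applied to the irreversible column change mentioned above, an explicit up/down pair might look like the following sketch. The table and column names here are hypothetical, and the :limit values stand in for the int(4)/int(8) change described earlier:

```ruby
class WidenUsersCounter < ActiveRecord::Migration
  def up
    # Widen the counter to an 8-byte integer (bigint in PostgreSQL)
    change_column :users, :login_count, :integer, :limit => 8
  end

  def down
    # Rails can't infer this; we must state the old definition ourselves
    change_column :users, :login_count, :integer, :limit => 4
  end
end
```

Because #down restates the original column definition explicitly, the rollback no longer depends on Rails being able to invert the change.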

While most migrations of significant complexity will still need to use the standard #up and #down methods, for simple table definitions, the #change method for reversible migrations can be very useful.

Prepared Statements

At RailsConf 2011, Aaron Patterson introduced prepared statements for Rails 3.1. Normally, Rails sends a standard SQL statement to the database; the database prepares a query plan for the statement, runs the statement, then returns the results. This is a four-step process.

By using prepared statement caching, this process is reduced by half. In Rails 3.1, queries such as SELECT * FROM table_name WHERE id = $1 will be sent. The database then caches the statement and returns a token to Rails. Later, when the query is actually needed, Rails sends back the token and values for the missing element(s). This makes read operations much more efficient, because the statement is already cached and the query plan built in advance, meaning that the database has only to execute the query and return the result set.
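The token exchange can be sketched in plain Ruby. This is only a conceptual illustration; StatementCache, #prepare, and #execute are hypothetical names, not Rails' actual implementation, which lives inside the database adapters:

```ruby
# Conceptual sketch of a prepared-statement cache (not Rails internals).
class StatementCache
  def initialize
    @statements = {}   # token => parameterized SQL
    @next_token = 0
  end

  # "Prepare" a parameterized statement once; reuse the token afterwards.
  def prepare(sql)
    token = @statements.key(sql)
    return token if token
    token = (@next_token += 1)
    @statements[token] = sql
    token
  end

  # Later, execute by sending only the token plus the bound values.
  # (A real database substitutes values safely; this naive gsub is
  # purely illustrative of the flow.)
  def execute(token, *values)
    sql = @statements.fetch(token)
    values.each_with_index.inject(sql) do |s, (v, i)|
      s.gsub("$#{i + 1}", v.to_s)
    end
  end
end

cache = StatementCache.new
token = cache.prepare("SELECT * FROM users WHERE id = $1")
cache.execute(token, 42)  # => "SELECT * FROM users WHERE id = 42"
```

The point of the design is that the expensive step (parsing and planning the parameterized SQL) happens once, at prepare time, while each subsequent execution ships only a small token and the bound values over the wire.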

In the above linked video, Patterson shows the performance impact for three different databases: SQLite, PostgreSQL, and MySQL. The change increases performance in PostgreSQL and SQLite in all cases, and in MySQL in cases where complex queries are stored. Unfortunately, it actually degrades performance in MySQL with simple queries, because MySQL does no advance query planning, so storing the query is simply an extra step with no benefit. Additionally, executing a prepared statement on MySQL requires two network round trips.

Developers using MySQL as their production database therefore have a choice to make: whether or not to use prepared statement caching. For complex queries it may be worth it, but because MySQL makes two network round trips when a prepared statement is executed, the latency between your database and application servers is a factor. You would have to determine whether the time saved on the complex query (or queries) still outweighs the added network latency; in some cases it will, in others it won't. On simple queries, MySQL's performance degrades when prepared statement caching is used.

Engine Yard has a blog post with benchmarks for SQL Server, stating that performance is up to 10x faster for complex queries, and roughly twice as fast for simple ones.

Patterson has implemented this new feature in such a way that developers don't have to do anything differently to benefit from prepared statement caching: the API is exactly the same, so these changes come "out of the box" for anyone building a Rails 3.1 application.

Role-Based Mass-Assignment Protection

Mass assignment protection has received an interesting upgrade in Rails 3.1: you can now specify roles at the model level for protected attributes. This is accomplished by allowing both attr_accessible and #update_attributes to accept an :as option. You define which attributes are accessible to which roles, then specify the role when #update_attributes is called.

Consider the following example:

# app/models/user.rb
class User < ActiveRecord::Base
  attr_accessible :username, :as => :admin
end

# app/controllers/users_controller.rb
class UsersController < ApplicationController
  def update
    @user = User.find(params[:id])
    @user.update_attributes(params[:user],
                            :as => current_user.role.name.to_sym)
  end
end

In this example, we specify that the username attribute of a user can only be updated by an administrator. In the controller, responsible for triggering the update, we specify that the call to #update_attributes should include a symbol of the currently logged-in user's role. (Assume that current_user is the currently logged-in user, that it has_one :role, and that the role has a "name" attribute, which we then convert to a symbol.)
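Roles can also be layered: a default list (declared without :as) and a role-specific list can live side by side, and attributes missing from the active role's list are simply filtered out of mass assignment. The attribute names in this sketch are hypothetical:

```ruby
class User < ActiveRecord::Base
  # Any caller may mass-assign these...
  attr_accessible :name, :email
  # ...but only the :admin role may also mass-assign :username
  attr_accessible :name, :email, :username, :as => :admin
end

# In a controller:
# @user.update_attributes(params[:user])                 # default role
# @user.update_attributes(params[:user], :as => :admin)  # admin role
```

With the default call, a :username key in params[:user] would be silently ignored; only the :as => :admin call lets it through.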

Next: has_secure_password

Print Version | Permalink: http://h-online.com/-1285887