“ActiveRecord” for non-SQL Data in Rails

February 13, 2006

The heart of Ruby on Rails is arguably ActiveRecord (AR), which implements its version of object-relational mapping. For greater generality (and locking across apps), it would be nice to have Rails on Mac OS X talk directly to Core Data. In particular, that would allow it to a) utilize the model file, and b) access data stored in XML and binary formats, not just SQLite.

Unfortunately, ActiveRecord is hard-wired to use SQL to talk to the backing store. The recommended practice for non-SQL datastores is to create an “AR-like” class, à la ActiveLDAP. However, that didn’t seem “AR-like” enough for me.

In particular, there are several aspects of AR that I want to preserve:

- conventions for automatically mapping classes onto back-end entities
- flexible method names for accessing attributes
- dynamically loaded adapter plug-ins for various datastores

Rather than rewrite all this from scratch, I’ve decided to see how much I can extract and refactor from ActiveRecord into what for now I’m calling ActiveRecord-nonSQL.zip. What follows are my findings so far.

Hypothesis: I can accomplish everything I want simply by replacing hand-generated SQL with procedurally generated SQL defined by the connection.

That is, rather than “sql” merely being a string, it becomes a connection-dependent object:


sql = connection.new_query()
sql.add("SELECT", options[:select] || '*')
sql.add("FROM", table_name)

The implementation is pretty trivial for the SQL case (just concatenate the strings), and at least it is pre-parsed for non-SQL datasources.
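To make the idea concrete, here is a minimal sketch of such a query object. The class and method names are my own invention, not actual AR internals; a non-SQL adapter would subclass this and build a structured query instead of clauses.

```ruby
# Hypothetical sketch of a connection-dependent query object.
# For the SQL case, #to_s just concatenates the clauses.
class Query
  def initialize
    @clauses = []
  end

  # keyword may be nil (e.g. for raw join fragments)
  def add(keyword, fragment)
    @clauses << [keyword, fragment].compact.join(" ")
    self
  end

  # Join several segments under one keyword, handling "AND" explicitly.
  def add_array(keyword, segments, joiner = " AND ")
    add(keyword, segments.join(joiner))
  end

  def to_s
    @clauses.join(" ")
  end
end

sql = Query.new
sql.add("SELECT", "*").add("FROM", "people")
sql.add_array("WHERE", ["age > 21", "name IS NOT NULL"])
sql.to_s  # => "SELECT * FROM people WHERE age > 21 AND name IS NOT NULL"
```

A non-SQL adapter would keep the same `add`/`add_array` interface but store the keyword/fragment pairs structurally, so nothing has to parse SQL after the fact.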

Also, the whole ‘connection.{delete | update | insert}’ API seems redundant, since the SQL already contains all that. In fact, it seems to complicate many adapters. Why not just do:

connection.execute(:action, sql, log)

and let the adapter special-case the various actions if needed (otherwise, just ignore them).
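A unified execute along those lines might look like this (a sketch; the per-action hooks and names are my own invention, not real adapter API):

```ruby
# Hypothetical base adapter with a single entry point.
class AbstractAdapter
  def execute(action, sql, log = nil)
    case action
    when :insert then do_insert(sql, log)
    when :update then do_update(sql, log)
    when :delete then do_delete(sql, log)
    else              do_select(sql, log)
    end
  end

  # Default: every action runs the same way, since the SQL
  # (or query object) already encodes what to do.
  def do_select(sql, log)
    sql.to_s  # stand-in for actually hitting the datastore
  end
  alias do_insert do_select
  alias do_update do_select
  alias do_delete do_select
end
```

Most adapters would never override the hooks; only the ones that genuinely need action-specific behavior (e.g. fetching an insert ID) would.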

Even better, since the query now comes from the connection, why not have it run the execution:

sql.execute(:action, log)

The default, of course, should be to implement the same interface as before, but this should allow much simpler functionality over time.
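Since the connection mints the query, the query can simply hold a back-reference and delegate. Again a sketch with invented names:

```ruby
# Hypothetical: the connection mints the query, and the query keeps
# a back-reference so callers never touch the connection directly.
class Connection
  def new_query
    Query.new(self)
  end

  def execute(action, sql, log = nil)
    "#{action}: #{sql}"  # stand-in for real dispatch
  end
end

class Query
  def initialize(connection)
    @connection = connection
    @clauses = []
  end

  def add(keyword, fragment)
    @clauses << [keyword, fragment].compact.join(" ")
    self
  end

  def to_s
    @clauses.join(" ")
  end

  # Same interface as connection.execute, minus the sql argument.
  def execute(action, log = nil)
    @connection.execute(action, self, log)
  end
end

sql = Connection.new.new_query
sql.add("DELETE FROM", "people").add("WHERE", "id = 1")
sql.execute(:delete)  # => "delete: DELETE FROM people WHERE id = 1"
```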

Is this true? Not sure. There are a couple of special cases where SQL is passed in directly:

1. Joins

sql.add(nil, join) if join

2. Conditions

def add_conditions!(sql, conditions)
  segments = [scope(:find, :conditions)]
  segments << sanitize_sql(conditions) unless conditions.nil?
  segments << type_condition unless descends_from_active_record?
  segments.compact!
  sql.add_array("WHERE", segments) unless segments.empty? # need to explicitly handle "AND"
end

3. add_limit_offset!

Handle in sql, not connection.
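Handling limit/offset in the query object might look like this (a sketch with invented names, assuming MySQL/SQLite-style LIMIT/OFFSET syntax; each adapter's query class would own its own dialect):

```ruby
# Hypothetical: the query object owns the LIMIT/OFFSET dialect,
# so the connection never needs to string-bash the SQL.
class Query
  def initialize
    @clauses = []
  end

  def add(keyword, fragment)
    @clauses << [keyword, fragment].compact.join(" ")
    self
  end

  def add_limit_offset!(options)
    add("LIMIT", options[:limit].to_i) if options[:limit]
    add("OFFSET", options[:offset].to_i) if options[:offset]
    self
  end

  def to_s
    @clauses.join(" ")
  end
end

sql = Query.new
sql.add("SELECT", "*").add("FROM", "people")
sql.add_limit_offset!(:limit => 10, :offset => 20)
sql.to_s  # => "SELECT * FROM people LIMIT 10 OFFSET 20"
```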

4. sanitize_sql

If I understand it correctly, this would actually be the place we translate any passed-in SQL into my sql object.

5. interpolate_sql

instance_eval("%@#{sql.gsub('@', '\@')}@") # need to implement/pass-through gsub (?)
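The pass-through gsub could be as simple as delegating to the concatenated string form (a sketch with invented names; this only works while the SQL case remains string-backed underneath):

```ruby
# Hypothetical: give the query object a pass-through gsub so
# callers like interpolate_sql can keep treating it as a string.
class Query
  def initialize
    @clauses = []
  end

  def add(keyword, fragment)
    @clauses << [keyword, fragment].compact.join(" ")
    self
  end

  def to_s
    @clauses.join(" ")
  end

  # Delegate string-munging to the concatenated form.
  def gsub(*args, &block)
    to_s.gsub(*args, &block)
  end
end

sql = Query.new.add("WHERE", "email = 'a@b.com'")
sql.gsub("@", '\@')  # returns the escaped string; the query itself is unchanged
```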

What else? Is this even worth pursuing? Should I give up? Is there an easier way?


You are currently reading “ActiveRecord” for non-SQL Data in Rails at iHack, therefore iBlog.
