This repository has been archived by the owner on Apr 18, 2018. It is now read-only.

Support for SQL data stores #48

Open
andrewhavens opened this issue Feb 4, 2015 · 1 comment

Comments

@andrewhavens
Contributor

Hello,

I recently came across Curator in my search for a database-agnostic persistence gem which implements the repository pattern. I really like the API/DSL that Curator provides. It's the best I've found so far, so I am interested in adding support for other data stores. Specifically, I need to support at least MySQL, PostgreSQL, and SQLite.

I have started implementing support for these adapters, but I'm entering unknown territory when it comes to how Curator should behave in certain situations. I'd like to start a discussion about how to support traditional, schema-based data stores.

I'm starting with SQLite support, using the Sequel gem to talk to the database. I think Sequel is a good fit because it already supports every SQL adapter we would want. Here's a sample of the code I'm working on at the moment:

require 'sequel'
require 'yaml'

module Curator
  module Sqlite
    class DataStore

      def client
        return @client if @client
        # load the connection settings for the current environment from the configured YAML file
        yml_config = YAML.load(File.read(Curator.config.sqlite_config_file))[Curator.config.environment]
        @client = Sequel.connect(yml_config)
      end

      def save(options)
        table_name = options[:collection_name].to_sym
        data = options[:value].merge(id: options[:key])

        # automagically create table if it doesn't already exist?
        client.create_table?(table_name) do
          primary_key :id
        end
        table = client[table_name]

        # automagically create fields if they don't already exist?
        data.each do |field_name, value|
          unless table.columns.include?(field_name)
            if value.is_a?(Array)
              # arrays get a plain text column; the value itself is serialized below
              client.add_column table_name, field_name, String, text: true
            else
              client.add_column table_name, field_name, value.class
            end
          end
          # serialize arrays to YAML whether or not the column already existed
          data[field_name] = YAML.dump(value) if value.is_a?(Array)
        end

        table.insert(data)
      end
    end
  end
end
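
For reference, the client method above expects an environment-keyed YAML file. Something along these lines would work (the file name and contents here are only an illustration; keeping a full Sequel connection string per environment keeps the loading code simple):

# config/sqlite.yml -- path and contents are just an example
development: sqlite://development.sqlite3
test: sqlite://test.sqlite3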

My first attempt was to create tables and fields automatically behind the scenes (avoiding the need to write database migrations), giving you a sort of schemaless persistence experience. I now think this is a bad idea. Curator already supports database migrations, so we might as well require the user to create the schema they need. However, as you can see in the code above, someone might want to persist an Array. In Mongo/Riak this is no big deal, but I'm not quite sure how to handle it in an SQL-friendly way.
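
To make the trade-off concrete, here is roughly what serializing an Array into a plain text column and reading it back would look like with Sequel and SQLite (the table and column names are made up for the example):

require 'sequel'
require 'yaml'

db = Sequel.sqlite # in-memory SQLite database, just for the example

db.create_table(:notes) do
  primary_key :id
  String :tags, text: true # the Array is stored as serialized text
end

# write: dump the Array to YAML before inserting
db[:notes].insert(tags: YAML.dump(%w[ruby sql curator]))

# read: load it back when materializing the object
tags = YAML.load(db[:notes].first[:tags]) # => ["ruby", "sql", "curator"]

It works, but the serialized text can't be queried like a real column, which is why I'm not sure it's the right SQL answer.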

Thoughts?

@pgr0ss
Contributor

pgr0ss commented Feb 5, 2015

Hi @andrewhavens,

Thanks for contacting us, and it's great that you want to get involved and extend curator. @pitluga started adding SQL support years ago but never finished. You might want to take a look at his branch (for inspiration or even bringing the code up to date):

https://github.com/pitluga/curator/compare/sql

For arrays, you could use array columns where they exist (e.g. http://www.postgresql.org/docs/9.4/static/arrays.html) and maybe serialize to a string where they don't?
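
Sequel's pg_array extension should make the Postgres side of that fairly painless. Roughly (an untested sketch; the connection URL and table are invented for the example):

require 'sequel'

db = Sequel.connect('postgres://localhost/curator_example') # example URL
db.extension :pg_array # adds support for PostgreSQL array types

db.create_table?(:notes) do
  primary_key :id
  column :tags, 'text[]' # native array column
end

# Sequel.pg_array wraps a Ruby array so it is written as a PostgreSQL array
db[:notes].insert(tags: Sequel.pg_array(%w[ruby sql curator]))

db[:notes].first[:tags].to_a # => ["ruby", "sql", "curator"]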
