pillar alternatives and similar packages
Based on the "ORM and Datamapping" category.
- paper_trail: Track and record all the changes in your database with Ecto. Revert back to any point in history.
- ecto_psql_extras: Ecto PostgreSQL database performance insights. Locks, index usage, buffer cache hit ratios, vacuum stats and more.
README
Pillar
Elixir client for ClickHouse, a fast open-source Online Analytical Processing (OLAP) database management system.
Features
- Direct Usage with connection structure
- Pool of workers
- Async insert
- Buffer for periodical bulk inserts
- Migrations
Usage
Direct Usage with connection structure
conn = Pillar.Connection.new("http://user:password@localhost:8123/database")
# Params are passed in curly braces {} in the SQL query, and the map
# fills the query with the corresponding values.
sql = "SELECT count(*) FROM users WHERE lastname = {lastname}"
params = %{lastname: "Smith"}
{:ok, result} = Pillar.query(conn, sql, params)
result
#=> [%{"count(*)" => 347}]
Pool of workers
This is the recommended way to use Pillar: connections are limited by the pool size and the workers are supervised.
defmodule ClickhouseMaster do
  use Pillar,
    connection_strings: [
      "http://user:password@host-master-1:8123/database",
      "http://user:password@host-master-2:8123/database"
    ],
    name: __MODULE__,
    pool_size: 15
end
ClickhouseMaster.start_link()
{:ok, result} = ClickhouseMaster.select(sql, %{param: value})
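In an application you would normally start the pool under a supervisor instead of calling start_link/0 by hand. A minimal sketch using an explicit child spec that relies only on the start_link/0 shown above (the MyApp module names are illustrative):
defmodule MyApp.Application do
  use Application

  @impl true
  def start(_type, _args) do
    children = [
      # Start the ClickHouse pool defined above.
      %{id: ClickhouseMaster, start: {ClickhouseMaster, :start_link, []}}
    ]

    Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
  end
end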
Async insert
connection = Pillar.Connection.new("http://user:password@host-master-1:8123/database")
Pillar.async_insert(connection, "INSERT INTO events (user_id, event) SELECT {user_id}, {event}", %{
  user_id: user.id,
  event: "password_changed"
}) # => :ok
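The same pattern can be used through a pool module; a sketch, assuming use Pillar also generates an async_insert function on ClickhouseMaster (check the functions generated by your Pillar version):
# Assumed pool-level variant of Pillar.async_insert/3; returns :ok without
# waiting for ClickHouse to confirm the write.
:ok =
  ClickhouseMaster.async_insert(
    "INSERT INTO events (user_id, event) SELECT {user_id}, {event}",
    %{user_id: user.id, event: "password_changed"}
  )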
Buffer for periodical bulk inserts
This feature requires a pool of workers (see above).
defmodule BulkToLogs do
  use Pillar.BulkInsertBuffer,
    pool: ClickhouseMaster,
    table_name: "logs",
    # interval_between_inserts_in_seconds, by default -> 5
    interval_between_inserts_in_seconds: 5,
    # on_errors is optional
    on_errors: &__MODULE__.dump_to_file/2

  @doc """
  dump_to_file/2 stores failed inserts in a file
  """
  def dump_to_file(_result, records) do
    File.write("bad_inserts/#{DateTime.utc_now()}", inspect(records))
  end

  @doc """
  Retrying the insert is dangerous (but it is possible and listed here as a proof of concept).
  This function may be used in the `on_errors` option.
  """
  def retry_insert(_result, records) do
    __MODULE__.insert(records)
  end
end
:ok = BulkToLogs.insert(%{value: "online", count: 133, datetime: DateTime.utc_now()})
:ok = BulkToLogs.insert(%{value: "online", count: 134, datetime: DateTime.utc_now()})
:ok = BulkToLogs.insert(%{value: "online", count: 132, datetime: DateTime.utc_now()})
....
# All of these records will be inserted in one bulk insert, at 5-second intervals.
The on_errors option allows you to catch any bulk-insert error (for example, a malformed batch or ClickHouse being unavailable).
Migrations
Migrations can be generated with the mix task mix pillar.gen.migration migration_name, for example:
mix pillar.gen.migration events_table
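A generated migration is expected to be a plain Elixir module whose up/0 function returns the SQL to execute; a rough sketch of what such a file might contain (module name, table schema and engine are illustrative, and the exact layout depends on your Pillar version):
defmodule Pillar.Migrations.CreateEventsTable do
  # Returns the DDL that Pillar.Migrations.migrate/1 runs against ClickHouse.
  def up do
    """
    CREATE TABLE IF NOT EXISTS events (
      user_id UInt64,
      event String,
      inserted_at DateTime
    ) ENGINE = MergeTree()
    ORDER BY (user_id, inserted_at)
    """
  end
end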
To run them, you have to write your own mix task, like this:
defmodule Mix.Tasks.MigrateClickhouse do
  use Mix.Task

  def run(_args) do
    connection_string = Application.get_env(:my_project, :clickhouse_url)
    conn = Pillar.Connection.new(connection_string)
    Pillar.Migrations.migrate(conn)
  end
end
And launch it with the command:
mix migrate_clickhouse
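The task above reads the connection string from application configuration, so the URL has to be set in your config; a minimal sketch (the config file choice and URL are illustrative):
# config/config.exs (or config/runtime.exs)
import Config

config :my_project,
  clickhouse_url: "http://user:password@localhost:8123/database"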
Contribution
Feel free to make a pull request. All contributions are appreciated!