feat: implement SQLx data source #47

Merged: 5 commits, Dec 23, 2024
Changes from all commits
627 changes: 618 additions & 9 deletions Cargo.lock

Large diffs are not rendered by default.

250 changes: 37 additions & 213 deletions TODO.md
@@ -1,83 +1,47 @@
MVP:
# v0.2 (eta January 2025)

- [x] Swap to sqlx
- [ ] Allow use of custom columns
- [ ] returning `id` should properly choose the ID column
- [ ] Add thread safety (currently tests in bakery_api fail)
- [ ] Implement transaction support
- [ ] Add MySQL support
- [ ] Add a proper database integration test-suite
- [ ] Implement all basic SQL types
- [ ] Implement more operations
- [ ] Fully implement joins
- [ ] Implement and Document Disjoint Subtypes pattern
- [ ] Add and document more hooks
- [ ] Comprehensive documentation for mock data testing
- [ ] Implement "Realworld" example application in a separate repository
- [ ] Implement Uuid support
- [ ] with_id() shouldn't need into()

# v0.3

- [ ] Implement associated records (update and save back)
- [ ] Implement table aggregations (group by)
- [ ] Implement NoSQL support
- [ ] Implement RestAPI support
- [ ] Implement Queue support
- [ ] Add expression as a field value (e.g. when inserting)
- [ ] Add delayed method evaluation as a field value (e.g. when inserting)
- [ ] Add tests for cross-database queries
- [ ] Explore replayability for idempotent operations and workflow retries
- [ ] Provide example for scalable worker pattern

# Someday maybe:

0.0.1: Query Building

- [x] create a basic query type
- [x] have query ability to render into a SQL query
- [x] add ability to have expressions
- [x] add ability to have where conditions
- [x] add support for datasource
- [x] add support for dataset
- [x] add integration with sqlite
- [x] add integration with postgres
- [x] implement insert query
- [x] implement delete query
- [x] implement operations: (field.eq(otherfield))
- [x] implement parametric queries
- [x] reimplement "with_condition" into "with_where_condition" and "with_having_condition"

0.0.2: Nested Query Building

- [x] properly handle nested queries
- [x] table should own DataSource, which should be cloneable and use Arc for client
- [x] implemented condition chaining
- [x] implemented and/or conditions
- [x] implemented expression query
- [x] implemented table::sum()
- [x] implemented TableDelegate trait
- [x] implemented Query::add_join()

0.0.3: Table Structure

- [x] add uniq id vendor
- [x] implemented Table::join_table() for merging tables
- [x] field prefixing with table alias/name (optional)
- [x] Table::join_table can be used to reference fields. Also add Table::with_join()
- [x] Table::join_table should preserve joins on other_table
- [x] When joining table, combine their UniqueIdVendors into one
- [x] Implement has_one and has_many in a lazy way
- [x] Implement expressions in a lazy way
- [x] Implemented bakery example

0.0.4: Improve Entity tracking and add target documentation

- [x] Add documentation for target vision of the library
- [x] Add "Entity" concept into Table
- [x] Add example on how to use traits for augmenting Table of specific Entity
- [x] Implement rendering of QueryType::Update so that we could update records
- [x] Refine "AnyTable" concept, so that we can use table as dyn without (if we want)
- [x] Check on "Join", they should allow for Entity mutation (joined table associated with a different entity)
- [x] Implement has_one and has_many in a correct way, moving functionality to Related Reference
- [x] Implement Unrelated Reference (when ref leads to a table with different Data Source)
- [x] Implement a better data fetching mechanism, using default entity
- [x] Restore functionality of bakery example
- [x] Implement ability to include sub-queries based on related tables

0.0.5: Refactor internal crates

- [x] Move ReadableDataSet and WritableDataSet to separate crate and document
- [x] Implement WritableDataSet for Table (almost)
- [ ] Implement todo in update() in WritableDataSet for Table
- [ ] Continue through the docs - align crates with documentation

Create integration test-suite for SQL testing
# Create integration test-suite for SQL testing

- [x] Quality of life improvements - nextest and improved assert_eq
- [x] Implement testcontainers postgres connectivity
- [x] Get rid of testcontainers (they don't work anyway), use regular Postgres
- [ ] Create separate test-suite, connect DB etc
- [x] Populate Bakery tables for tests
- [x] Seed some data into Bakery tests
- [ ] Make use of Postgres snapshots in the tests
- [ ] Add integration tests for update() and delete() for Table

Control field queries

- [x] Convert Field and &Field into Arc<Field> everywhere
- [x] Implement a way to create a query with custom field references
- [x] Implement a way to query with a serialized structure
- [x] Separate fields from active fields structure
- [x] Implement ability to specify which fields to query for
# Control field queries

- [ ] add tests for all CRUD operations (ID-less table)
- [ ] implemented `each` functionality for DataSet
@@ -86,23 +50,12 @@ Control field queries
- [ ] add tests for table conditions (add_condition(field1.eq(field2)))
- [ ] implement sub-library for datasource, supporting serde
- [ ] add second data-source (csv) as an example
- [x] datasource should convert query into result (traited)
- [x] select where a field is a sub-query
- [x] insert where a field value is an expression
- [x] insert where a field is imported from related table
- [x] select from a subquery
- [ ] add sql table as a dataset at a query level (+ clean up method naming)
- [ ] postgres expressions should add type annotation into query ("$1::text")
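The last item above can be illustrated with a small sketch: when rendering a Postgres parameter placeholder, appending an explicit cast such as `$1::text` saves the server from having to infer the parameter type. The `SqlType` enum and `placeholder` function below are hypothetical stand-ins for illustration, not part of the crate.

```rust
// Hypothetical sketch: render a Postgres placeholder with an explicit
// type annotation, e.g. "$1::text", so the server need not infer the type.

enum SqlType {
    Text,
    Int,
    Bool,
}

fn placeholder(index: usize, ty: &SqlType) -> String {
    // Map the abstract SQL type onto a Postgres cast target.
    let cast = match ty {
        SqlType::Text => "text",
        SqlType::Int => "bigint",
        SqlType::Bool => "boolean",
    };
    format!("${}::{}", index, cast)
}

fn main() {
    assert_eq!(placeholder(1, &SqlType::Text), "$1::text");
    assert_eq!(placeholder(2, &SqlType::Int), "$2::bigint");
}
```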

Practical tests:

- [x] Populate bakery tests
- [ ] Make bakery model more usable
- [ ] table.insert_query should quote field names (bug)

Lazy features:
Implement extensions:

- [ ] Implement join_table_lazy()
- [ ] Lazy table joins (read-only)
- [ ] Implement add_field_lazy()

Minor Cases:
@@ -112,135 +65,6 @@ Minor Cases:
- [ ] Condition::or() shouldn't be limited to only two arguments
- [ ] It should not be possible to change a table alias after ownership of Fields is given

Implementing examples:

- [x] Add query filters
- [x] Add sum() function

```rust
let vip_client = Table::new("client", db)
    .add_title("name")
    .add_field("is_vip")
    .add_condition("is_vip", true);

let sum = vip_client.sum("total_spent");
```

- [ ] Implement relations between tables

```rust
let mut clients = Table::new("client", db)
    .add_title("name")
    .add_field("is_vip");
let mut orders = Table::new("orders", db)
    .add_field("total");

clients.has_many("orders", orders, "order_id", "id");

let vip_total = clients.clone()
    .add_condition("is_vip", true)
    .ref("orders")
    .sum("total");
```

- [ ] Implement syntax sugar for models
- [ ] Implement support for types

```rust

#[vantage::table]
struct Client {
    name: String,
    is_vip: bool,
}

#[vantage::table]
struct Order {
    #[vantage::has_one(Client, "id")]
    user_id: i32,
    total: f64,
}

let vip_total = Client::new(db)
.add_condition(is_vip.eq(true))
.ref_orders()
.sum(total);
```

# Future features

## Implement persistence-aware model

By a model we mean a struct implementing ser/de traits that can be used with a
DataSet to load, store, and iterate over data. A basic implementation does not
need to be persistence-aware. A persistence-aware model, however, enables
id-tracked conditioning: the model knows where it was loaded from and can
update itself when changed, which can even happen on drop.

```rust
#[vantage::persistence(id = "my_id")]
struct Client {
    my_id: i32,
    name: String,
    is_vip: bool,

    _dsp: DataSourcePersistence, // required for a persistence-aware model
}

let client = ClientSet::new(db)
    .load(1);

db.transaction(|_| {
    client.orders.each(|order: Order| {
        order.price -= 10;
    });

    client.is_vip = true;
    client.save();
});
```

## Implement non-table SQL data source

The basic implementation allows using Table as an ORM data source. We can also
implement a read-only data source that has a query as its source.

TODO: a query-based model could be a curious feature, but this example should be
rewritten to use a different table-like construct, returned by the table.group() method.

```rust
struct GraphData {
    date: Date,
    value: f64,
}

struct DailyDeployments {
    table_deployment: Deployments,
    query: Query,
}

impl DailyDeployments {
    // Like Deployments, but with date grouping and a date range.
    pub fn new(ds: DataSource, date_from: Date, date_to: Date) -> Self {
        let td = Deployments::new(ds);
        let query = td
            .query_fields(vec![td.date(), td.value()])
            .add_condition(td.date().gte(date_from))
            .add_condition(td.date().lte(date_to))
            .group_by(td.date());

        Self { table_deployment: td, query }
    }

    pub fn date(&self) -> Field {
        self.query.field(0)
    }
}

let dd = DailyDeployments::new(db, Date::new(2020, 1, 1), Date::new(2020, 1, 31));
let data = dd.query().fetch::<GraphData>();
```

## Implement cross-datasource operations

Developers who operate with the models do not have to be aware of the data source.
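As a sketch of that idea (assuming the ReadableDataSet / WritableDataSet traits mentioned in the 0.0.5 notes; `VecSet` and `copy_all` here are hypothetical stand-ins, not crate API), model code written against the dataset traits can move records between sources without knowing what backs them:

```rust
// Hypothetical sketch: generic code over dataset traits, so the same
// operation works across any pair of data sources (SQL, CSV, REST, ...).

trait ReadableDataSet<T> {
    fn fetch(&self) -> Vec<T>;
}

trait WritableDataSet<T> {
    fn insert(&mut self, record: T);
}

// In-memory stand-in for a concrete data source.
struct VecSet<T> {
    records: Vec<T>,
}

impl<T: Clone> ReadableDataSet<T> for VecSet<T> {
    fn fetch(&self) -> Vec<T> {
        self.records.clone()
    }
}

impl<T> WritableDataSet<T> for VecSet<T> {
    fn insert(&mut self, record: T) {
        self.records.push(record);
    }
}

// The caller never learns which data source is behind either side.
fn copy_all<T, R: ReadableDataSet<T>, W: WritableDataSet<T>>(from: &R, to: &mut W) {
    for record in from.fetch() {
        to.insert(record);
    }
}

fn main() {
    let source = VecSet {
        records: vec!["Marty McFly".to_string(), "Doc Brown".to_string()],
    };
    let mut target = VecSet { records: Vec::new() };
    copy_all(&source, &mut target);
    assert_eq!(target.records.len(), 2);
}
```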
1 change: 1 addition & 0 deletions bakery_model/Cargo.toml
@@ -16,6 +16,7 @@ testcontainers-modules = { version = "0.8.0", features = [
tokio = "1.38.1"
tokio-postgres = "0.7.10"
sqlformat = "0.2.3"
sqlx = { version = "0.8.2", default-features = false, features = ["json", "postgres", "runtime-tokio"] }

[[example]]
name = "0-intro"
4 changes: 2 additions & 2 deletions bakery_model/examples/0-intro.rs
@@ -11,8 +11,8 @@ async fn create_bootstrap_db() -> Result<()> {
bakery_model::connect_postgres().await?;
let vantage_client = bakery_model::postgres();
let client = vantage_client.client();
let schema = tokio::fs::read_to_string("bakery_model/schema-pg.sql").await?;
client.batch_execute(&schema).await?;
let schema = tokio::fs::read_to_string("schema-pg.sql").await?;
sqlx::raw_sql(&schema).execute(client).await?;

Ok(())
}
4 changes: 1 addition & 3 deletions bakery_model/examples/1-soft-delete.rs
@@ -10,10 +10,8 @@ async fn create_bootstrap_db() -> Result<()> {
// Get the postgres client for batch execution
let vantage_client = bakery_model::postgres();
let client = vantage_client.client();

// Read the schema from the file and execute it
let schema = tokio::fs::read_to_string("bakery_model/schema-pg.sql").await?;
client.batch_execute(&schema).await?;
sqlx::raw_sql(&schema).execute(client).await?;

Ok(())
}
4 changes: 1 addition & 3 deletions bakery_model/examples/2-joined-tables.rs
@@ -11,10 +11,8 @@ async fn create_bootstrap_db() -> Result<()> {
// Get the postgres client for batch execution
let vantage_client = bakery_model::postgres();
let client = vantage_client.client();

// Read the schema from the file and execute it
let schema = tokio::fs::read_to_string("bakery_model/schema-pg.sql").await?;
client.batch_execute(&schema).await?;
sqlx::raw_sql(&schema).execute(client).await?;

Ok(())
}
19 changes: 16 additions & 3 deletions bakery_model/schema-pg.sql
@@ -66,14 +66,27 @@ VALUES
INSERT INTO
client (
name,
email,
contact_details,
is_paying_client,
bakery_id
)
VALUES
('Marty McFly', '555-1955', true, 1),
('Doc Brown', '555-1885', true, 1),
('Biff Tannen', '555-1955', false, 1);
(
'Marty McFly',
'[email protected]',
'555-1955',
true,
1
),
('Doc Brown', '[email protected]', '555-1885', true, 1),
(
'Biff Tannen',
'[email protected]',
'555-1955',
false,
1
);

INSERT INTO
product (name, calories, bakery_id, price)
30 changes: 2 additions & 28 deletions bakery_model/src/lib.rs
@@ -44,32 +44,6 @@ pub async fn connect_postgres() -> Result<()> {
let connection_string = std::env::var("DATABASE_URL")
.unwrap_or_else(|_| "postgres://postgres@localhost:5432/postgres".to_string());

let timeout = Duration::from_secs(3); // Max time to wait
let start_time = Instant::now();
let mut last_error: Result<()> = Ok(());

while Instant::now().duration_since(start_time) < timeout {
match tokio_postgres::connect(&connection_string, NoTls).await {
Ok((client, connection)) => {
tokio::spawn(async move {
if let Err(e) = connection.await {
eprintln!("connection error: {}", e);
}
});

set_postgres(Postgres::new(Arc::new(Box::new(client))))?;

println!("Successfully connected to the database.");
return Ok(());
}
Err(e) => {
println!("Error connecting to database: {}, retrying...", &e);
last_error = Err(anyhow::Error::new(e));
// sleep(Duration::from_secs(2)).await; // Wait before retrying
thread::sleep(Duration::from_millis(100));
}
}
}

last_error
let postgres = Postgres::new(&connection_string).await;
set_postgres(postgres)
}