From 9a7c67a1a0bed775248a2d6b7cb09207cc2d5573 Mon Sep 17 00:00:00 2001 From: Red Davies Date: Tue, 28 Oct 2025 15:17:57 -0400 Subject: [PATCH 1/3] Simplify the example The previous example with multiple tables, parsing json files etc was too complicated for a simple tutorial. This is simpler and illustrates all the issues. --- docs/simple/connecting.md | 21 +++-- docs/simple/custom-types-architecture.md | 102 +++++++++++++++++++++++ docs/simple/custom-types-json.md | 78 +++++++++++++++++ docs/simple/custom-types-testing.md | 68 +++++++++++++++ docs/simple/custom-types.md | 64 +++++++++++++- docs/simple/populating.md | 51 ++++++++---- docs/simple/queries.md | 34 ++++---- docs/simple/tablecreate.md | 81 +++++++++--------- docs/simple/tutorial.md | 64 +++++--------- mkdocs.yml | 7 +- 10 files changed, 449 insertions(+), 121 deletions(-) create mode 100644 docs/simple/custom-types-architecture.md create mode 100644 docs/simple/custom-types-json.md create mode 100644 docs/simple/custom-types-testing.md diff --git a/docs/simple/connecting.md b/docs/simple/connecting.md index ba6002e..2e63862 100644 --- a/docs/simple/connecting.md +++ b/docs/simple/connecting.md @@ -8,11 +8,21 @@ In this API, these are encapsulated as follows: * Database Handle, ODBCDbc * Statement Handle, ODBCStmt +## Brief Database Transaction Primer + +A database transaction is a sequence of one or more operations—such as reading, writing, updating, or deleting data—performed on a database as a single, logical unit of work. Transactions ensure that either all operations are successfully completed (committed) or none are applied (rolled back), maintaining data integrity even in the event of system failures. In other words, all commands in a transaction are either applied in an atomic manner, or rejected. + +ODBC by default commits every command you execute after you execute it. This is called "autocommit". + +Transactions are a best-practice, so we are going to utilize them in our tutorial from the very begining. + +In order to implement transactions, we need to disable autocommit. + ## General Structure of the API The vast majority of calls on ODBCEnv, ODBCDbc, and ODBCStmt return `Bool ?`. In other words, these are partial functions. -Structuring the API in this way means that we can serialize our calls and choose to fail / rollback a transaction if any of them fail. It makes for a clean interface. +Structuring the API in this way means that we can serialize our API calls into blocks of SQL commands, and then choose to fail / rollback a transaction if any of them fail. It makes for a very clean interface. ```pony use "debug" @@ -28,9 +38,9 @@ actor Main let enh: ODBCEnv = ODBCEnv try let dbh: ODBCDbc = enh.dbc()? + dbh.set_autocommit(false)? dbh.connect("psql-demo")? else - Debug.out("We were enable to create our database handle") end ``` @@ -42,17 +52,18 @@ First we create our Environment Object: let enh: ODBCEnv = ODBCEnv ``` -Then we create our Database Object and connect to our database using the DSN we configured previously in our .odbc.ini file: +Then we create our Database Object, set auto\_commit to false, and connect to our database using the DSN we configured previously in our .odbc.ini file: ```pony try let dbh: ODBCDbc = enh.dbc()? + dbh.set_autocommit(false)? dbh.connect("psql-demo")? 
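+    // With autocommit disabled, no work on this connection is persisted
+    // until an explicit dbh.commit()? - commit/rollback are covered later.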
 else
-    Debug.out("We were enable to create our database handle")
+    Debug.out("We were unable to connect to our database")
   end
 ```
 
 Once the `dbh.connect()?` call is complete, we have an authenticated connection to the database instance that we requested.
 
-Next up - let's create some tables!
+Next up - let's create our table!
diff --git a/docs/simple/custom-types-architecture.md b/docs/simple/custom-types-architecture.md
new file mode 100644
index 0000000..280dbab
--- /dev/null
+++ b/docs/simple/custom-types-architecture.md
@@ -0,0 +1,102 @@
+# SQL Type Architecture
+
+Each SQL Type implementation is a type-aware wrapper around a C buffer. This buffer is passed to the ODBC driver via `bind_parameter` (for inputs) or `bind_column` (for results). At a high level, we are responsible for:
+
+- Sizing / Allocating / Deallocating the C-buffer.
+- Marshalling / Unmarshalling from the C-buffer to the native pony types.
+
+## The SQLType trait
+
+Almost all of the common code is already implemented for you in SQLType. You should only need to implement the class fields, accessors, and the read()? / write functions.
+
+### Example: SQLInteger
+
+Let's look at SQLInteger line-by-line.
+
+```pony
+class SQLInteger is SQLType
+  """
+  The internal class which represents an Integer (I32)
+  """
+  var _v: CBoxedArray = CBoxedArray
+  var _err: SQLReturn val = SQLSuccess
+```
+
+When you create your class, you must mark your new SQL Type as a `SQLType`. This is the type that all the associated functions (such as `bind_parameter(SQLType): Bool ?`) will expect.
+
+Inside you need a minimum of two fields: one for the C Buffer (CBoxedArray), and one for SQLReturn.
+
+Next, we need a constructor - and as we know that a textual representation of a SQL Integer (I32) will never exceed 15 characters, we allocate 15 bytes:
+
+```pony
+  new create() => _v.alloc(15)
+```
+
+Next we need to provide accessors for both of these field variables, so that they can be manipulated by the API:
+
+```pony
+  fun \nodoc\ ref get_boxed_array(): CBoxedArray => _v
+  fun \nodoc\ ref set_boxed_array(v': CBoxedArray) => _v = v'
+
+  fun \nodoc\ ref get_err(): SQLReturn val => _err
+  fun \nodoc\ ref set_err(err': SQLReturn val) => _err = err'
+```
+
+Lastly, we implement marshallers and demarshallers:
+
+```pony
+  fun ref write(i32: I32): Bool =>
+    """
+    Write an I32 to this buffer as a string. The string will fit
+    as we defined the buffer to be 15 characters on initialization.
+
+    Will return true if written and verification succeeds.
+
+    Will return false if the string is too long for the buffer or
+    the readback doesn't match for some other reason.
+    """
+    _write(i32.string())
+
+  fun ref read(): I32 ? =>
+    """
+    Read the value of the buffer and convert it into an I32.
+
+    This function is partial just in case the string version
+    that comes from the database is not within the I32 range.
+
+    NOTE: SQLite does not enforce type limits, so if you use
+    that specific ODBC driver, this is something that must be
+    verified.
+    """
+    if (get_boxed_array().is_null()) then
+      error
+    else
+      _v.string().i32()?
+    end
+```
+
+## Useful trait functions
+
+There are a few functions in `SQLType` that are automatically available to either assist in debugging, or provide needed functionality. You do not need to do anything to use these:
+
+| Function Signature | Purpose |
+|-------------------------|----------------------------------------------------------------|
+| reset(): Bool | Resets and zeros the buffer to its initial state. |
+| string(): String iso^ | Returns the data as delivered by the database as a String. |
+| array(): Array[U8] iso^ | Returns the data as delieverd by the database as an Array[U8]. |
+| is\_null(): Bool | Returns if the buffer was populated by a SQL Null |
+| null() | Sets the buffer to represent a SQL Null |
+
+## Buffer Behaviour
+
+There is a difference in the way that buffers behave, depending on whether they are used as a parameter or as part of a result set:
+
+### Parameter
+
+You are responsible for ensuring that the buffer is of a sufficient size to hold the value you choose to write to it, and that the value is a valid String. If you do not, the trait's `_write()` will return false.
+
+THE BUFFER WILL NOT RESIZE IF YOU TRY TO STUFF SOMETHING TOO LARGE INTO IT. IT CAN'T, BECAUSE THE BUFFER IS ALREADY BOUND TO YOUR PREPARED STATEMENT.
+
+### Column (part of a result set)
+
+Even though you are responsible for ensuring that the buffer is of a sufficient size to hold the resultant value, the API *will* resize, rebind, and re-read the data for you. The newly created larger buffer replaces the previous one in the API, but as long as you don't poke into the `CBoxedArray` in the implementation, it should be seamless to you.
diff --git a/docs/simple/custom-types-json.md b/docs/simple/custom-types-json.md
new file mode 100644
index 0000000..5be477f
--- /dev/null
+++ b/docs/simple/custom-types-json.md
@@ -0,0 +1,78 @@
+# Implementing SQLJson
+
+Here is a simple example of how to implement a custom SQL Type: SQLJson.
+
+## Dependencies
+
+Firstly, we need to ensure that `ponylang/json` is included in our corral.json, like so:
+
+```console
+red@panic:~/project/psql-demo$ corral add github.com/ponylang/json.git --version 0.2.0
+red@panic:~/project/psql-demo$ corral fetch
+```
+
+## Starting our class
+
+We need to include the `ponylang/json` dependency, ensure that the class uses the `SQLType` trait, and create the boilerplate code.
+
+Unfortutely, since Json can be any arbitrary size, we will need our end-user to declare the size of the buffer they wish to use.
+
+Here's what we start with:
+
+```pony
+use "pony-odbc"
+use "json"
+
+class SQLJson is SQLType
+  """
+  An example class that represents a PostgreSQL Json type
+  """
+  var _v: CBoxedArray = CBoxedArray
+  var _err: SQLReturn val = SQLSuccess
+
+  new create(size: USize) => _v.alloc(size)
+    """
+    Creates a SQLJson object. You must specify the textual size at creation.
+    """
+
+  fun \nodoc\ ref get_boxed_array(): CBoxedArray => _v
+  fun \nodoc\ ref set_boxed_array(v': CBoxedArray) => _v = v'
+
+  fun \nodoc\ ref get_err(): SQLReturn val => _err
+  fun \nodoc\ ref set_err(err': SQLReturn val) => _err = err'
+```
+
+Now we simply refer to the [ponylang/json documentation](https://ponylang.github.io/json/json--index/) to work out how to serialize / deserialize the data.
+
+To serialize, we simply call `JsonDoc.string()`.
+
+```pony
+  fun ref write(json: JsonDoc): Bool =>
+    """
+    Write the serialized JsonDoc to the buffer.
+
+    Will return true if written and verification succeeds.
+
+    Will return false if the string is too long for the buffer or
+    the readback doesn't match for some other reason.
+    """
+    _write(json.string())
+```
+
+To deserialize, we simply call `JsonDoc.parse()?` on the string returned from the database:
+
+```pony
+  fun ref read(): JsonDoc ref ? =>
+    """
+    Once we have confirmed that the data is NOT NULL, we create a new JsonDoc instance and call `JsonDoc.parse()?` on it with the string to populate it.
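+
+    (Note: `parse()?` is partial - if the stored text is not valid
+    JSON, this read will raise an error.)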
+ """ + if (get_boxed_array().is_null()) then + error + else + var json: JsonDoc = JsonDoc + json.parse(_v.string())? + json + end +``` + +Hopefully you can see from this example, that we have gone out of our way to make the creation of custom SQL Types as painless as possible. diff --git a/docs/simple/custom-types-testing.md b/docs/simple/custom-types-testing.md new file mode 100644 index 0000000..96b8754 --- /dev/null +++ b/docs/simple/custom-types-testing.md @@ -0,0 +1,68 @@ +# Testing SQLJson + +In order to test our new custom class, let's update our tutorial example: + +## Add the json package + +```pony +use "json" +``` + +## Create a JsonDoc to store + +We'll use the simple example from the package documentation, and create two functions. One to write, one to read - and we'll display the serialized output directly from JsonDoc object our SQLJson created! + +```pony + var json: JsonDoc = JsonDoc + var obj: JsonObject = JsonObject + + obj.data("key") = "value" + obj.data("property") = true + obj.data("array") = JsonArray.from_array([ as JsonType: I64(1); F64(2.5); false]) + json.data = obj + + write_json_record(sth, json)? + Debug.out(read_json_record(sth)?.string()) +``` + +## Write the JsonDoc (write\_json\_record) + +In this function, we'll just insert our JsonDoc into a row with the name "Json Record". Note that this function takes a pony class `JsonDoc`. + +```pony + fun write_json_record(sth: ODBCStmt, json: JsonDoc)? => + var name: SQLVarchar = SQLVarchar(254) + var jsonfrag: SQLJson = SQLJson(1023) + sth + .> prepare("insert into psqldemo (name,jsonfragment) values (?,?)")? + .> bind_parameter(name)? + .> bind_parameter(jsonfrag)? + + name.write("Json Record") + jsonfrag.write(json) + + sth + .> execute()? + .> finish()? +``` + +## Read our JsonDoc (read\_json\_record) + +In this function we'll read the row and return a `JsonDoc`. + +```pony + fun read_json_record(sth: ODBCStmt): JsonDoc ? => + var name: SQLVarchar = SQLVarchar(254) + var jsonfrag: SQLJson = SQLJson(1023) + sth + .> prepare("select jsonfragment from psqldemo where name = ?")? + .> bind_parameter(name)? + .> bind_column(jsonfrag)? + + name.write("Json Record") + sth + .> execute()? + .> fetch()? + .> finish()? + jsonfrag.read()? +``` diff --git a/docs/simple/custom-types.md b/docs/simple/custom-types.md index dcf2c80..55f8eb7 100644 --- a/docs/simple/custom-types.md +++ b/docs/simple/custom-types.md @@ -1 +1,63 @@ -# Placeholder +# Why Custom Types? + +As all data going in and out of the ODBC API goes in and out in a textual format by default for portability, it's very rare that it is something you need to do at all. + +You can just use SQLVarchars, or create your own type aliases for code readability like this: + +```pony +type SQLXml is SQLVarchar +type SQLJson is SQLVarchar +type SQLTimestampWTz is SQLVarchar +``` + +… and in our example: + +```pony + var name: SQLVarchar = SQLVarchar(254) + var xmlfrag: SQLXml = SQLXml(1024) + var jsonfrag: SQLJson = SQLJson(1024) + + sth + .> prepare("insert into psqldemo (name, xmlfragment, jsonfragment) values (?,?,?)")? + .> bind_parameter(name)? + .> bind_parameter(xmlfrag)? + .> bind_parameter(jsonfrag)? + + name.write("Some Name") + xmlfrag.write( + """ + bar + """) + + jsonfrag.write( + """ + { + "packages": [], + "deps": [ + { + "locator": "github.com/redvers/pony-odbc.git" + } + ] + } + """) + + sth.execute()? 
+``` + +But, wouldn't it be better if we could create SQL Types that give us legitimate pony classes for our inputs and outputs like this: + +```pony + // SQLXml + fun write(xml: Xml2Doc): Bool + fun read(): Xml2Doc ? + + // SQLJson + fun write(json: JsonDoc): Bool + fun read(): JsonDoc ? + + // SQLTimestampWTz + fun write(ts: PosixDate, tz: String): Bool + fun read(): (PosixDate, String iso^) ? +``` + +In the following pages, we will fully implement SQLJson, compatable with `ponylang/json`. diff --git a/docs/simple/populating.md b/docs/simple/populating.md index d5e6f2f..bd318b3 100644 --- a/docs/simple/populating.md +++ b/docs/simple/populating.md @@ -1,6 +1,6 @@ # Populating Our Tables -We're going to manually insert two of the Play names into our database to make the process as clear as possible: +We're going to manually insert two rows into this table to illustrate the simplest case. ## Preparing Statements @@ -9,42 +9,65 @@ When we are executing any statements that use any kind of user-input or are goin A prepared statement allows you to create tokens in your SQL Statement that are replaced with data at runtime. For example: ```pony -sth.prepare("insert into play (name) values (?)")? +sth.prepare("insert into psqldemo (name) values (?)")? ``` This creates a _parameter_ placeholder which we need to populate with the correct value to insert into the database. -Let's write a simple function to populate the play table: +Let's write a simple function to populate our table: ```pony -fun populate_play_table(sth: ODBCStmt)? => - var name: SQLVarchar = SQLVarchar(31) +fun populate_demo_table(sth: ODBCStmt)? => + var name: SQLVarchar = SQLVarchar(254) sth - .> prepare("insert into play (name) values (?)")? + .> prepare("insert into psqldemo (name) values (?)")? .> bind_parameter(name)? ``` -When we bind parameters we have to bind them in order. If there were multiple parameters in our `prepare()?` statement then we would have to execute multiple `bind_parameter()?` statements, with the variables and types in the correct order. We will see this later when we populate more complex tables. +When we bind parameters we have to bind them in order. If there were multiple parameters in our `prepare()?` statement then we would have to execute multiple `bind_parameter()?` statements, with the variables and types in the correct order. We will see this later when we populate more fields. Now that we have our statement fully prepared, we can populate data and execute it: ```pony - name.write("Romeo and Juliet") + name.write("First Simple Row") sth.execute()? - name.write("A Midsummer nights dream") - sth.execute()? + name.write("Second Simple Row") + sth + .> execute()? + .> finish()? +``` + +`name` is a pointer to a buffer which we have told our database will contain our argument. So we populate that buffer with our data, execute our query, populate it with different data, execute the query again. + +The `.finish()?` call tells the database that we are no longer using our `sth` handle for that prepared statement, and that we will never execute that prepared query again. + +Yes, we could have achieved the same thing with: + +```pony + sth.direct_exec("insert into psqldemo (name) values ('First Simple Row')")? + sth.direct_exec("insert into psqldemo (name) values ('Second Simple Row')")? ``` -Please note that we are reusing the same statement handle (sth), and the same bound parameter (name). 
In doing so we do not have to do the SQL statement parsing, query setup, memory allocation for our buffers, and binding said newly allocated buffers.
+But using prepared statements means that not only do we not have to worry about SQL Injection, but:
+
+- The SQL Statement is only parsed once.
+- The associated objects are allocated only once.
+- The input/output buffers are allocated only once, and re-used.
+- The binding of buffers is only done once.
+
+When we expand to hundreds or thousands of queries, this makes a significant difference.
 
-Of course, in our example you'll need to add a call to this function on line 17 of our example:
+Of course, if we choose, we can now treat both the table creation and these row insertions as a single transaction like this:
 
 ```pony
-let sth: ODBCStmt = dbh.stmt()?
+  let sth: ODBCStmt = dbh.stmt()?
+  try
-    create_tables(sth)?
+    create_table(sth)?
-    populate_play_table(sth)?
+    populate_demo_table(sth)?
+//  dbh.commit()?
   else
+//  dbh.rollback()?
 ```
 
 Next up, let's query the data!
diff --git a/docs/simple/queries.md b/docs/simple/queries.md
index 60f0099..27ec722 100644
--- a/docs/simple/queries.md
+++ b/docs/simple/queries.md
@@ -1,31 +1,31 @@
 # Simple Queries
 
-Let's write a simple function to query the database for the id (I64) for a specific play in the play table.
+Let's write a simple function to query the database for the id (I64) for a row with a specific name.
 
 ## Preparing the Query
 
-A reminder that our tables schema looks like this:
+A reminder that the part of the table we're interested in looks like this:
 
 ```sql
-CREATE TEMPORARY TABLE play (
+CREATE TABLE psqldemo (
   id BIGSERIAL,
-  name VARCHAR(30) UNIQUE NOT NULL
-);
+  name VARCHAR(254) UNIQUE NOT NULL,
+  ⋮
 ```
 
 In order to fulfil our function, we will need to provide a SQLVarchar _in_, and a SQLBigInteger _out_.
 
 ```pony
-fun play_id_from_name(sth: ODBCStmt, queryname: String): I64 ?
+fun id_from_name(sth: ODBCStmt, qname: String): I64 ?
 =>
   var id: SQLBigInteger = SQLBigInteger
-  var name: SQLVarchar = SQLVarchar(31)
+  var name: SQLVarchar = SQLVarchar(254)
 ```
 
 Like before, we need to bind our name _parameter_ to our query using `bind_parameter()?`. In addition, we need to bind a _column_ for every column that will be in the query's result set. We do this using the somewhat intuitive `bind_column()?` function:
 
 ```pony
   sth
-    .> prepare("select id from play where name = ?")?
+    .> prepare("select id from psqldemo where name = ?")?
     .> bind_parameter(name)?
     .> bind_column(id)?
 ```
 
 Then we can write our value to the name _parameter_, execute the query and fetch the (singular, due to name being UNIQUE) result back.
 
 ```pony
-  name.write(queryname)
-
+  name.write(qname)
   sth.execute()?
 
   if (sth.fetch()?) then
@@ -50,13 +49,16 @@ NOTE: There is a trap here. You *must* check the return value of `fetch()?`. If
 Let's add some example calls to our Main.create function to test this:
 
 ```pony
-  create_tables(sth)?
-  populate_play_table(sth)?
-  Debug.out("  R&J: " + play_id_from_name(sth, "Romeo and Juliet")?.string())
-  Debug.out("MSND: " + play_id_from_name(sth, "A Midsummer nights dream")?.string())
+  let sth: ODBCStmt = dbh.stmt()?
   try
-    play_id_from_name(sth, "I don't exist")?
+    create_table(sth)?
+    Debug.out("Successfully created table: psqldemo")
+    populate_demo_table(sth)?
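+    // At this point the table and both rows exist only inside our
+    // still-open transaction - nothing is committed yet.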
+    Debug.out("Successfully written two rows")
+
+    Debug.out("First Simple Row: " + id_from_name(sth, "First Simple Row")?.string())
+//  dbh.commit()?
   else
-    Debug.out("I don't exist doesn't exist™")
+//  dbh.rollback()?
   end
 ```
diff --git a/docs/simple/tablecreate.md b/docs/simple/tablecreate.md
index 2f34463..0f192bf 100644
--- a/docs/simple/tablecreate.md
+++ b/docs/simple/tablecreate.md
@@ -1,9 +1,13 @@
-# Creating Our Tables
+# Creating Our Table
 
 In order to create our tables, we simply execute the SQL commands. As there are no parameters, we can use the function `direct_exec("my sql statement")?` which will either succeed or fail.
 
 ## Creating Statements
 
+There are two SQL Statements needed to create our table. The first creates the table, the second `ALTER`s the table to add a PRIMARY KEY constraint. We need both of these statements to either succeed together or fail together. If they don't, we could end up in a situation where the table exists without its primary key, which could lead to data inconsistency.
+
+For ease of following this tutorial, we will put the `commit()?` and `rollback()?` calls in the sources for illustration, but leave them commented out. This will result in no changes being committed, effectively making the table `TEMPORARY` so we can run the example again and again without having to drop the table between runs.
+
 In our application, let's make a Statement Handle and pass it to a function which will create our tables. Here is our full example thus far:
 
 ```pony
 use "debug"
 use "pony-odbc"
 
 actor Main
   let enh: ODBCEnv = ODBCEnv
   try
     let dbh: ODBCDbc = enh.dbc()?
+    dbh.set_autocommit(false)?
     dbh.connect("psql-demo")?
     let sth: ODBCStmt = dbh.stmt()?
     try
-      create_tables(sth)?
+      create_table(sth)?
+      Debug.out("Successfully created table: psqldemo")
+//    dbh.commit()?
     else
       Debug.out("Our create_tables() function threw an error")
+//    dbh.rollback()?
       error
     end
   else
@@ -33,55 +41,44 @@ actor Main
   end
 ```
 
+So to tease this out:
+
+```pony
+    try
+      create_table(sth)?
+      Debug.out("Successfully created table: psqldemo")
+//    dbh.commit()?
+    else
+      Debug.out("Our create_tables() function threw an error")
+//    dbh.rollback()?
+```
+
+Were the commit/rollback calls uncommented: if the `create_table(sth)?` call fully succeeded, the `dbh.commit()?` function would be called and the changes committed to the database. If it failed, the `dbh.rollback()?` function would be called - leaving us in a known state.
+
-Now lets create our function to make these (temporary) tables.
+Now let's create our function to make this (effectively temporary) table.
 
 ```pony
-  fun create_tables(sth: ODBCStmt)? =>
-    .> direct_exec(
-      """
-      CREATE TEMPORARY TABLE play (
-        id BIGSERIAL,
-        name VARCHAR(30) NOT NULL
-      );
-      """)?
+  fun create_table(sth: ODBCStmt)? =>
+    sth
     .> direct_exec(
       """
-ALTER TABLE play ADD CONSTRAINT play_pkey PRIMARY KEY (id);
-      """)?
-    .> direct_exec(
-      """
-      CREATE TEMPORARY TABLE player (
-        id BIGSERIAL,
-        name VARCHAR(20) NOT NULL
+      CREATE TABLE psqldemo (
+        id BIGSERIAL,
+        name VARCHAR(254) UNIQUE NOT NULL,
+        xmlfragment XML,
+        jsonfragment JSON,
+        insert_ts TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT current_timestamp
       );
       """)?
     .> direct_exec(
       """
-      ALTER TABLE player ADD CONSTRAINT player_pkey PRIMARY KEY (id);
-      """)?
-    .> direct_exec(
-      """
-      CREATE TEMPORARY TABLE line (
-        id BIGSERIAL,
-        id_play INTEGER,
-        id_player INTEGER,
-        playerlinenumber INTEGER,
-        actsceneline VARCHAR(15),
-        playerline VARCHAR(127) NOT NULL
-      );
-      """)?
-    .> direct_exec(
-      """
-      ALTER TABLE line ADD CONSTRAINT line_pkey PRIMARY KEY (id);
-      """)?
- .> direct_exec( - """ - ALTER TABLE line ADD CONSTRAINT line_id_play_fkey FOREIGN KEY (id_play) REFERENCES play(id); - """)? - .> direct_exec( - """ - ALTER TABLE line ADD CONSTRAINT line_id_player_fkey FOREIGN KEY (id_player) REFERENCES player(id); + ALTER TABLE psqldemo ADD CONSTRAINT psqldemo_pkey PRIMARY KEY (id); """)? + .> finish()? ``` -Since we are not reading or writing any parameters in this example, we can simply use the `direct_exec()?` function which does, as the name suggests - direct execution. +When we are executing SQL commands that don't require any parameters, we use the `direct_exec()?` function, which executes the command immediately. + +Yes, you *can* place parameters in the SQL statement, but there are very good security and performance reasons to *NOT* do this, which we will cover in the next section. diff --git a/docs/simple/tutorial.md b/docs/simple/tutorial.md index 423dfb0..c81046c 100644 --- a/docs/simple/tutorial.md +++ b/docs/simple/tutorial.md @@ -1,57 +1,39 @@ # Simple API Tutorial -In this tutorial we're going to write a somewhat simple database application to store and query lines from two of William Shakespeare's plays. +In this tutorial we're going to: -Admittedly, the Schema will be over-engineered for demonstration reasons. - -In our application we will check to see if our table exists, and if not, we'll create it. - -Then we will parse the JSON files in the data/ directory and populate the tables. - -Then we will do various queries on our data. +- Create a table +- Write to a table +- Perform queries +- Extend our program with three custom SQL types that are unsupported by the ODBC standard. ## Schema -![Schema Image](../assets/shakespeare-schema.png) +Our very simple table (psqldemo) is defined as follows: -The tables are defined as follows: +| Column Name | SQLType | Nullable | Default | +|--------------|----------------|----------|-------------------| +| id | bigint | No | Autoincrements | +| name | varchar(254) | No | | +| xmlfragment | xml | Yes | NULL | +| jsonfragment | json | Yes | NULL | +| insert\_ts | timestamp w/TZ | No | Current Timestamp | -### Table: play +The first two fields use standard ODBC SQL datatypes. The last three we will be building custom SQL types for. -```sql -CREATE TABLE play ( - id BIGSERIAL, - name BIGSERIAL NOT NULL -); +### Table: psqldemo -ALTER TABLE play ADD CONSTRAINT play_pkey PRIMARY KEY (id); -``` +The SQL required to create our example table is below. Usually the tables are created independently of applications, but for completeness - we will have our program create our table. 
-### Table: player ```sql -CREATE TABLE player ( - id BIGSERIAL, - name VARCHAR(20) NOT NULL +CREATE TABLE psqldemo ( + id BIGSERIAL, + name VARCHAR(254) UNIQUE NOT NULL, + xmlfragment XML, + jsonfragment JSON, + insert_ts TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT current_timestamp ); -ALTER TABLE player ADD CONSTRAINT player_pkey PRIMARY KEY (id); -``` - -### Table: line - -```sql -CREATE TABLE line ( - id BIGSERIAL, - id_play INTEGER, - id_player INTEGER, - playerlinenumber INTEGER, - actsceneline VARCHAR(15), - playerline VARCHAR(127) NOT NULL DEFAULT 'NULL' -); - -ALTER TABLE line ADD CONSTRAINT line_pkey PRIMARY KEY (id); - -ALTER TABLE line ADD CONSTRAINT line_id_play_fkey FOREIGN KEY (id_play) REFERENCES play(id); -ALTER TABLE line ADD CONSTRAINT line_id_player_fkey FOREIGN KEY (id_player) REFERENCES player(id); +ALTER TABLE psqldemo ADD CONSTRAINT psqldemo_pkey PRIMARY KEY (id); ``` diff --git a/mkdocs.yml b/mkdocs.yml index 673ec4c..7fbcfa6 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -81,12 +81,15 @@ nav: - Tutorial Schema: "simple/tutorial.md" - Including pony-odbc: "simple/stage1.md" - Connecting To Our Database: "simple/connecting.md" +# - Transactions, Commits, and Rollbacks: "simple/transactions.md" - Creating Our Table: "simple/tablecreate.md" - About SQL Types: "simple/sqltypes.md" - Populating Our Table: "simple/populating.md" - Simple Queries: "simple/queries.md" - - Transactions, Commits, and Rollbacks: "simple/transactions.md" - - Implementing Custom Types: "simple/custom-types.md" + - Why Custom Types: "simple/custom-types.md" + - Custom Type Architecture: "simple/custom-types-architecture.md" + - Implementing SQLJson: "simple/custom-types-json.md" + - Testing SQLJson: "simple/custom-types-testing.md" - The "Raw" API: - Overview: "raw/index.md" - The "ORM" API: From df98bc95a9ada719cff47013c8972f82e4939fc5 Mon Sep 17 00:00:00 2001 From: Red Davies Date: Fri, 31 Oct 2025 12:10:02 -0400 Subject: [PATCH 2/3] Linting --- .spelling-wordlist.txt | 16 ++++++++++++++++ docs/simple/connecting.md | 2 +- docs/simple/custom-types-architecture.md | 2 +- docs/simple/tutorial.md | 1 - 4 files changed, 18 insertions(+), 3 deletions(-) diff --git a/.spelling-wordlist.txt b/.spelling-wordlist.txt index 5bcd58f..88c1b46 100644 --- a/.spelling-wordlist.txt +++ b/.spelling-wordlist.txt @@ -1,11 +1,16 @@ +accessors API APIs autocommit BIGINT +Bool +CBoxedArray Datatypes DBC DBD DBI +Deallocating +deserialize dotfile DSN DSNs @@ -13,12 +18,17 @@ ENV hostname ie ini +iso iODBC jdbc +json +Json +JsonDoc JSON ldd libodbc lockin +marshallers Makefile Nullable NULLs @@ -36,6 +46,7 @@ pony-odbc postgres postgresql postgreSQL +psqldemo RDBMS schemas se @@ -44,15 +55,20 @@ sth SQLBigInteger SQLFloat SQLInteger +SQLJson SQLNull SQLReal +SQLReturn SQLSmallInteger SQLState SQLStates SQLTables +SQLType SQLVarchar +SQLVarchars STMT targeting unixODBC unix +Unmarshalling VARCHAR diff --git a/docs/simple/connecting.md b/docs/simple/connecting.md index 2e63862..22dddd9 100644 --- a/docs/simple/connecting.md +++ b/docs/simple/connecting.md @@ -14,7 +14,7 @@ A database transaction is a sequence of one or more operations—such as reading ODBC by default commits every command you execute after you execute it. This is called "autocommit". -Transactions are a best-practice, so we are going to utilize them in our tutorial from the very begining. +Transactions are a best-practice, so we are going to utilize them in our tutorial from the very beginning. 
In order to implement transactions, we need to disable autocommit. diff --git a/docs/simple/custom-types-architecture.md b/docs/simple/custom-types-architecture.md index 280dbab..6f5958a 100644 --- a/docs/simple/custom-types-architecture.md +++ b/docs/simple/custom-types-architecture.md @@ -83,7 +83,7 @@ There are a few functions in `SQLType` that are automatically available to eithe |-------------------------|----------------------------------------------------------------| | reset(): Bool | Resets and zeros the buffer to its initial state. | | string(): String iso^ | Returns the data as delivered by the database as a String. | -| array(): Array[U8] iso^ | Returns the data as delieverd by the database as an Array[U8]. | +| array(): Array[U8] iso^ | Returns the data as delivered by the database as an Array[U8]. | | is\_null(): Bool | Returns if the buffer was populated by a SQL Null | | null() | Sets the buffer to represent a SQL Null | diff --git a/docs/simple/tutorial.md b/docs/simple/tutorial.md index c81046c..cbfeddb 100644 --- a/docs/simple/tutorial.md +++ b/docs/simple/tutorial.md @@ -25,7 +25,6 @@ The first two fields use standard ODBC SQL datatypes. The last three we will be The SQL required to create our example table is below. Usually the tables are created independently of applications, but for completeness - we will have our program create our table. - ```sql CREATE TABLE psqldemo ( id BIGSERIAL, From 381483250f85aeab3470bffecac00f546f4687b4 Mon Sep 17 00:00:00 2001 From: Red Davies Date: Fri, 31 Oct 2025 12:19:35 -0400 Subject: [PATCH 3/3] More linting --- .spelling-wordlist.txt | 10 ++++++++++ docs/simple/custom-types-json.md | 2 +- docs/simple/custom-types.md | 2 +- 3 files changed, 12 insertions(+), 2 deletions(-) diff --git a/.spelling-wordlist.txt b/.spelling-wordlist.txt index 88c1b46..aae873b 100644 --- a/.spelling-wordlist.txt +++ b/.spelling-wordlist.txt @@ -2,10 +2,14 @@ accessors API APIs autocommit +Autoincrements BIGINT +bigint Bool CBoxedArray Datatypes +datatypes +demarshallers DBC DBD DBI @@ -22,6 +26,7 @@ iso iODBC jdbc json +jsonfragment Json JsonDoc JSON @@ -68,7 +73,12 @@ SQLVarchar SQLVarchars STMT targeting +TZ +uncommented unixODBC unix Unmarshalling VARCHAR +varchar +xml +xmlfragment diff --git a/docs/simple/custom-types-json.md b/docs/simple/custom-types-json.md index 5be477f..0636d2b 100644 --- a/docs/simple/custom-types-json.md +++ b/docs/simple/custom-types-json.md @@ -15,7 +15,7 @@ red@panic:~/project/psql-demo$ corral fetch We need to include the `ponylang/json` dependency, ensure that the class uses the `SQLType` trait, and create the boilerplate code. -Unfortutely, since Json can be any arbitrary size, we will need our end-user to declare the size of the buffer they wish to use. +Unfortunately, since Json can be any arbitrary size, we will need our end-user to declare the size of the buffer they wish to use. Here's what we start with: diff --git a/docs/simple/custom-types.md b/docs/simple/custom-types.md index 55f8eb7..c448952 100644 --- a/docs/simple/custom-types.md +++ b/docs/simple/custom-types.md @@ -60,4 +60,4 @@ But, wouldn't it be better if we could create SQL Types that give us legitimate fun read(): (PosixDate, String iso^) ? ``` -In the following pages, we will fully implement SQLJson, compatable with `ponylang/json`. +In the following pages, we will fully implement SQLJson, compatible with `ponylang/json`.