19 changes: 19 additions & 0 deletions .spelling-wordlist.txt
@@ -1,5 +1,7 @@
API
APIs
BIGINT
Datatypes
DBC
DBD
DBI
@@ -11,9 +13,18 @@ hostname
ini
iODBC
jdbc
JSON
ldd
libodbc
lockin
Makefile
Nullable
NULLs
odbc
ODBC
ODBCEnv
ODBCDbc
ODBCStmt
ODBC's
ORM
pony
@@ -26,9 +37,17 @@ postgreSQL
RDBMS
schemas
se
SMALLINT
SQLBigInteger
SQLFloat
SQLInteger
SQLReal
SQLSmallInteger
SQLState
SQLStates
SQLVarchar
STMT
targeting
unixODBC
unix
VARCHAR
Binary file added docs/assets/shakespeare-schema.png
58 changes: 58 additions & 0 deletions docs/simple/connecting.md
@@ -0,0 +1,58 @@
# Connecting To Our Database

As we mentioned in our overview, before we can create any queries, we need a Statement Handle. In order to get a Statement Handle, we need a Database Handle. In order to get a Database Handle, we need an Environment Handle.

In this API, these are encapsulated as follows:

* Environment Handle, ODBCEnv
* Database Handle, ODBCDbc
* Statement Handle, ODBCStmt

## General Structure of the API

The vast majority of calls on ODBCEnv, ODBCDbc, and ODBCStmt return `Bool ?`. In other words, these are partial functions.

Structuring the API in this way means that we can chain our calls in sequence and choose to fail or roll back a transaction if any of them errors. It makes for a clean interface.

```pony
use "debug"
use "pony-odbc"
use "lib:odbc"

actor Main
  let env: Env

  new create(env': Env) =>
    env = env'

    let enh: ODBCEnv = ODBCEnv
    try
      let dbh: ODBCDbc = enh.dbc()?
      dbh.connect("psql-demo")?
    else
      Debug.out("We were unable to create our database handle")
    end
```

Let's tease this apart:

First we create our Environment Object:

```pony
let enh: ODBCEnv = ODBCEnv
```

Then we create our Database Object and connect to our database using the DSN we configured previously in our `.odbc.ini` file:

```pony
try
  let dbh: ODBCDbc = enh.dbc()?
  dbh.connect("psql-demo")?
else
  Debug.out("We were unable to create our database handle")
end
```

Once the `dbh.connect()?` call is complete, we have an authenticated connection to the database instance that we requested.

Next up - let's create some tables!
1 change: 1 addition & 0 deletions docs/simple/custom-types.md
@@ -0,0 +1 @@
# Placeholder
33 changes: 32 additions & 1 deletion docs/simple/index.md
@@ -1 +1,32 @@
# Simple API
# Simple API Overview

The main purpose of this Simple API is to provide 95% of the functionality required to write any database integration you need, without exposing you to the sharp corners of the C API.

## Philosophy

We have tried to keep the design of this API consistent so that writing your code is as unobtrusive as possible. Here are the principles we have followed:

### Transactions and Code Blocks

When writing a database application, it is common to batch SQL statements into transactions, because we want those statements to succeed or fail as one unit. If the statements succeed, we `commit` the transaction. If any of them fail, we `rollback` the transaction, and it is as if none of the statements had been executed at all.

Pony has a mechanism that is well suited to this: partial functions!

```pony
try
  statement_handle.direct_exec("some SQL command")?
  statement_handle.direct_exec("some other SQL command")?
  statement_handle.direct_exec("etc etc etc ...")?
  database_handle.commit()
else
  database_handle.rollback()
end
```

### SQL Datatypes

There are two ways to get data in and out of the ODBC API: via native C types, or via a textual representation. After much experimentation, and the discovery that consistency is not a virtue shared across database vendors, we have chosen the textual representation.

This means that all SQL Datatypes are read and written via text buffers, the sizes of which need to be determined in advance.

It also means that custom database types can be added trivially: define the buffer's size and write type-specific read() and write() functions. See [Implementing Custom Types](https://redvers.github.io/odbc-tutorial/simple/custom-types.html) for more details.
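
To make that pattern concrete, here is a minimal sketch of what a custom type could look like. Everything in it (the name `SQLUuid`, the field and method shapes) is an illustrative assumption rather than pony-odbc's actual interface; the point is only that a custom type reduces to a fixed buffer size, NULL tracking, and type-specific conversions.

```pony
// Illustrative sketch only: SQLUuid and these method shapes are assumptions,
// not the library's verified API.
class SQLUuid
  """A hypothetical custom type: a UUID carried as text."""
  let buffer_size: USize = 36     // buffer size determined in advance
  var _text: String = ""
  var _null: Bool = true

  fun ref write(value: String) =>
    // Pony value -> text buffer
    _text = value
    _null = false

  fun read(): String ? =>
    // text buffer -> Pony value; errors if the value is NULL
    if _null then error end
    _text

  fun is_null(): Bool => _null
```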
1 change: 1 addition & 0 deletions docs/simple/populating.md
@@ -0,0 +1 @@
# Placeholder
1 change: 1 addition & 0 deletions docs/simple/queries.md
@@ -0,0 +1 @@
# Placeholder
42 changes: 42 additions & 0 deletions docs/simple/sqltypes.md
@@ -0,0 +1,42 @@
# About SQL Types

As mentioned previously, all data is passed in and out of the database via textual buffers. In principle you could just use the SQLVarchar type for all input and output parameters and do the type conversions yourself. However, we have provided the default ODBC SQL Types as follows:

| ODBC SQL Type | Pony API Type | Pony Type | Max String Size |
|----------------|-----------------|-----------|-----------------|
| SMALLINT | SQLSmallInteger | I16 | 8 |
| INTEGER | SQLInteger | I32 | 15 |
| BIGINT | SQLBigInteger | I64 | 22 |
| FLOAT | SQLFloat | F32 | 20 |
| REAL | SQLReal | F64 | 30 |
| VARCHAR | SQLVarchar | String | N/A |

## All about NULL

Every SQL Datatype we create must also support a Nullable version. Rather than defining duplicate types for everything, we took the following approach:

## Reading NULLs

```pony
var my_sql_integer: SQLInteger = SQLInteger
/* Stuff happens here (which we shall explain on the
 * next page) that populates this variable with the
 * result of our query. */

if my_sql_integer.is_null() then
  // The value returned from the database was SQLNull
else
  let myint: I32 = my_sql_integer.read()?
end
```

If your schema has marked a column as `NOT NULL`, then you can safely call `read()?` without testing for NULL.

If, however, you call `read()?` without testing for NULL and the value is in fact NULL, the function will error.
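
If you know a NULL may turn up, you can also rely on the partial call itself rather than testing first. A small sketch, reusing `my_sql_integer` from the example above:

```pony
try
  let myint: I32 = my_sql_integer.read()?
  // use myint: the value was not NULL
else
  // read()? errored, i.e. the value was NULL (or the read otherwise failed)
end
```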

## Writing NULLs

All Pony SQL Types default to NULL, so all you need to do is create your object and not set a value.

If you are reusing an object, you can either call `reset()` or `null()` (if you want to be more explicit).
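
For example (an illustrative fragment; the parameter binding and execution steps in between are elided):

```pony
var qty: SQLInteger = SQLInteger  // freshly created, so it is NULL
// ... bind `qty` as a parameter and execute: a NULL is written ...

// When reusing the same object for a later execution:
qty.reset()  // back to NULL
// or, if you prefer to be explicit:
qty.null()
```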
78 changes: 78 additions & 0 deletions docs/simple/stage1.md
@@ -0,0 +1,78 @@
# Including pony-odbc

Let's go ahead and create a new pony project.

```shell
red@panic:~/projects$ mkdir psql-demo
red@panic:~/projects$ cd psql-demo
red@panic:~/projects/psql-demo$ corral init
red@panic:~/projects/psql-demo$ corral add github.com/redvers/pony-odbc.git --version 0.3.0
red@panic:~/projects/psql-demo$ corral fetch
git cloning github.com/redvers/pony-odbc.git into /home/red/projects/psql-demo/_repos/github_com_redvers_pony_odbc_git
git checking out @0.3.0 into /home/red/projects/psql-demo/_corral/github_com_redvers_pony_odbc
red@panic:~/projects/psql-demo$
```

Let's create a very minimal Makefile:

```make
all:
	corral run -- ponyc -d
	./psql-demo
```

... and our initial main.pony

```pony
use "pony-odbc"
use "lib:odbc" // For unixODBC. For iODBC, use "lib:iodbc"

actor Main
  let env: Env

  new create(env': Env) =>
    env = env'
```

Now go ahead and run `make`, then run `ldd` to double-check that the library linked correctly:

```shell
red@panic:~/projects/psql-demo$ make
corral run -- ponyc -d
exit: Exited(0)
out:
err: Building builtin -> /home/red/.local/share/ponyup/ponyc-release-0.59.0-x86_64-linux-ubuntu24.04/packages/builtin
Building . -> /home/red/projects/psql-demo
Building pony-odbc -> /home/red/projects/psql-demo/_corral/github_com_redvers_pony_odbc/pony-odbc
Building debug -> /home/red/.local/share/ponyup/ponyc-release-0.59.0-x86_64-linux-ubuntu24.04/packages/debug
Building ffi -> /home/red/projects/psql-demo/_corral/github_com_redvers_pony_odbc/pony-odbc/ffi
Building collections -> /home/red/.local/share/ponyup/ponyc-release-0.59.0-x86_64-linux-ubuntu24.04/packages/collections
Building pony_test -> /home/red/.local/share/ponyup/ponyc-release-0.59.0-x86_64-linux-ubuntu24.04/packages/pony_test
Building time -> /home/red/.local/share/ponyup/ponyc-release-0.59.0-x86_64-linux-ubuntu24.04/packages/time
Building random -> /home/red/.local/share/ponyup/ponyc-release-0.59.0-x86_64-linux-ubuntu24.04/packages/random
Generating
Reachability
Selector painting
Data prototypes
Data types
Function prototypes
Functions
Descriptors
Verifying
Writing ./psql-demo.o
Linking ./psql-demo

./psql-demo
red@panic:~/projects/psql-demo$ ldd ./psql-demo
linux-vdso.so.1 (0x00007e36767fb000)
libodbc.so.2 => /lib/x86_64-linux-gnu/libodbc.so.2 (0x00007e3676748000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007e367665f000)
libatomic.so.1 => /lib/x86_64-linux-gnu/libatomic.so.1 (0x00007e3676654000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007e3676626000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007e3676400000)
/lib64/ld-linux-x86-64.so.2 (0x00007e36767fd000)
libltdl.so.7 => /lib/x86_64-linux-gnu/libltdl.so.7 (0x00007e3676619000)
red@panic:~/projects/psql-demo$
```

Note that our executable is linked against `libodbc.so.2`.
87 changes: 87 additions & 0 deletions docs/simple/tablecreate.md
@@ -0,0 +1,87 @@
# Creating Our Tables

In order to create our tables, we simply execute the SQL commands. As there are no parameters, we can use the function `direct_exec("my sql statement")?` which will either succeed or fail.

## Creating Statements

In our application, let's make a Statement Handle and pass it to a function which will create our tables. Here is our full example thus far:

```pony
use "debug"
use "pony-odbc"
use "lib:odbc"

actor Main
  let env: Env

  new create(env': Env) =>
    env = env'

    let enh: ODBCEnv = ODBCEnv
    try
      let dbh: ODBCDbc = enh.dbc()?
      dbh.connect("psql-demo")?
      let sth: ODBCStmt = dbh.stmt()?
      try
        create_tables(sth)?
      else
        Debug.out("Our create_tables() function threw an error")
        error
      end
    else
      Debug.out("Our application failed in some way")
    end
```

Now let's create our function to make these (temporary) tables.

```pony
fun create_tables(sth: ODBCStmt)? =>
  sth
    .> direct_exec(
      """
      CREATE TEMPORARY TABLE play (
        id BIGSERIAL,
        name VARCHAR(30) NOT NULL
      );
      """)?
    .> direct_exec(
      """
      ALTER TABLE play ADD CONSTRAINT play_pkey PRIMARY KEY (id);
      """)?
    .> direct_exec(
      """
      CREATE TEMPORARY TABLE player (
        id BIGSERIAL,
        name VARCHAR(20) NOT NULL
      );
      """)?
    .> direct_exec(
      """
      ALTER TABLE player ADD CONSTRAINT player_pkey PRIMARY KEY (id);
      """)?
    .> direct_exec(
      """
      CREATE TEMPORARY TABLE line (
        id BIGSERIAL,
        id_play INTEGER,
        id_player INTEGER,
        playerlinenumber INTEGER,
        actsceneline VARCHAR(15),
        playerline VARCHAR(127) NOT NULL
      );
      """)?
    .> direct_exec(
      """
      ALTER TABLE line ADD CONSTRAINT line_pkey PRIMARY KEY (id);
      """)?
    .> direct_exec(
      """
      ALTER TABLE line ADD CONSTRAINT line_id_play_fkey FOREIGN KEY (id_play) REFERENCES play(id);
      """)?
    .> direct_exec(
      """
      ALTER TABLE line ADD CONSTRAINT line_id_player_fkey FOREIGN KEY (id_player) REFERENCES player(id);
      """)?
```

Since we are not reading or writing any parameters in this example, we can simply use the `direct_exec()?` function which, as its name suggests, executes the statement directly.
1 change: 1 addition & 0 deletions docs/simple/transactions.md
@@ -0,0 +1 @@
# Placeholder