26 changes: 26 additions & 0 deletions .spelling-wordlist.txt
@@ -1,24 +1,39 @@
accessors
API
APIs
autocommit
Autoincrements
BIGINT
bigint
Bool
CBoxedArray
Datatypes
datatypes
demarshallers
DBC
DBD
DBI
Deallocating
deserialize
dotfile
DSN
DSNs
ENV
hostname
ie
ini
iso
iODBC
jdbc
json
jsonfragment
Json
JsonDoc
JSON
ldd
libodbc
lockin
marshallers
Makefile
Nullable
NULLs
@@ -36,6 +51,7 @@ pony-odbc
postgres
postgresql
postgreSQL
psqldemo
RDBMS
schemas
se
@@ -44,15 +60,25 @@ sth
SQLBigInteger
SQLFloat
SQLInteger
SQLJson
SQLNull
SQLReal
SQLReturn
SQLSmallInteger
SQLState
SQLStates
SQLTables
SQLType
SQLVarchar
SQLVarchars
STMT
targeting
TZ
uncommented
unixODBC
unix
Unmarshalling
VARCHAR
varchar
xml
xmlfragment
21 changes: 16 additions & 5 deletions docs/simple/connecting.md
@@ -8,11 +8,21 @@ In this API, these are encapsulated as follows:
* Database Handle, ODBCDbc
* Statement Handle, ODBCStmt

## Brief Database Transaction Primer

A database transaction is a sequence of one or more operations (such as reading, writing, updating, or deleting data) performed on a database as a single, logical unit of work. Transactions ensure that either all operations are successfully completed (committed) or none are applied (rolled back), maintaining data integrity even in the event of system failures. In other words, all commands in a transaction are either applied atomically or rejected together.

By default, ODBC commits every command immediately after it executes. This is called "autocommit".

Transactions are a best practice, so we will use them in this tutorial from the very beginning.

To use transactions, we first need to disable autocommit.
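
As a rough sketch of the pattern we are building toward: with autocommit off, a block of commands either all take effect at an explicit commit, or none do. Note that `commit()?` below is a hypothetical placeholder name used for illustration only; the actual pony-odbc calls are introduced later in the tutorial.

```pony
// Sketch only: `dbh` is a connected ODBCDbc (created as shown below), and
// `commit()?` is a placeholder name, not necessarily the real API call.
try
  dbh.set_autocommit(false)? // commands now group into one unit of work
  // ... execute one or more statements here ...
  dbh.commit()?              // apply them all atomically
else
  // any failure above means nothing was committed
  Debug.out("transaction failed; no changes were applied")
end
```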

## General Structure of the API

The vast majority of calls on ODBCEnv, ODBCDbc, and ODBCStmt return `Bool ?`. In other words, these are partial functions.

Structuring the API in this way means that we can serialize our calls and choose to fail / rollback a transaction if any of them fail. It makes for a clean interface.
Structuring the API in this way means that we can serialize our API calls into blocks of SQL commands, and then choose to fail / rollback a transaction if any of them fail. It makes for a very clean interface.

```pony
use "debug"
@@ -28,9 +38,9 @@ actor Main
let enh: ODBCEnv = ODBCEnv
try
let dbh: ODBCDbc = enh.dbc()?
dbh.set_autocommit(false)?
dbh.connect("psql-demo")?
else
Debug.out("We were enable to create our database handle")
end
```

@@ -42,17 +52,18 @@ First we create our Environment Object:
let enh: ODBCEnv = ODBCEnv
```

Then we create our Database Object and connect to our database using the DSN we configured previously in our .odbc.ini file:
Then we create our Database Object, disable autocommit, and connect to our database using the DSN we configured previously in our .odbc.ini file:

```pony
try
let dbh: ODBCDbc = enh.dbc()?
dbh.set_autocommit(false)?
dbh.connect("psql-demo")?
else
Debug.out("We were enable to create our database handle")
Debug.out("We were enable to connect to our database")
end
```

Once the `dbh.connect()?` call is complete, we have an authenticated connection to the database instance that we requested.

Next up - let's create some tables!
Next up - let's create our table!
102 changes: 102 additions & 0 deletions docs/simple/custom-types-architecture.md
@@ -0,0 +1,102 @@
# SQL Type Architecture

Each SQL Type implementation is a type-aware wrapper around a C buffer. This buffer is passed to the ODBC driver via `bind_parameter` (for inputs) or `bind_column` (for results). At a high level, we are responsible for:

- Sizing / Allocating / Deallocating the C-buffer.
- Marshalling / Unmarshalling from the C-buffer to the native pony types.

## The SQLType trait

Almost all of the common code is already implemented for you in SQLType. You should only need to implement the class fields, the accessors, and the `read()?` / `write()` functions.

### Example: SQLInteger

Let's look at SQLInteger line-by-line.

```pony
class SQLInteger is SQLType
"""
The internal class which represents an Integer (I32)
"""
var _v: CBoxedArray = CBoxedArray
var _err: SQLReturn val = SQLSuccess
```

When you create your class, you must mark your new SQL Type as a `SQLType`. This is the type that all the associated functions (such as `bind_parameter(SQLType): Bool ?`) will expect.

Inside, you need a minimum of two fields: one for the C buffer (CBoxedArray) and one for the SQLReturn.

Next, we need a constructor. Since the textual representation of a SQL Integer (I32) will never exceed 15 characters, we allocate 15 bytes:

```pony
new create() => _v.alloc(15)
```

Next we need to provide accessors for both of these field variables, so that they can be manipulated by the API:

```pony
fun \nodoc\ ref get_boxed_array(): CBoxedArray => _v
fun \nodoc\ ref set_boxed_array(v': CBoxedArray) => _v = v'

fun \nodoc\ ref get_err(): SQLReturn val => _err
fun \nodoc\ ref set_err(err': SQLReturn val) => _err = err'
```

Lastly, we implement marshallers and demarshallers:

```pony
fun ref write(i32: I32): Bool =>
"""
    Write an I32 to this buffer as a string. The string will fit,
    as we defined the buffer to be 15 characters on initialization.

Will return true if written and verification succeeds.

Will return false if the string is too long for the buffer or
the readback doesn't match for some other reason.
"""
_write(i32.string())

fun ref read(): I32 ? =>
"""
Read the value of the buffer and convert it into an I32.

This function is partial just in case the string version
that comes from the database is not within the I32 range.

NOTE: SQLite does not enforce type limits, so if you use
that specific ODBC driver, this is something that must be
verified.
"""
if (get_boxed_array().is_null()) then
error
else
_v.string().i32()?
end
```

## Useful trait functions

There are a few functions in `SQLType` that are automatically available, either to assist in debugging or to provide needed functionality. You do not need to do anything to use them:

| Function Signature | Purpose |
|-------------------------|----------------------------------------------------------------|
| reset(): Bool | Resets and zeros the buffer to its initial state. |
| string(): String iso^ | Returns the data as delivered by the database as a String. |
| array(): Array[U8] iso^ | Returns the data as delivered by the database as an Array[U8]. |
| is\_null(): Bool | Returns true if the buffer was populated by a SQL Null. |
| null() | Sets the buffer to represent a SQL Null. |
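
For example, a minimal usage sketch (assuming `use "debug"` is in scope, and that `age` is a SQLInteger that was bound as a column and has just been fetched):

```pony
// Minimal sketch: inspect a fetched SQLInteger using the trait helpers.
fun show_age(age: SQLInteger) =>
  if age.is_null() then
    Debug.out("age is NULL")           // the column held a SQL Null
  else
    Debug.out("age = " + age.string()) // the text as the database sent it
  end
  age.reset()                          // zero the buffer for the next fetch
```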

## Buffer Behaviour

There is a difference in the way that buffers behave, depending on whether they are used as a parameter or as part of a result set:

### Parameter

You are responsible for ensuring that the buffer is of a sufficient size to hold the value you choose to write to it, and that the value is a valid String. If either condition is not met, the trait's `_write()` will return false.

THE BUFFER WILL NOT RESIZE IF YOU TRY TO STUFF SOMETHING TOO LARGE INTO IT. IT CAN'T, BECAUSE THE BUFFER IS ALREADY BOUND TO YOUR PREPARED STATEMENT.
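
For example, a short sketch (assuming SQLVarchar exposes a `write(String): Bool`, in line with the other SQL Types):

```pony
// Sketch: a deliberately undersized parameter buffer. write() reports
// failure rather than resizing, because the buffer is already bound.
var name: SQLVarchar = SQLVarchar(8)
if not name.write("a value that is far too long for eight bytes") then
  Debug.out("write failed: the value does not fit the bound buffer")
end
```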

### Column (part of a result set)

Even though you are responsible for ensuring that the buffer is of a sufficient size to hold the resultant value, the API *will* resize, rebind, and re-read the data for you. The newly created larger buffer replaces the previous one in the API, but as long as you don't poke into the `CBoxedArray` in the implementation, it should be seamless to you.
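
For example, a sketch along the lines of the fetch code later in this tutorial (assuming `sth` is an ODBCStmt with a statement already prepared):

```pony
// Sketch: an undersized column buffer is fine for result sets. On fetch,
// the API transparently resizes, rebinds, and re-reads the larger value.
var name: SQLVarchar = SQLVarchar(4) // deliberately too small
sth
  .> bind_column(name)?
  .> execute()?
  .> fetch()?
Debug.out(name.string()) // the full value, despite the tiny initial buffer
```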
78 changes: 78 additions & 0 deletions docs/simple/custom-types-json.md
@@ -0,0 +1,78 @@
# Implementing SQLJson

Here is a simple example of how to implement SQLJson as a custom SQL Type.

## Dependencies

Firstly, we need to ensure that `ponylang/json` is included in our corral.json, like so:

```console
red@panic:~/project/psql-demo$ corral add github.com/ponylang/json.git --version 0.2.0
red@panic:~/project/psql-demo$ corral fetch
```

## Starting our class

We need to include the `ponylang/json` dependency, ensure that the class uses the `SQLType` trait, and create the boilerplate code.

Unfortunately, since JSON can be of arbitrary size, we need the end-user to declare the size of the buffer they wish to use.

Here's what we start with:

```pony
use "pony-odbc"
use "json"

class SQLJson is SQLType
"""
An example class that represents a PostgreSQL Json type
"""
var _v: CBoxedArray = CBoxedArray
var _err: SQLReturn val = SQLSuccess

  new create(size: USize) =>
    """
    Creates a SQLJson object. You must specify the textual size at creation.
    """
    _v.alloc(size)

fun \nodoc\ ref get_boxed_array(): CBoxedArray => _v
fun \nodoc\ ref set_boxed_array(v': CBoxedArray) => _v = v'

fun \nodoc\ ref get_err(): SQLReturn val => _err
fun \nodoc\ ref set_err(err': SQLReturn val) => _err = err'
```

Now we simply refer to the [ponylang/json documentation](https://ponylang.github.io/json/json--index/) to work out how to serialize / deserialize the data:

To serialize, we simply call `JsonDoc.string()`.

```pony
fun ref write(json: JsonDoc): Bool =>
"""
Write the serialized JsonDoc to the buffer.

Will return true if written and verification succeeds.

Will return false if the string is too long for the buffer or
the readback doesn't match for some other reason.
"""
_write(json.string())
```

To deserialize, we simply call `JsonDoc.parse()?` on the string returned from the database:

```pony
fun ref read(): JsonDoc ref ? =>
"""
Once we have confirmed that the data is NOT NULL, we create a new JsonDoc instance and call `JsonDoc.parse()?` on it with the string to populate it.
"""
if (get_boxed_array().is_null()) then
error
else
var json: JsonDoc = JsonDoc
json.parse(_v.string())?
json
end
```

Hopefully you can see from this example that we have gone out of our way to make the creation of custom SQL Types as painless as possible.
68 changes: 68 additions & 0 deletions docs/simple/custom-types-testing.md
@@ -0,0 +1,68 @@
# Testing SQLJson

In order to test our new custom class, let's update our tutorial example:

## Add the json package

```pony
use "json"
```

## Create a JsonDoc to store

We'll use the simple example from the package documentation and create two functions: one to write, one to read. Then we'll display the serialized output directly from the JsonDoc object our SQLJson created!

```pony
var json: JsonDoc = JsonDoc
var obj: JsonObject = JsonObject

obj.data("key") = "value"
obj.data("property") = true
obj.data("array") = JsonArray.from_array([ as JsonType: I64(1); F64(2.5); false])
json.data = obj

write_json_record(sth, json)?
Debug.out(read_json_record(sth)?.string())
```

## Write the JsonDoc (write\_json\_record)

In this function, we'll just insert our JsonDoc into a row with the name "Json Record". Note that this function takes the Pony class `JsonDoc` as a parameter.

```pony
fun write_json_record(sth: ODBCStmt, json: JsonDoc)? =>
var name: SQLVarchar = SQLVarchar(254)
var jsonfrag: SQLJson = SQLJson(1023)
sth
.> prepare("insert into psqldemo (name,jsonfragment) values (?,?)")?
.> bind_parameter(name)?
.> bind_parameter(jsonfrag)?

name.write("Json Record")
jsonfrag.write(json)

sth
.> execute()?
.> finish()?
```

## Read our JsonDoc (read\_json\_record)

In this function, we'll read the row back and return a `JsonDoc`.

```pony
fun read_json_record(sth: ODBCStmt): JsonDoc ? =>
var name: SQLVarchar = SQLVarchar(254)
var jsonfrag: SQLJson = SQLJson(1023)
sth
.> prepare("select jsonfragment from psqldemo where name = ?")?
.> bind_parameter(name)?
.> bind_column(jsonfrag)?

name.write("Json Record")
sth
.> execute()?
.> fetch()?
.> finish()?
jsonfrag.read()?
```