
Conversation

@tlovell-sxt (Contributor)

Rationale for this change

The proof plan RPCs require knowledge of table schemas in storage. Currently, they get this from the commitment metadata for tables in storage, but we want them to get these schemas from the tables pallet metadata instead. This change performs the switch in the RPC entrypoint, and removes some now-dead code.

What changes are included in this PR?

  • feat: implement column type conversion in rpc crate
  • feat: add QuerySchema schema accessor
  • feat: add CommitmentsApiError variants for proof plan RPC refactor
  • refactor: use schemas from tables pallet in proof plan RPCs

Are these changes tested?

Yes.

We will need the ability to deal with multiple versions of proof-of-sql
in sxt-node in the future. Old native code will either have to keep using
older versions of proof-of-sql indefinitely, or we will need some solution
where newer versions match legacy behavior. We will also sometimes need to
upgrade the proof plan functionality of the RPCs more often than we upgrade
the commitment functionality.

Some sxt-node code is conceptually version agnostic and so, in principle,
shareable between downstreams with different proof-of-sql versions, but not
as regular Rust code, since that would require pinning a particular version.
This new crate provides a place for code shared between downstreams with
disparate proof-of-sql versions, and also provides the first module of this
kind: converting sqlparser types to proof-of-sql types. This code is now
shareable between the RPCs, native code, and even indexers, which may all
use different proof-of-sql versions.
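
To make the pattern concrete, here is a minimal sketch of the kind of macro such a crate could expose. The macro name, the supported type list, and the assumption that the consuming crate depends on sqlparser are all illustrative, not the crate's actual API:

```rust
/// Hypothetical sketch: the unversioned crate cannot name a proof-of-sql
/// version, so it exports a macro that each downstream instantiates against
/// the `ColumnType` of whichever proof-of-sql version it is pinned to.
/// The consuming crate is assumed to depend on `sqlparser`.
#[macro_export]
macro_rules! impl_column_type_conversion {
    ($column_type:path) => {
        /// Convert a parsed sqlparser data type into the target proof-of-sql
        /// column type, returning `None` for unsupported SQL types.
        pub fn column_type_from_sql(
            data_type: &::sqlparser::ast::DataType,
        ) -> Option<$column_type> {
            use ::sqlparser::ast::DataType;
            match data_type {
                DataType::Boolean => Some(<$column_type>::Boolean),
                DataType::BigInt(_) => Some(<$column_type>::BigInt),
                DataType::Varchar(_) | DataType::Text => Some(<$column_type>::VarChar),
                // Unsupported types map to None so callers can report an error.
                _ => None,
            }
        }
    };
}
```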
The code defining which column types we support and how they convert to
proof-of-sql types has recently been moved to the proof-of-sql-unversioned
crate, so that it can be shared with downstreams that may use different
versions of proof-of-sql. This change completes the reduction of repeated
code by using the unversioned macros in commitment-sql.
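
Under the same illustrative names as the sketch above, the usage in commitment-sql would amount to a single macro invocation against the proof-of-sql version that crate is pinned to:

```rust
// Hedged sketch: commitment-sql instantiates the shared conversion against its
// own proof-of-sql dependency (the macro name is illustrative; the ColumnType
// path is proof-of-sql's real one).
proof_of_sql_unversioned::impl_column_type_conversion!(
    proof_of_sql::base::database::ColumnType
);
```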
Some uses of the tables pallet are only interested in tables' schemas,
that is, their column names and data types. However, we store table
schemas as SQL CREATE statements. We would like to provide a runtime API
for reading this storage into a more structured schema type. This change
adds some types and functions to sxt-core that will be helpful in this
effort.
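
A minimal sketch of what such types and functions could look like, assuming sqlparser's GenericDialect and a sqlparser version where CreateTable is a struct-like variant; the names here are assumptions, not the actual sxt-core API:

```rust
use sqlparser::ast::Statement;
use sqlparser::dialect::GenericDialect;
use sqlparser::parser::Parser;

/// A column name paired with its SQL data type rendered as text, keeping the
/// schema representation independent of any proof-of-sql version.
pub type ColumnSchema = (String, String);

/// The structured schema of one table: its columns in declaration order.
pub type TableSchema = Vec<ColumnSchema>;

/// Parse a CREATE TABLE statement into a structured schema.
pub fn table_schema_from_create(sql: &str) -> Result<TableSchema, String> {
    let statements =
        Parser::parse_sql(&GenericDialect {}, sql).map_err(|e| e.to_string())?;
    match statements.as_slice() {
        // The exact variant shape varies across sqlparser versions.
        [Statement::CreateTable { columns, .. }] => Ok(columns
            .iter()
            .map(|col| (col.name.value.clone(), col.data_type.to_string()))
            .collect()),
        _ => Err("expected exactly one CREATE TABLE statement".into()),
    }
}
```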
Currently, the tests in the tables module are in an unusual place: the
middle of the file. It will be less of a burden to add more code to this
file, and more tests of that code, if the tests live in the standard
location, at the bottom of the file.

Some uses of the tables pallet are only interested in tables' schemas,
that is, their column names and data types. However, we store table
schemas as SQL CREATE statements. We would like to provide a runtime API
for reading this storage into a more structured schema type. This change
provides a stateful method that performs this read and processing; it can
be used to easily implement the future runtime API.
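
As a rough sketch, assuming a storage map of CREATE statements keyed by table identifier and the schema helpers sketched above (the storage item, type, and method names are assumptions about pallet-tables), the stateful read could look like:

```rust
impl<T: Config> Pallet<T> {
    /// Read a table's stored CREATE TABLE statement and convert it into a
    /// structured schema, returning None if the table is unknown or its DDL
    /// does not parse.
    pub fn table_schema(table: &TableIdentifier) -> Option<TableSchema> {
        // `Schemas` stands in for a StorageMap<TableIdentifier, Vec<u8>> with
        // an Option-returning query; the real storage layout may differ.
        let create_statement = Schemas::<T>::get(table)?;
        let sql = core::str::from_utf8(&create_statement).ok()?;
        table_schema_from_create(sql).ok()
    }
}
```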
Some uses of the tables pallet are only interested in tables' schemas,
that is, their column names and data types. However, we store table
schemas as SQL CREATE statements. This change provides a runtime API for
reading this storage into a more structured schema type.
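
A hedged sketch of the declaration, using Substrate's sp_api::decl_runtime_apis! macro; TablesApi and table_schema are the names from the changelog below, but the exact signature, and the requirement that both parameter types are SCALE-encodable, are assumptions:

```rust
sp_api::decl_runtime_apis! {
    /// Runtime API exposing structured table schemas from pallet-tables.
    pub trait TablesApi {
        /// Return the structured schema for `table`, if the table exists and
        /// its stored CREATE TABLE statement parses.
        fn table_schema(table: TableIdentifier) -> Option<TableSchema>;
    }
}
```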
Recently, changes have been made to the runtime to provide a new
pallet-tables runtime API. This change reflects the new feature in the
runtime version.

The proof plan RPCs require knowledge of table schemas in storage.
Currently, they get this from the commitment metadata for tables in
storage, but we want them to get these schemas from the tables pallet
metadata instead. In this path, we get the data types as plain sql,
which we need to parse and then convert to proof-of-sql column types.

This change implements the necessary column type conversion code in the
rpc crate. So that we can upgrade the proof-of-sql version used by the
RPCs separately from the runtime, we use the proof-of-sql-unversioned
macro to generate a new implementation instead of reusing an existing one.
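
A sketch of that parse-then-convert path, assuming a sqlparser version that exposes Parser::try_with_sql and parse_data_type, and reusing the illustrative macro instantiation from above; the function name is an assumption:

```rust
use sqlparser::dialect::GenericDialect;
use sqlparser::parser::Parser;

// Instantiate the shared conversion against the rpc crate's own proof-of-sql
// dependency, so it can be upgraded independently of the runtime.
proof_of_sql_unversioned::impl_column_type_conversion!(
    proof_of_sql::base::database::ColumnType
);

/// Parse a plain SQL type string from the pallet schema and map it to a
/// proof-of-sql column type, returning None if either step fails.
fn column_type_from_sql_text(
    sql_type: &str,
) -> Option<proof_of_sql::base::database::ColumnType> {
    let data_type = Parser::new(&GenericDialect {})
        .try_with_sql(sql_type)
        .ok()?
        .parse_data_type()
        .ok()?;
    column_type_from_sql(&data_type)
}
```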
The proof plan RPCs require knowledge of table schemas in storage.
Currently, they get this from the commitment metadata for tables in
storage, but we want them to get these schemas from the tables pallet
metadata instead.

proof-of-sql relies on the SchemaAccessor trait for getting schemas,
which is already implemented for QueryCommitments (mappings of table
refs to commitments). Now, instead of commitments, we will have table
schemas from the tables pallet APIs. This change adds QuerySchema, which
can be built from these table schemas and implements SchemaAccessor.
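
A minimal sketch of the shape QuerySchema could take, using an ordered map; the field layout, the stand-in identifier types, and the method names are assumptions, and the crate's actual SchemaAccessor implementation would delegate to lookups like these:

```rust
use indexmap::IndexMap;

// Stand-ins for the table ref, column identifier, and column type of whichever
// proof-of-sql version the RPC crate is pinned to.
type TableRef = String;
type ColumnIdent = String;
type ColumnType = String;

/// Table schemas keyed by table ref, built from the tables pallet APIs.
#[derive(Default)]
pub struct QuerySchema(IndexMap<TableRef, IndexMap<ColumnIdent, ColumnType>>);

impl QuerySchema {
    /// Look up one column's type, as SchemaAccessor's column lookup would.
    pub fn lookup_column(&self, table: &TableRef, column: &ColumnIdent) -> Option<&ColumnType> {
        self.0.get(table)?.get(column)
    }

    /// Return every (column, type) pair for a table, as a schema lookup would.
    pub fn lookup_schema(&self, table: &TableRef) -> Vec<(&ColumnIdent, &ColumnType)> {
        self.0
            .get(table)
            .map(|columns| columns.iter().collect())
            .unwrap_or_default()
    }
}
```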
The proof plan RPCs require knowledge of table schemas in storage.
Currently, they get this from the commitment metadata for tables in
storage, but we want them to get these schemas from the tables pallet
metadata instead. This path somewhat changes the error cases we can run
into, which needs to be reflected in the error enum and its codes. This
change adds the necessary variants without reusing old codes, to avoid a
breaking change.
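
A hedged sketch of the kind of additions this describes; the variant names are illustrative, and the point is that existing variants keep their existing error codes so clients do not break:

```rust
#[derive(Debug, thiserror::Error)]
pub enum CommitmentsApiError {
    // ... existing variants keep their existing error codes ...
    /// The tables pallet has no schema for a table referenced by the query.
    #[error("no schema found for a table referenced in the query")]
    TableSchemaNotFound,
    /// A stored column type could not be converted to a proof-of-sql type.
    #[error("unsupported column type in stored table schema")]
    UnsupportedColumnType,
}
```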
@github-actions

1.14.0

Features

  • add CommitmentsApiError variants for proof plan RPC refactor (bf5c908)
  • add proof-of-sql-unversioned crate (9071830)
  • add QuerySchema schema accessor (fdc37af)
  • add table_schema method to pallet-tables (90f1a26)
  • add TablesApi runtime API (0a232c3)
  • convert create statements to communicable table schemas (3c08b60)
  • implement column type conversion in rpc crate (e3db613)
  • upgrade runtime spec_version to 231 (94017b4)

@tlovell-sxt force-pushed the feat/table-schema-in-proof-plan-rpc branch from af8647b to 096865e on July 11, 2025 06:34
