diff --git a/README.md b/README.md index 4cbe075..9f0a986 100644 --- a/README.md +++ b/README.md @@ -1,32 +1,69 @@ -# Mintlify Starter Kit +# Morph Documentation -Click on `Use this template` to copy the Mintlify starter kit. The starter kit contains examples including +This is the documentation repository for Morph. -- Guide pages -- Navigation -- Customizations -- API Reference pages -- Use of popular components +It is built with the [Mintlify](https://mintlify.com/docs/quickstart) framework. -### Development +Anthropic, Perplexity, Resend, Cursor, and Elevenlabs also use Mintlify. -Install the [Mintlify CLI](https://www.npmjs.com/package/mintlify) to preview the documentation changes locally. To install, use the following command +[https://mintlify.com/customers](https://mintlify.com/customers) + +## Development + +Set up with the following steps. + +**1. Install Mintlify** ``` npm i -g mintlify ``` -Run the following command at the root of your documentation (where mint.json is) +**2. Start the local server** ``` mintlify dev ``` -### Publishing Changes +## Directory structure + +- `docs/`: Markdown files for the documentation +- `assets/`: static files such as images +- `examples/`: files for the sample code pages + +When creating any other self-contained group of pages, create a directory directly under the repository root. + +A "self-contained group of pages" here means a group managed as a Mintlify tab. + +## mint.json + +Each page can be written in MDX, but for it to appear as an actual page, you must also edit mint.json. + +See the Mintlify documentation for detailed configuration. + +The most frequently used fields are explained below. + +### navigation + +The navigation field in mint.json declares groups of pages; once a group is declared there, it can be used in the sidebar and tabs. Therefore, **whenever you add a page, be sure to add it to navigation as well.** + +```json +{ + "navigation": [ + { + "group": "Get Started", + "version": "en", + "pages": [ + "docs/en/introduction", + "docs/en/quickstart", + "docs/en/development" + ] + } + ] +} +``` -Install our Github App to auto propagate changes from your repo to your deployment. Changes will be deployed to production automatically after pushing to the default branch. Find the link to install on your dashboard. 
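For reference, multiple groups can be listed side by side in navigation, one per version; the following sketch adds a hypothetical Japanese group (the "docs/ja/..." paths are illustrative, not taken from this repository):

```json
{
  "navigation": [
    {
      "group": "Get Started",
      "version": "en",
      "pages": ["docs/en/introduction", "docs/en/quickstart", "docs/en/development"]
    },
    {
      "group": "Get Started",
      "version": "ja",
      "pages": ["docs/ja/introduction", "docs/ja/quickstart", "docs/ja/development"]
    }
  ]
}
```

Each entry's pages are resolved against the MDX file paths in the repo, so the listed paths must match files that actually exist.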
+# Contributing -#### Troubleshooting +The main branch is deployed automatically. For your work, create a branch locally and open a PR targeting main. -- Mintlify dev isn't running - Run `mintlify install` it'll re-install dependencies. -- Page loads as a 404 - Make sure you are running in a folder with `mint.json` +Delete your branch after the PR has been merged. diff --git a/api-reference/endpoint/create.mdx b/api-reference/endpoint/create.mdx deleted file mode 100644 index 5689f1b..0000000 --- a/api-reference/endpoint/create.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: 'Create Plant' -openapi: 'POST /plants' ---- diff --git a/api-reference/endpoint/delete.mdx b/api-reference/endpoint/delete.mdx deleted file mode 100644 index 657dfc8..0000000 --- a/api-reference/endpoint/delete.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: 'Delete Plant' -openapi: 'DELETE /plants/{id}' ---- diff --git a/api-reference/endpoint/get.mdx b/api-reference/endpoint/get.mdx deleted file mode 100644 index 56aa09e..0000000 --- a/api-reference/endpoint/get.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: 'Get Plants' -openapi: 'GET /plants' ---- diff --git a/api-reference/introduction.mdx b/api-reference/introduction.mdx deleted file mode 100644 index c835b78..0000000 --- a/api-reference/introduction.mdx +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: 'Introduction' -description: 'Example section for showcasing API endpoints' ---- - - - If you're not looking to build API reference documentation, you can delete - this section by removing the api-reference folder. - - -## Welcome - -There are two ways to build API documentation: [OpenAPI](https://mintlify.com/docs/api-playground/openapi/setup) and [MDX components](https://mintlify.com/docs/api-playground/mdx/configuration). For the starter kit, we are using the following OpenAPI specification. - - - View the OpenAPI specification file - - -## Authentication - -All API endpoints are authenticated using Bearer tokens and picked up from the specification file. 
- -```json -"security": [ - { - "bearerAuth": [] - } -] -``` diff --git a/api-reference/openapi.json b/api-reference/openapi.json deleted file mode 100644 index b1509be..0000000 --- a/api-reference/openapi.json +++ /dev/null @@ -1,195 +0,0 @@ -{ - "openapi": "3.0.1", - "info": { - "title": "OpenAPI Plant Store", - "description": "A sample API that uses a plant store as an example to demonstrate features in the OpenAPI specification", - "license": { - "name": "MIT" - }, - "version": "1.0.0" - }, - "servers": [ - { - "url": "http://sandbox.mintlify.com" - } - ], - "security": [ - { - "bearerAuth": [] - } - ], - "paths": { - "/plants": { - "get": { - "description": "Returns all plants from the system that the user has access to", - "parameters": [ - { - "name": "limit", - "in": "query", - "description": "The maximum number of results to return", - "schema": { - "type": "integer", - "format": "int32" - } - } - ], - "responses": { - "200": { - "description": "Plant response", - "content": { - "application/json": { - "schema": { - "type": "array", - "items": { - "$ref": "#/components/schemas/Plant" - } - } - } - } - }, - "400": { - "description": "Unexpected error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/Error" - } - } - } - } - } - }, - "post": { - "description": "Creates a new plant in the store", - "requestBody": { - "description": "Plant to add to the store", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/NewPlant" - } - } - }, - "required": true - }, - "responses": { - "200": { - "description": "plant response", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/Plant" - } - } - } - }, - "400": { - "description": "unexpected error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/Error" - } - } - } - } - } - } - }, - "/plants/{id}": { - "delete": { - "description": "Deletes a single plant based on the ID 
supplied", - "parameters": [ - { - "name": "id", - "in": "path", - "description": "ID of plant to delete", - "required": true, - "schema": { - "type": "integer", - "format": "int64" - } - } - ], - "responses": { - "204": { - "description": "Plant deleted", - "content": {} - }, - "400": { - "description": "unexpected error", - "content": { - "application/json": { - "schema": { - "$ref": "#/components/schemas/Error" - } - } - } - } - } - } - } - }, - "components": { - "schemas": { - "Plant": { - "required": [ - "name" - ], - "type": "object", - "properties": { - "name": { - "description": "The name of the plant", - "type": "string" - }, - "tag": { - "description": "Tag to specify the type", - "type": "string" - } - } - }, - "NewPlant": { - "allOf": [ - { - "$ref": "#/components/schemas/Plant" - }, - { - "required": [ - "id" - ], - "type": "object", - "properties": { - "id": { - "description": "Identification number of the plant", - "type": "integer", - "format": "int64" - } - } - } - ] - }, - "Error": { - "required": [ - "error", - "message" - ], - "type": "object", - "properties": { - "error": { - "type": "integer", - "format": "int32" - }, - "message": { - "type": "string" - } - } - } - }, - "securitySchemes": { - "bearerAuth": { - "type": "http", - "scheme": "bearer" - } - } - } -} \ No newline at end of file diff --git a/assets/.DS_Store b/assets/.DS_Store new file mode 100644 index 0000000..9028b1f Binary files /dev/null and b/assets/.DS_Store differ diff --git a/assets/GitHub.copilot-1.223.1062.vsix b/assets/GitHub.copilot-1.223.1062.vsix new file mode 100644 index 0000000..0e432b9 Binary files /dev/null and b/assets/GitHub.copilot-1.223.1062.vsix differ diff --git a/assets/images/.DS_Store b/assets/images/.DS_Store new file mode 100644 index 0000000..e7b6012 Binary files /dev/null and b/assets/images/.DS_Store differ diff --git a/images/checks-passed.png b/assets/images/checks-passed.png similarity index 100% rename from images/checks-passed.png rename to 
assets/images/checks-passed.png diff --git a/assets/images/docs/.DS_Store b/assets/images/docs/.DS_Store new file mode 100644 index 0000000..9163b64 Binary files /dev/null and b/assets/images/docs/.DS_Store differ diff --git a/assets/images/docs/builtin-postgres.png b/assets/images/docs/builtin-postgres.png new file mode 100644 index 0000000..280dd81 Binary files /dev/null and b/assets/images/docs/builtin-postgres.png differ diff --git a/assets/images/docs/canvas-overview.png b/assets/images/docs/canvas-overview.png new file mode 100644 index 0000000..91db126 Binary files /dev/null and b/assets/images/docs/canvas-overview.png differ diff --git a/assets/images/docs/integrations.png b/assets/images/docs/integrations.png new file mode 100644 index 0000000..4e969a0 Binary files /dev/null and b/assets/images/docs/integrations.png differ diff --git a/assets/images/docs/mdx-1.mp4 b/assets/images/docs/mdx-1.mp4 new file mode 100644 index 0000000..96c1113 Binary files /dev/null and b/assets/images/docs/mdx-1.mp4 differ diff --git a/assets/images/docs/visualization-how-to-start.png b/assets/images/docs/visualization-how-to-start.png new file mode 100644 index 0000000..c0a24bb Binary files /dev/null and b/assets/images/docs/visualization-how-to-start.png differ diff --git a/assets/images/docs/visualization-overview.png b/assets/images/docs/visualization-overview.png new file mode 100644 index 0000000..c97b055 Binary files /dev/null and b/assets/images/docs/visualization-overview.png differ diff --git a/assets/images/docs/vscode-extension-edit-csv.png b/assets/images/docs/vscode-extension-edit-csv.png new file mode 100644 index 0000000..abd2069 Binary files /dev/null and b/assets/images/docs/vscode-extension-edit-csv.png differ diff --git a/assets/images/docs/vscode-extension-github-copilot.png b/assets/images/docs/vscode-extension-github-copilot.png new file mode 100644 index 0000000..892b8d5 Binary files /dev/null and b/assets/images/docs/vscode-extension-github-copilot.png 
differ diff --git a/assets/images/docs/vscode-extension-mdx.png b/assets/images/docs/vscode-extension-mdx.png new file mode 100644 index 0000000..8e5a118 Binary files /dev/null and b/assets/images/docs/vscode-extension-mdx.png differ diff --git a/assets/images/docs/vscode-extension-parquet-explorer.png b/assets/images/docs/vscode-extension-parquet-explorer.png new file mode 100644 index 0000000..20a0b34 Binary files /dev/null and b/assets/images/docs/vscode-extension-parquet-explorer.png differ diff --git a/assets/images/docs/vscode-extension-python.png b/assets/images/docs/vscode-extension-python.png new file mode 100644 index 0000000..13d8fa3 Binary files /dev/null and b/assets/images/docs/vscode-extension-python.png differ diff --git a/assets/images/docs/workspace-architecture.png b/assets/images/docs/workspace-architecture.png new file mode 100644 index 0000000..3c5e2a7 Binary files /dev/null and b/assets/images/docs/workspace-architecture.png differ diff --git a/assets/images/docs/workspace-editor-sql-run.png b/assets/images/docs/workspace-editor-sql-run.png new file mode 100644 index 0000000..463b337 Binary files /dev/null and b/assets/images/docs/workspace-editor-sql-run.png differ diff --git a/assets/images/docs/workspace-editor-sql.png b/assets/images/docs/workspace-editor-sql.png new file mode 100644 index 0000000..3f5ebc4 Binary files /dev/null and b/assets/images/docs/workspace-editor-sql.png differ diff --git a/assets/images/docs/workspace-git.png b/assets/images/docs/workspace-git.png new file mode 100644 index 0000000..0c91955 Binary files /dev/null and b/assets/images/docs/workspace-git.png differ diff --git a/assets/images/docs/workspace-job-schedule.png b/assets/images/docs/workspace-job-schedule.png new file mode 100644 index 0000000..cd974c1 Binary files /dev/null and b/assets/images/docs/workspace-job-schedule.png differ diff --git a/assets/images/docs/workspace-overview.png b/assets/images/docs/workspace-overview.png new file mode 100644 index 
0000000..dc9a39f Binary files /dev/null and b/assets/images/docs/workspace-overview.png differ diff --git a/assets/images/docs/workspace-template-canvas.png b/assets/images/docs/workspace-template-canvas.png new file mode 100644 index 0000000..bd726ce Binary files /dev/null and b/assets/images/docs/workspace-template-canvas.png differ diff --git a/assets/images/docs/workspace-template-menu.png b/assets/images/docs/workspace-template-menu.png new file mode 100644 index 0000000..af5dd47 Binary files /dev/null and b/assets/images/docs/workspace-template-menu.png differ diff --git a/assets/images/docs/workspace-template.png b/assets/images/docs/workspace-template.png new file mode 100644 index 0000000..3eaaf86 Binary files /dev/null and b/assets/images/docs/workspace-template.png differ diff --git a/images/hero-dark.svg b/assets/images/hero-dark.svg similarity index 100% rename from images/hero-dark.svg rename to assets/images/hero-dark.svg diff --git a/images/hero-light.svg b/assets/images/hero-light.svg similarity index 100% rename from images/hero-light.svg rename to assets/images/hero-light.svg diff --git a/assets/images/integrations/bigquerycomplete.png b/assets/images/integrations/bigquerycomplete.png new file mode 100644 index 0000000..14b7684 Binary files /dev/null and b/assets/images/integrations/bigquerycomplete.png differ diff --git a/assets/images/integrations/bigqueryslug.png b/assets/images/integrations/bigqueryslug.png new file mode 100644 index 0000000..af296ca Binary files /dev/null and b/assets/images/integrations/bigqueryslug.png differ diff --git a/assets/images/integrations/integrations-how-to-use.png b/assets/images/integrations/integrations-how-to-use.png new file mode 100644 index 0000000..f81394c Binary files /dev/null and b/assets/images/integrations/integrations-how-to-use.png differ diff --git a/assets/images/integrations/mysqlcomplete.png b/assets/images/integrations/mysqlcomplete.png new file mode 100644 index 0000000..16a0a5d Binary files 
/dev/null and b/assets/images/integrations/mysqlcomplete.png differ diff --git a/assets/images/integrations/mysqlslug.png b/assets/images/integrations/mysqlslug.png new file mode 100644 index 0000000..02aae90 Binary files /dev/null and b/assets/images/integrations/mysqlslug.png differ diff --git a/assets/images/integrations/postgrescomplete.png b/assets/images/integrations/postgrescomplete.png new file mode 100644 index 0000000..66ff026 Binary files /dev/null and b/assets/images/integrations/postgrescomplete.png differ diff --git a/assets/images/integrations/postgresslug.png b/assets/images/integrations/postgresslug.png new file mode 100644 index 0000000..fcd3987 Binary files /dev/null and b/assets/images/integrations/postgresslug.png differ diff --git a/assets/images/integrations/redshiftcomplete.png b/assets/images/integrations/redshiftcomplete.png new file mode 100644 index 0000000..394106b Binary files /dev/null and b/assets/images/integrations/redshiftcomplete.png differ diff --git a/assets/images/integrations/redshiftslug.png b/assets/images/integrations/redshiftslug.png new file mode 100644 index 0000000..31c1aeb Binary files /dev/null and b/assets/images/integrations/redshiftslug.png differ diff --git a/assets/images/integrations/snowflakecomplete.png b/assets/images/integrations/snowflakecomplete.png new file mode 100644 index 0000000..e25dfd3 Binary files /dev/null and b/assets/images/integrations/snowflakecomplete.png differ diff --git a/assets/images/integrations/snowflakeslug.png b/assets/images/integrations/snowflakeslug.png new file mode 100644 index 0000000..fe397e6 Binary files /dev/null and b/assets/images/integrations/snowflakeslug.png differ diff --git a/assets/images/morph_logo_svg.svg b/assets/images/morph_logo_svg.svg new file mode 100644 index 0000000..d4728c6 --- /dev/null +++ b/assets/images/morph_logo_svg.svg @@ -0,0 +1,9 @@ + + + + + + + + + diff --git a/assets/images/morph_logo_svg_w.svg b/assets/images/morph_logo_svg_w.svg new file 
mode 100644 index 0000000..643c540 --- /dev/null +++ b/assets/images/morph_logo_svg_w.svg @@ -0,0 +1,9 @@ + + + + + + + + + diff --git a/assets/images/python-sql/python-env.png b/assets/images/python-sql/python-env.png new file mode 100644 index 0000000..bac3ec1 Binary files /dev/null and b/assets/images/python-sql/python-env.png differ diff --git a/logo/dark.svg b/assets/logo/dark.svg similarity index 100% rename from logo/dark.svg rename to assets/logo/dark.svg diff --git a/logo/light.svg b/assets/logo/light.svg similarity index 100% rename from logo/light.svg rename to assets/logo/light.svg diff --git a/assets/videos/integrations/docs_connection_bigquery.mp4 b/assets/videos/integrations/docs_connection_bigquery.mp4 new file mode 100644 index 0000000..ed8f08a Binary files /dev/null and b/assets/videos/integrations/docs_connection_bigquery.mp4 differ diff --git a/assets/videos/integrations/docs_connection_mysql.mp4 b/assets/videos/integrations/docs_connection_mysql.mp4 new file mode 100644 index 0000000..8f6b40f Binary files /dev/null and b/assets/videos/integrations/docs_connection_mysql.mp4 differ diff --git a/assets/videos/integrations/docs_connection_postgresql.mp4 b/assets/videos/integrations/docs_connection_postgresql.mp4 new file mode 100644 index 0000000..1aad797 Binary files /dev/null and b/assets/videos/integrations/docs_connection_postgresql.mp4 differ diff --git a/assets/videos/integrations/docs_connection_redshift.mp4 b/assets/videos/integrations/docs_connection_redshift.mp4 new file mode 100644 index 0000000..e89d043 Binary files /dev/null and b/assets/videos/integrations/docs_connection_redshift.mp4 differ diff --git a/assets/videos/integrations/docs_connection_snowflake.mp4 b/assets/videos/integrations/docs_connection_snowflake.mp4 new file mode 100644 index 0000000..6f50841 Binary files /dev/null and b/assets/videos/integrations/docs_connection_snowflake.mp4 differ diff --git a/data-application/en/custom-components.mdx 
b/data-application/en/custom-components.mdx new file mode 100644 index 0000000..4456cd4 --- /dev/null +++ b/data-application/en/custom-components.mdx @@ -0,0 +1,19 @@ +--- +title: 'Custom Components' +--- + +Custom components can be used in MDX files. + +## Create a `.tsx` file to define custom components + +You can create `.tsx` files on Morph to define custom React components. + +The Morph data application uses [Vite](https://vitejs.dev) to build React applications. + +Styling using Tailwind CSS is also possible, as [Tailwind CSS](https://tailwindcss.com) is installed by default. + +## Adding dependency packages + +You can freely add npm packages to the Morph workspace. + +Run the npm install command from the terminal to add packages. diff --git a/data-application/en/data-components.mdx b/data-application/en/data-components.mdx new file mode 100644 index 0000000..b7a96c9 --- /dev/null +++ b/data-application/en/data-components.mdx @@ -0,0 +1,50 @@ +--- +title: 'Data Components' +--- + +To use Morph data, a dedicated component is used to pass the name of the Python function or SQL file as a property. + +Below is an example of a table display of a DataFrame from a Python run. + +```tsx index.mdx + +export const name = 'index'; +export const title = 'Top page'; + +# Top page + +This is the top page of the data application. + +import { DataTable } from '@use-morph/page'; + +
+ +
+ +``` + +The components for data display are presented below. + +## DataTable + +The DataTable component displays the results of a Python function or SQL file execution in a table. + +```tsx +import { DataTable } from '@use-morph/page'; + + +``` + +## Embed + +The Embed component displays HTML when the result of a Python function execution is HTML. + +```tsx + +import { Embed } from '@use-morph/page'; + + +``` + + + diff --git a/data-application/en/how-to-build-data-application.mdx b/data-application/en/how-to-build-data-application.mdx new file mode 100644 index 0000000..bb51fb9 --- /dev/null +++ b/data-application/en/how-to-build-data-application.mdx @@ -0,0 +1,9 @@ +--- +title: 'How to build data applications' +--- + +## MDX + +When building data applications in Morph, use [MDX](https://mdxjs.com/). For more information on how to use MDX, see the [official MDX documentation](https://mdxjs.com/). + +The `.mdx` files created under the `src` directory of the workspace serve as individual pages in the data application. \ No newline at end of file diff --git a/data-application/en/mdx-setup.mdx b/data-application/en/mdx-setup.mdx new file mode 100644 index 0000000..92d2809 --- /dev/null +++ b/data-application/en/mdx-setup.mdx @@ -0,0 +1,15 @@ +--- +title: 'MDX Setup' +--- + +## Running the set-up command + +{/* Normally this step is run automatically by the workspace system, so you can skip it. Try it only if startup keeps failing. */} + +To build a data application, first open the Morph workspace and execute the following command. This may take several minutes. + +```bash +npm i -S @use-morph/page-build && npx morph-page init +``` + +Once this command has been executed, you can start building your data application. 
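After setup, a minimal first page can be sketched as follows; the `name`/`title` exports follow the pattern shown in the data-components examples elsewhere in these docs, and the file name and page content are illustrative:

```mdx
export const name = 'hello';
export const title = 'Hello page';

# Hello

This page becomes part of the data application once it is saved under the `src` directory.
```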
\ No newline at end of file diff --git a/data-application/en/use-variables.mdx b/data-application/en/use-variables.mdx new file mode 100644 index 0000000..cefeee2 --- /dev/null +++ b/data-application/en/use-variables.mdx @@ -0,0 +1,122 @@ +--- +title: 'Using Variables' +--- + +Morph allows variables to be defined in Python functions and SQL files, but variables can also be used when executing from MDX files. + +For more information on using variables in Python and SQL files, see [how to use variables](/data-application/en/variables). + +## How to use variables in MDX files + +To use variables in an MDX file, you need to **create a new tsx file**, create a React component that uses the variable and then use it in the MDX file. + +It is not possible to use `useVariable` directly in an MDX file. + + + +```tsx src/components/stocks-with-dates.tsx +import { DataTable, useVariable, VariableDatePicker } from '@use-morph/page'; + +export const StocksWithDates = () => { + const dateRangeStart = useVariable(null); + const dateRangeEnd = useVariable(null); + + return ( +
+ +
+ +
+
+ ); +} + +``` + +```tsx index.mdx +export const name = 'index'; +export const title = 'Top page'; + +# Stock prices with dates + +import { StocksWithDates } from './components/stocks-with-dates'; + + +``` + +```python stocks.py +import pandas as pd + +import morph +from morph import MorphGlobalContext + +import pandas as pd + +@morph.func( + name="stocks", + description="stock prices with dates", + output_paths=["_private/{name}/{now()}{ext()}"], + output_type="dataframe", +) +@morph.variables('start_date') +@morph.variables('end_date') +def main(context: MorphGlobalContext) -> pd.DataFrame: + df = pd.read_csv('./src/stocks.csv') + + start_date = context.var.get('start_date', None) + end_date = context.var.get('end_date', None) + + df['date'] = pd.to_datetime(df['date'], format='%b %d %Y') + + if start_date is None: + start_date = df['date'].min() + else: + start_date = pd.to_datetime(start_date, format='%Y-%m-%d') + + if end_date is None: + end_date = df['date'].max() + else: + end_date = pd.to_datetime(end_date, format='%Y-%m-%d') + + filtered_df = df[(df['date'] >= start_date) & (df['date'] <= end_date)] + return filtered_df +``` + +
+ +## `useVariable` Hook + +Use `useVariable` to declare variables. + +```tsx +const dateRangeStart = useVariable(null); +const dateRangeEnd = useVariable(null); +``` + +## Passing to components + +When passing to a component, pass the name of the variable in the Python or SQL file as a key, in object form. + +In the example above, `start_date` and `end_date` are used as variables in the Python file, so they are passed as keys. + +```tsx + +``` \ No newline at end of file diff --git a/data-application/ja/custom-components.mdx b/data-application/ja/custom-components.mdx new file mode 100644 index 0000000..ded2b6b --- /dev/null +++ b/data-application/ja/custom-components.mdx @@ -0,0 +1,19 @@ +--- +title: 'カスタムコンポーネント' +--- + +mdxファイル中では、カスタムコンポーネントを使用することができます。 + +## `.tsx` ファイルを作成してカスタムコンポーネントを定義する + +Morph上で `.tsx` ファイルを作成し、カスタムReactコンポーネントを定義することができます。 + +Morphのデータアプリケーションでは、[Vite](https://vitejs.dev)を用いてReactアプリケーションをビルドしています。 + +また、デフォルトで [Tailwind CSS](https://tailwindcss.com) が導入されているため、Tailwind CSSを使用したスタイリングも可能です。 + +## 依存パッケージを追加する + +Morphのワークスペースには、自由にnpmパッケージを追加することができます。 + +ターミナルから npm install コマンドを実行して、パッケージを追加してください。 diff --git a/data-application/ja/data-components.mdx b/data-application/ja/data-components.mdx new file mode 100644 index 0000000..64a5a80 --- /dev/null +++ b/data-application/ja/data-components.mdx @@ -0,0 +1,50 @@ +--- +title: 'データコンポーネント' +--- + +Morphのデータを利用するには、専用のコンポーネントを用いて、Python関数やSQLファイルの name をプロパティとして渡します。 + +以下は、Pythonの実行結果のDataFrameをテーブル表示する例です。 + +```tsx index.mdx + +export const name = 'index'; +export const title = 'Top page'; + +# Top page + +This is the top page of the data application. + +import { DataTable } from '@use-morph/page'; + +
+ +
+ +``` + +以下で、データ表示用のコンポーネントを紹介します。 + +## DataTable + +DataTableコンポーネントは、Python関数やSQLファイルの実行結果をテーブル表示します。 + +```tsx +import { DataTable } from '@use-morph/page'; + + +``` + +## Embed + +Embedコンポーネントは、Python関数の実行結果がHTMLの場合に、そのHTMLを表示します。 + +```tsx + +import { Embed } from '@use-morph/page'; + + +``` + + + diff --git a/data-application/ja/how-to-build-data-application.mdx b/data-application/ja/how-to-build-data-application.mdx new file mode 100644 index 0000000..18c13c3 --- /dev/null +++ b/data-application/ja/how-to-build-data-application.mdx @@ -0,0 +1,9 @@ +--- +title: 'データアプリケーションの構築方法' +--- + +## MDX + +Morphでデータアプリケーションを構築する際には、[MDX (https://mdxjs.com/) ](https://mdxjs.com/)を使用します。MDXの詳しい使い方については、[MDXの公式ドキュメント](https://mdxjs.com/)を参照してください。 + +ワークスペースの `src` ディレクトリ以下に作成された `.mdx` ファイルは、データアプリケーション中の各ページの役割を果たします。 \ No newline at end of file diff --git a/data-application/ja/mdx-setup.mdx b/data-application/ja/mdx-setup.mdx new file mode 100644 index 0000000..fc48398 --- /dev/null +++ b/data-application/ja/mdx-setup.mdx @@ -0,0 +1,15 @@ +--- +title: 'MDXのセットアップ' +--- + +## セットアップコマンドの実行 + +{/* 通常この手順は、ワークスペースのシステムが自動で実行するため、スキップして構いません。どうしても起動がうまくいかない場合に試してください。 */} + +データアプリケーションを構築するためには、まずMorphのワークスペースを開いて、以下のコマンドを実行してください。これは数分かかる場合があります。 + +```bash +npm i -S @use-morph/page-build && npx morph-page init +``` + +このコマンドの実行が完了すると、データアプリケーションの構築を開始することができます。 \ No newline at end of file diff --git a/data-application/ja/use-variables.mdx b/data-application/ja/use-variables.mdx new file mode 100644 index 0000000..796299f --- /dev/null +++ b/data-application/ja/use-variables.mdx @@ -0,0 +1,122 @@ +--- +title: '変数を使う' +--- + +Morphでは、Python関数やSQLファイルの中で変数を定義することができますが、MDXファイルから実行する場合にも変数を利用することができます。 + +Python, SQLファイル中での変数の利用については、[変数の使い方](/data-application/ja/variables)を参照してください。 + +## MDXファイルでの変数の使い方 + +MDXファイルで変数を使うには、**新たにtsxファイルを作成**し、変数を利用したReactコンポーネントを作成してから、それをMDXファイル内で利用する必要があります。 + +mdxファイル内で直接 `useVariable` を利用することはできません。 + + + +```tsx 
src/components/stocks-with-dates.tsx +import { DataTable, useVariable, VariableDatePicker } from '@use-morph/page'; + +export const StocksWithDates = () => { + const dateRangeStart = useVariable(null); + const dateRangeEnd = useVariable(null); + + return ( +
+ +
+ +
+
+ ); +} + +``` + +```tsx index.mdx +export const name = 'index'; +export const title = 'Top page'; + +# Stock prices with dates + +import { StocksWithDates } from './components/stocks-with-dates'; + + +``` + +```python stocks.py +import pandas as pd + +import morph +from morph import MorphGlobalContext + +import pandas as pd + +@morph.func( + name="stocks", + description="stock prices with dates", + output_paths=["_private/{name}/{now()}{ext()}"], + output_type="dataframe", +) +@morph.variables('start_date') +@morph.variables('end_date') +def main(context: MorphGlobalContext) -> pd.DataFrame: + df = pd.read_csv('./src/stocks.csv') + + start_date = context.var.get('start_date', None) + end_date = context.var.get('end_date', None) + + df['date'] = pd.to_datetime(df['date'], format='%b %d %Y') + + if start_date is None: + start_date = df['date'].min() + else: + start_date = pd.to_datetime(start_date, format='%Y-%m-%d') + + if end_date is None: + end_date = df['date'].max() + else: + end_date = pd.to_datetime(end_date, format='%Y-%m-%d') + + filtered_df = df[(df['date'] >= start_date) & (df['date'] <= end_date)] + return filtered_df +``` + +
+ +## `useVariable` フック + +変数の宣言には `useVariable` を使用してください。 + +```tsx +const dateRangeStart = useVariable(null); +const dateRangeEnd = useVariable(null); +``` + +## コンポーネントに渡す + +コンポーネントに渡す際には、オブジェクト形式で、PythonまたはSQLファイル内での変数名をキーとして渡してください。 + +上記の例では、Pythonファイル中で `start_date` と `end_date` を変数として利用しているため、それらをキーとして渡しています。 + +```tsx + +``` \ No newline at end of file diff --git a/development.mdx b/development.mdx deleted file mode 100644 index 8783008..0000000 --- a/development.mdx +++ /dev/null @@ -1,98 +0,0 @@ ---- -title: 'Development' -description: 'Learn how to preview changes locally' ---- - - - **Prerequisite** You should have installed Node.js (version 18.10.0 or - higher). - - -Step 1. Install Mintlify on your OS: - - - -```bash npm -npm i -g mintlify -``` - -```bash yarn -yarn global add mintlify -``` - - - -Step 2. Go to the docs are located (where you can find `mint.json`) and run the following command: - -```bash -mintlify dev -``` - -The documentation website is now available at `http://localhost:3000`. - -### Custom Ports - -Mintlify uses port 3000 by default. You can use the `--port` flag to customize the port Mintlify runs on. For example, use this command to run in port 3333: - -```bash -mintlify dev --port 3333 -``` - -You will see an error like this if you try to run Mintlify in a port that's already taken: - -```md -Error: listen EADDRINUSE: address already in use :::3000 -``` - -## Mintlify Versions - -Each CLI is linked to a specific version of Mintlify. Please update the CLI if your local website looks different than production. - - - -```bash npm -npm i -g mintlify@latest -``` - -```bash yarn -yarn global upgrade mintlify -``` - - - -## Deployment - - - Unlimited editors available under the [Startup - Plan](https://mintlify.com/pricing) - - -You should see the following if the deploy successfully went through: - - - - - -## Troubleshooting - -Here's how to solve some common problems when working with the CLI. - - - - Update to Node v18. 
Run `mintlify install` and try again. - - -Go to the `C:/Users/Username/.mintlify/` directory and remove the `mint` -folder. Then Open the Git Bash in this location and run `git clone -https://github.com/mintlify/mint.git`. - -Repeat step 3. - - - - Try navigating to the root of your device and delete the ~/.mintlify folder. - Then run `mintlify dev` again. - - - -Curious about what changed in a CLI version? [Check out the CLI changelog.](/changelog/command-line) diff --git a/docs/en/data-sources/builtin-postgres.mdx b/docs/en/data-sources/builtin-postgres.mdx new file mode 100644 index 0000000..2400a33 --- /dev/null +++ b/docs/en/data-sources/builtin-postgres.mdx @@ -0,0 +1,24 @@ +--- +title: 'Built-in PostgreSQL' +--- +Morph has a built-in PostgreSQL database. This database can be utilised to temporarily store intermediate results of the data pipeline or to store final results and connect to external services. + +Open the workspace and go to the Database tab. From this tab you can use the Built-in PostgreSQL. + +From the Morph workspace, you can perform basic table and record editing. + +Built-in PostgreSQL + +## Connect Externally + +The connection string listed under ‘Postgres Connection’ on the screen is the connection string to access PostgreSQL hosted on the workspace. This can also be used to connect from external database clients or applications. + +## IP Restrictions + +It is very important to set up security when connecting to PostgreSQL externally. +You can restrict the IPs with access in the ‘IP Restrictions’ section of the Morph workspace. + +The default setting is ‘0.0.0.0/0’, so be sure to set IP restrictions before inserting important data as access is possible from anywhere. 
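As a sketch of the external-connection step above: before opening a database client, it can help to check which host and port your IP restrictions and firewall must allow. Everything in this snippet is a hypothetical placeholder (user, password, host, database name), not a real Morph connection string; paste in the string copied from the "Postgres Connection" section of your workspace.

```python
from urllib.parse import urlparse

# Hypothetical placeholder DSN in the usual PostgreSQL URI shape --
# replace it with the connection string from your workspace.
dsn = "postgresql://morph_user:secret@example-workspace.morph-data.io:5432/morphdb"

parts = urlparse(dsn)
host = parts.hostname            # the address your client must be able to reach
port = parts.port                # 5432 in this placeholder
database = parts.path.lstrip("/")  # database name follows the trailing slash

print(host, port, database)
```

Any standard PostgreSQL client (psql, psycopg2, a BI tool) accepts the same URI directly, so parsing it yourself is only needed for checks like the one above.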
diff --git a/docs/en/data-sources/connectors.mdx b/docs/en/data-sources/connectors.mdx
new file mode 100644
index 0000000..5ae7f4b
--- /dev/null
+++ b/docs/en/data-sources/connectors.mdx
@@ -0,0 +1,63 @@
+---
+title: 'Connectors: DB / DWH / SaaS'
+---
+
+Morph provides an environment where users can manage and analyse data centrally by working with a variety of data sources. This section presents the main data sources supported by Morph.
+
+## SQL databases
+
+Morph allows users to work with a variety of external data sources. Users can query the following databases and data warehouses directly using SQL:
+
+1. **MySQL**: An open source relational database management system, widely used in many applications worldwide.
+2. **PostgreSQL**: A powerful open source object-relational database system, characterised by reliability, feature richness and performance.
+3. **BigQuery**: Google Cloud's serverless, highly scalable data warehouse, supporting rapid SQL queries on large volumes of data.
+4. **Snowflake**: A cloud-based data warehousing service with excellent data sharing and scalability to support diverse data workloads.
+5. **Redshift**: Amazon Web Services' cloud data warehouse, for fast analysis of large data sets.
+
+By connecting these data sources to Morph, data can be integrated across different platforms and tools to efficiently perform data analysis and business intelligence tasks. Morph's interface allows you to easily set up connections and query data directly.
+
+## SaaS
+
+Morph provides integrations with many SaaS products. This allows users to concentrate on implementing the necessary business logic, leaving the setup to Morph.
+
+You can start analysing SaaS data immediately by creating an Integration from the Connections tab, as shown below:
+
+Integrations
+
+In Python, get the `access_token` from the Integration you created:
+
+```python
+import pandas as pd
+
+import morph
+from morph import MorphGlobalContext
+from morphdb_utils.api import get_auth_token  # ← function that retrieves the access_token
+
+@morph.func(
+    name="freee_get_balance",
+    description="Freee Preprocess",
+    output_paths=["_private/{name}/{now()}{ext()}"],
+    output_type="dataframe",
+)
+def freee_get_balance(context: MorphGlobalContext) -> pd.DataFrame:
+    access_token = get_auth_token("Freee")
+
+    # ↓↓↓ call the API with the access_token ↓↓↓
+```
+
+## Non-SQL databases
+
+Morph can also work with NoSQL and other non-SQL databases, including MongoDB and Cassandra. Data can be retrieved from these databases and analysed using Python, whose powerful libraries facilitate data manipulation and transformation, effectively extracting insights from non-SQL databases.
+
+## Interfacing via API
+
+Morph can also be integrated with various external services and applications via API. This allows real-time data to be retrieved from web services or integrated with other cloud services, for example.
+
+Users can use Python scripts to retrieve data from an API and process and analyse it directly within Morph. This allows for flexible and customisable data integration.
+
+## File upload
+
+Morph also supports data upload from local files. Users can easily upload CSV, JSON, or Excel files and import them into their workspace. Uploaded data is immediately available and can be analysed and visualised using Morph's tools.
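To illustrate the "call the API with the access_token" step from the snippet above, here is a minimal standard-library sketch. The endpoint URL is hypothetical and must be replaced with the real SaaS API; in a Morph cell the token would come from `get_auth_token`:

```python
import urllib.request

# In a Morph Python cell this would come from get_auth_token("Freee").
access_token = "dummy-token"

# Hypothetical endpoint -- substitute the actual SaaS API URL.
req = urllib.request.Request(
    "https://api.example.com/v1/reports/balance",
    headers={"Authorization": f"Bearer {access_token}"},
)

print(req.get_header("Authorization"))  # Bearer dummy-token
# urllib.request.urlopen(req) would send the request; the JSON response
# could then be parsed and turned into a pandas DataFrame to return.
```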
It allows users to easily manipulate data and gain insights quickly.
+
+---
+
+{/* TODO: embed a video */}
+{/* */}
+
+
+## 1. Import data into your workspace
+
+Morph allows users to easily import data from a variety of data sources into their workspace. Users can import a variety of file formats, including CSV, JSON, and Excel files. In addition, data can be imported directly from cloud storage and databases.
+
+## 2. Build a data pipeline on Canvas
+
+After data is imported into the workspace, a data pipeline is built on the canvas: Morph AI automates data cleaning, transformation, and integration for efficient data processing. Users can build the pipeline using drag-and-drop operations, and the AI suggests the best way to process the data.
+
+## 3. Edit code
+
+Morph provides a powerful editor for querying and analyzing data using SQL and Python. Users can edit code using Morph's built-in code editor, extract the data they need, and perform complex analyses. They can also see the results of their code execution in real time, allowing them to get quick feedback.
+
+## 4. Generate visualizations and reports
+
+Once the analysis is complete, visualize the data using Morph's extensive visualization tools. Graphs and charts can be easily created, and interactive dashboards can be built. The visualizations generated can be exported as reports or shared in real time. This allows you to effectively communicate data insights.
+
+## 5. Share with your team
+
+Morph allows you to easily share the data, pipelines, code, and visualizations you create with your team members. This facilitates collaboration across the team and efficient, data-driven decision-making.
\ No newline at end of file
diff --git a/docs/en/getting-started/why-morph.mdx b/docs/en/getting-started/why-morph.mdx
new file mode 100644
index 0000000..ce125e5
--- /dev/null
+++ b/docs/en/getting-started/why-morph.mdx
@@ -0,0 +1,62 @@
+---
+title: Why Morph?
+---
+
+Morph is a data workspace that focuses on ‘using’ data rather than ‘storing’ it.
+
+Today, the importance of collecting and storing data is well understood, and many companies operate data warehouses and databases. However, making good use of the data collected is not an easy task.
+
+In an age where critical business data is collected daily and new actions are required on a daily or weekly basis, it is essential to build a quick, flexible data utilization infrastructure to understand the true meaning of data.
+
+This section explains why it is important to incorporate data analysis and insight extraction into your workflow in an agile manner and to use the power of AI to better understand data, and why Morph is the best tool for such workflows.
+
+---
+
+## Challenges of traditional data tools
+
+Traditional data analysis and BI tools have the following problems:
+
+1. **High learning costs and long build times.** Many data tools have high learning costs, with proprietary extended programming languages and very complex configurations. They also require engineering effort to build, which must be taken into account.
+2. **Long lead times to make changes.** Suppose you have finally completed the build and have gained insights from your data. If new indicators emerge from those learnings that you want to monitor, you have to build the analysis flow and dashboards from scratch. You need to convene the engineers again and get the dashboard to reflect the requirements from the business unit.
+3. **Complex workarounds.** The problem becomes instantly more complex when you try to do more than what the tool has been designed specifically to do. In many cases, you will be forced to ask an engineer to build a workaround for you.
+
+In other words, it is like a waterfall model.
This may not be a problem if the sequence of requirements definition, design, construction, and operational testing can take weeks or months, and furthermore, if the system is built once and then used on an annual basis. + +However, in an era of rapidly changing business environments, you may feel the need for faster data utilization cycles to make data-driven decisions. + +## Agile and flexible data analysis and BI + +Agile approaches have been widely adopted in software development, but the mindset can also be applied to data analysis and business intelligence (BI). + +### Benefits of the agile approach + +1. **Rapid feedback loop:** agile methodologies allow for rapid feedback through a series of short iterations (sprints). This allows immediate validation of the results of the analysis and, if necessary, a change of direction. +2. **User-centred development:** agility allows development to be based on the needs of the user, so that data analysis can be tailored to the specific requirements of the business scene. This enables users to quickly obtain the information they need and make decisions more smoothly. +3. **Flexibility:** an agile approach can be flexible as the business environment and requirements change. New data sources can be added and analysis methods can be changed quickly, so that decisions are always based on the latest information. + +### Specific methods of agile data analysis + +1. **Incremental data collection and integration:** focus on the most important data sets in the early stages and adopt a phased approach to adding and integrating data. This allows for gradual expansion of data coverage while providing value early on. +2. **Continuous communication and collaboration:** communicate frequently within and outside the team and actively incorporate stakeholder feedback. Share progress and make necessary adjustments through regular meetings and review sessions. +3. 
**Data pipeline automation:** automates data collection, integration, cleaning and analysis, reducing the burden of manual work. This improves the efficiency of analytical work and makes agile processes more effective.
+4. **Deploy to the production environment:** deploy what you have tried in the sandbox environment smoothly to the production environment. This allows the results of data analysis to be quickly utilised in the next sprint.
+
+Agile data analytics and BI enable companies to extract value from their data more quickly and flexibly, giving their business a competitive edge, and Morph supports this approach, providing an environment where users can get the information they need in a timely manner.
+
+## Morph Features
+
+### Complete cloud infrastructure for data utilization
+
+Morph has all the cloud infrastructure needed for data analysis. This includes cloud-based Postgres providing advanced computing power, scalable storage solutions, and management of directed graph models to build data pipelines. Users can easily utilise this infrastructure and start analysing data quickly.
+
+### SQL & Python
+
+Morph supports both SQL and Python, the main languages of data analysis. This allows users to choose the language best suited to their own skill set and efficiently query and analyse data: they can perform simple queries using SQL or build advanced data science and machine learning models using Python. Furthermore, Morph seamlessly integrates these languages, allowing users to easily exchange data between them.
+
+### Support for all file formats
+
+Morph supports a variety of file formats, including CSV, JSON and Excel. This allows data from different sources to be easily imported and centrally managed. Users can integrate data from different formats into a unified view, significantly increasing the efficiency of data analysis. Data can also be exported flexibly and easily integrated with other systems and tools.
+
+### Morph AI
+
+Morph includes AI capabilities that help users gain a deeper understanding of their data and assist in the analysis process; Morph AI automatically detects patterns and trends in the data and provides key insights. In addition, an AI assistant helps users create queries and visualize data, making it easier to perform complex analysis tasks. This allows users to perform sophisticated analyses and quickly obtain information useful for decision-making, even if they do not have specialist data knowledge.
\ No newline at end of file
diff --git a/docs/en/morph-ai/features-of-morphai.mdx b/docs/en/morph-ai/features-of-morphai.mdx
new file mode 100644
index 0000000..0a8beab
--- /dev/null
+++ b/docs/en/morph-ai/features-of-morphai.mdx
@@ -0,0 +1,29 @@
+---
+title: 'Morph AI Features'
+---
+
+Morph AI provides advanced capabilities focused on data analysis to power users' data-driven decision-making. This section details the features and benefits of Morph AI.
+
+## Why Morph AI is special
+
+While analysis is possible in a regular AI chat, Morph AI is specifically designed for data analysis and provides more accurate results for your requirements. It differs from regular AI chat in the following ways:
+
+- **Understands the data schema:** because Morph AI understands the schema (structure) of the user's database, it can understand the relationships between tables and the meaning of columns, enabling accurate analysis that takes data relationships into account. This ensures that even complex queries get the right results quickly.
+- **Can take into account the schema resulting from previous and subsequent data processing:** the schema generated at each step of data processing can be understood and passed on appropriately to the next step. This allows data processing to proceed efficiently while maintaining the integrity of the entire pipeline.
+- **Can be given knowledge such as the meaning of each column:** users can provide the AI with knowledge about the meaning and use of columns, which allows the AI to perform more accurate queries and analyses. For example, it can analyze sales trends with the understanding that certain columns in sales data represent revenue.
+
+## Build multi-step data pipelines
+
+Morph AI can automatically build complex data pipelines based on user requirements. All the user has to do is tell Morph AI the desired analysis results, and Morph AI will design and execute all the necessary steps. For example, even in complex cases involving multiple SQL queries or Python scripts, Morph AI will smoothly build the pipeline. Furthermore, the code for each step can be manually modified as required, allowing fine-tuning while maintaining the integrity of the entire pipeline.
+
+## Generate and modify code
+
+Morph AI can automatically generate and modify SQL and Python code for each step. This allows users to perform advanced data analysis without any technical expertise. Furthermore, even when connected to an external database, Morph AI generates optimised code that takes the database schema into account. For example, if new tables are added or existing columns are modified, it automatically adapts and provides accurate queries.
+
+## Asking questions about data
+
+Morph AI can answer questions about analysis and aggregate results quickly and accurately. Users can ask questions about specific data points or trends in natural language to gain detailed insights, for example, ‘What are the top 10 selling products this month?’ or ‘Which regions have the highest customer satisfaction?’
+
+## Generate a report
+
+Morph AI generates rich reports based on aggregation and visualization results. Users can customise the content of the reports using prompts.
For example, the format and content of reports can be tailored to specific needs, such as an ‘agenda for the next marketing meeting’ or a ‘monthly report to management’. The generated reports are visually attractive and easy to understand, making them a powerful tool to support decision-making.
\ No newline at end of file
diff --git a/docs/en/rest-api/authentication.mdx b/docs/en/rest-api/authentication.mdx
new file mode 100644
index 0000000..a9dbc64
--- /dev/null
+++ b/docs/en/rest-api/authentication.mdx
@@ -0,0 +1,40 @@
+---
+title: 'Authentication'
+---
+
+To use Morph's API, an API key must be included in the header of the request; the API key can be obtained from the Morph dashboard and ensures secure access.
+
+
+
+## How to obtain an API key
+
+1. From the Home screen, open the ‘Secrets’ tab to display the following page.
+2. Next, press the ‘Create Secret’ button, enter the Secret name and Source IP, and confirm with the ‘Create’ button.
+Note: if the Source IP is left blank, all IPs are allowed; to restrict the Source IP, enter a comma-separated list.
+3. If the created API key is no longer required, it can be deleted with the ‘Delete’ button.
+4. Press the ‘Delete’ button in the confirmation modal that appears, and the API key will be deleted completely.
+**Note: this operation cannot be undone.**
+
+## API URL
+
+The URL of the API has the following format:
+
+BaseURL: `https://beta-api.morphdb.io/v0/rest/[Your_DatabaseID]/`
+
+**How to check the database ID `[Your_DatabaseID]`**
+
+Within the Morph workspace screen, the database ID is displayed following `/workspace`.
+
+`https://beta-app.morphdb.io/workspace/[database_id]?selected...`
+
+**How to check the table name `[Your_Table_Name]`**
+
+When querying a specific table in the database schema, the URL of the API has the form:
+
+URL: `https://beta-api.morphdb.io/v0/rest/[Your_DatabaseID]/[Your_Table_Name]`
+
+The table name is what you see in the Built-in Tables in the side menu.
The table name is also what appears in the editor tab when you select a table. \ No newline at end of file diff --git a/docs/en/rest-api/overview.mdx b/docs/en/rest-api/overview.mdx new file mode 100644 index 0000000..ae662b3 --- /dev/null +++ b/docs/en/rest-api/overview.mdx @@ -0,0 +1,46 @@ +--- +title: 'Overview' +--- + +Morph automatically generates and provides an API based on the database schema created. This allows data integration and application building using Morph's database through the API without the need to implement additional code. + +The API is designed in a RESTful format and can be accessed securely using API keys created from within the Morph dashboard. + +## Features + +The Morph database API is provided by [PostgREST](https://postgrest.org/) and offers the following benefits: + +- **Efficient development and collaboration:** + - The API is generated directly from the database schema, significantly reducing back-end development time and costs. +- **Auto-update/Immediacy:** + - Changes to the database schema are immediately reflected in the API, eliminating the need for manual synchronisation and maintenance. +- **Performance:** + - The lightweight, stateless design allows direct execution of SQL queries for extremely fast data access. +- **Flexibility:** + - In addition to simple CRUD operations, complex operations such as query combination, filtering and sorting can be easily performed. +- **Security:** + - Strict type checking and automatic escaping of query parameters provide a high level of protection against attacks such as SQL injection. + +### RESTful architecture + +The API employs standard HTTP methods and provides intuitive and easy-to-use CRUD operations based on a RESTful architecture. Its design allows for extensive integration with various programming languages and applications, making it easy for developers to integrate the system. 
+ +In addition, the following advanced features are also provided: + +- **Filtering and searching:** + - Complex query parameters can be used to precisely extract only those records that match specific criteria. +- **Sorting and pagination:** + - Sort the result set and use pagination to provide the data in small chunks. +- **Relational query:** + - Data can be retrieved from multiple related tables in a single query. This allows data to be retrieved by utilising the relationships in the relational database without the need for multiple API calls. +- **Aggregation:** + - Aggregation features such as grouping, counting, averaging, maximum and minimum values can be included in queries. +- **Batch requests:** + - Multiple API calls can be executed in a single HTTP request, reducing network latency and ensuring efficient data processing. + +In addition to these, various other functions are provided by [PostgREST](https://postgrest.org/). For more information, please see the following reference documents: + +### Reference: + +- [PostgREST official documentation](https://postgrest.org/en/v12/) +- [PostgREST sourcecode](https://github.com/PostgREST/postgrest) \ No newline at end of file diff --git a/docs/en/rest-api/quickstart.mdx b/docs/en/rest-api/quickstart.mdx new file mode 100644 index 0000000..3de2ee2 --- /dev/null +++ b/docs/en/rest-api/quickstart.mdx @@ -0,0 +1,68 @@ +--- +title: 'Quickstart' +--- + +This chapter provides step-by-step instructions on how to perform basic CRUD operations (Create, Read, Update, and Delete) using Morph's API. Through this tutorial, you can easily start working with Morph's database. + +In this example, the API is invoked using the cURL command, so you can try it out by pasting the command into your terminal environment or an API client tool such as [Postman](https://www.postman.com/). 
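The cURL examples that follow can equally be composed in Python using only the standard library. Here is a sketch that builds (but does not send) the Create request; the database ID, table slug, and API key are placeholders:

```python
import json
import urllib.request

BASE_URL = "https://beta-api.morphdb.io/v0/rest"
database_id = "your-database-id"  # placeholder
table_slug = "your-table-slug"    # placeholder
api_key = "your-api-key"          # placeholder

url = f"{BASE_URL}/{database_id}/{table_slug}"

# Equivalent of the Create (POST) cURL example below.
req = urllib.request.Request(
    url,
    method="POST",
    data=json.dumps({"column1": "value1", "column2": "value2"}).encode(),
    headers={"x-api-key": api_key, "Content-Type": "application/json"},
)

print(req.full_url)      # https://beta-api.morphdb.io/v0/rest/your-database-id/your-table-slug
print(req.get_method())  # POST
# urllib.request.urlopen(req) would execute the call.
```

Changing `method` to `"GET"`, `"PUT"`, or `"DELETE"` yields the other CRUD requests in the same way.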
+
+### API calls
+
+*Note: replace `[Your_DatabaseID]`, `[Your_Table_Slug]`, and `[Your_API_Key]` with your own values.*
+
+**Create:**
+
+To create new data, use the POST method.
+
+```sh
+curl --location --request POST 'https://beta-api.morphdb.io/v0/rest/[Your_DatabaseID]/[Your_Table_Slug]' \
+--header 'x-api-key: [Your_API_Key]' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "column1": "value1",
+    "column2": "value2"
+}'
+```
+
+**Read:**
+
+To read data from a table, use the GET method.
+
+```sh
+curl --location 'https://beta-api.morphdb.io/v0/rest/[Your_DatabaseID]/[Your_Table_Slug]' \
+--header 'x-api-key: [Your_API_Key]'
+
+```
+**Update:**
+
+To update existing data, use the PUT method.
+
+```sh
+curl --location --request PUT 'https://beta-api.morphdb.io/v0/rest/[Your_DatabaseID]/[Your_Table_Slug]' \
+--header 'x-api-key: [Your_API_Key]' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "column1": "new_value1"
+}'
+```
+
+**Delete:**
+
+To delete data, use the DELETE method.
+
+```sh
+curl --location --request DELETE 'https://beta-api.morphdb.io/v0/rest/[Your_DatabaseID]/[Your_Table_Slug]' \
+--header 'x-api-key: [Your_API_Key]' \
+--header 'Content-Type: application/json' \
+--data-raw '{
+    "column1": "value_to_delete"
+}'
+
+```
+
+In addition to these, [PostgREST](https://postgrest.org/) allows various query patterns to be realized according to the user's application. For more information, please see the following reference documents.
+
+### Reference:
+
+- [PostgREST official documentation](https://postgrest.org/en/v12/)
+- [PostgREST source code](https://github.com/PostgREST/postgrest)
\ No newline at end of file
diff --git a/docs/en/team-setting/credit.mdx b/docs/en/team-setting/credit.mdx
new file mode 100644
index 0000000..63152ae
--- /dev/null
+++ b/docs/en/team-setting/credit.mdx
@@ -0,0 +1,25 @@
+---
+title: 'Credit'
+---
+
+Morph manages the execution of the cloud machines that run workspaces and AI functions by consuming credits.
+
+A certain number of credits are packaged with each usage plan, and additional credits can be purchased.
+
+## Credit usage
+
+Each function consumes credits according to the tables below:
+
+| Credits consumed per hour | CPU | Memory |
+| :--- | :--- | :--- |
+| 30 | 2vCPU | 4GB |
+
+| Credits consumed | AI function |
+| :--- | :--- |
+| 6 | Chat messages |
+| 6 | Report generation |
+| 18 | Multistep generation |
+
+## Credit management
+
+You can check how many credits you have used, and purchase additional credits, from the ‘Credit’ tab on the home screen.
\ No newline at end of file
diff --git a/docs/en/team-setting/manage-members.mdx b/docs/en/team-setting/manage-members.mdx
new file mode 100644
index 0000000..b4c24a1
--- /dev/null
+++ b/docs/en/team-setting/manage-members.mdx
@@ -0,0 +1,11 @@
+---
+title: 'Manage Members'
+---
+
+Team members can be added and managed from the ‘Members’ tab on the home page. Member permissions include Admin and General.
+
+Admin users are authorised to perform the following operations:
+
+- Manage users
+- Manage plans
+- Manage credits
\ No newline at end of file
diff --git a/docs/en/workspace/about-canvas.mdx b/docs/en/workspace/about-canvas.mdx
new file mode 100644
index 0000000..da29d7a
--- /dev/null
+++ b/docs/en/workspace/about-canvas.mdx
@@ -0,0 +1,20 @@
+---
+title: 'Canvas'
+---
+
+Morph's Canvas is an interactive visual tool that allows users to intuitively design and manage data pipelines.
Canvas allows users to visually understand and efficiently manipulate complex data flows.
+
+Canvas Overview
+
+## Creating a visual data pipeline
+
+Canvas provides an intuitive interface for building data pipelines with drag-and-drop operations. SQL, Python, and JSON files can be placed on the canvas.
+
+
\ No newline at end of file
diff --git a/docs/en/workspace/about-workspace.mdx b/docs/en/workspace/about-workspace.mdx
new file mode 100644
index 0000000..baf095f
--- /dev/null
+++ b/docs/en/workspace/about-workspace.mdx
@@ -0,0 +1,28 @@
+---
+title: 'Workspace Overview'
+---
+
+The Morph workspace consists of three sections:
+
+- **Code:** a VS Code-based editor and a proprietary framework allow you to build data applications and analyse data quickly.
+- **Database:** built-in PostgreSQL can be used. You can build the data marts you want to use during the analysis process and in your data applications, enabling more flexible data utilization.
+- **Connections:** create RDB / data warehouse / SaaS connections and start analysing immediately without writing complicated authentication code.
+
+Workspace Overview
+
+You can find out more about each section on the following pages:
+
+
+
+  An overview of data analysis and data application building.
+
+
+  The Built-in PostgreSQL available in the Database section.
+
+
+  How to create and use connections in the Connections section.
+
+
diff --git a/docs/en/workspace/data-application.mdx b/docs/en/workspace/data-application.mdx
new file mode 100644
index 0000000..91d5444
--- /dev/null
+++ b/docs/en/workspace/data-application.mdx
@@ -0,0 +1,87 @@
+---
+title: 'Data Applications'
+---
+
+Morph makes it easy to create rich data applications using a combination of markdown formats and React components.
+
+This section provides step-by-step instructions for building data applications.
+
+## MDX
+
+When building data applications in Morph, use [MDX](https://mdxjs.com/). For more information on how to use MDX, see the [official MDX documentation](https://mdxjs.com/).
+
+## 0. Setup
+
+{/* Normally this step is run automatically by the workspace system, so it can be skipped. Try it only if startup fails. */}
+
+To build a data application, first open the Morph workspace and execute the following commands. This may take several minutes.
+
+```bash
+npm i -S @use-morph/page-build && npx morph-page init
+```
+
+
+## 1. Create MDX files
+
+
+
+First, create an MDX file to describe the content of your data application. MDX files combine text written in markdown format with React components to create rich content.
+Create the MDX file in the `src` directory of your workspace home directory.
+
+## 2. Edit MDX files
+
+Once the MDX file has been created, its contents can be edited.
+
+One very important requirement is that each file must export two variables:
+
+- `name`: The pathname of the page for this file. For example, if name is `index`, the path is `/`, and if name is `about`, the path is `/about`.
+
+- `title`: The title of the page for this file. It appears in the menu and as the page title on the screen.
+
+The following is an example of an `index.mdx` file:
+
+```tsx index.mdx
+
+export const name = 'index';
+export const title = 'Top page';
+
+# Top page
+
+This is the top page of the data application.
+
+```
+
+Normal markdown notation can be used.
+
+## 3. Using Morph data
+
+To use Morph data, a dedicated component is used, passing the name of the Python function or SQL file as a property.
+
+Below is an example of a table display of a DataFrame from a Python run:
+
+```tsx index.mdx
+
+export const name = 'index';
+export const title = 'Top page';
+
+# Top page
+
+This is the top page of the data application.
+
+import { DataTable } from '@use-morph/page';
+
+{/* Usage sketch: the exact props are documented on the Data Application page */}
+<DataTable />
+
+```
+
+For detailed specifications of Morph components, see the [Data Application page](/data-application/en/how-to-build-data-application).
+
+## 4. Launching data applications
+
+To start the data application, click on the Preview button in the top right-hand corner of the MDX file editing screen.
\ No newline at end of file
diff --git a/docs/en/workspace/environment.mdx b/docs/en/workspace/environment.mdx
new file mode 100644
index 0000000..decc3a3
--- /dev/null
+++ b/docs/en/workspace/environment.mdx
@@ -0,0 +1,53 @@
+---
+title: 'Development Environment'
+---
+
+The Code section of the Morph workspace provides a development environment for building data analyses and data applications.
+
+Every workspace is allocated its own VM (Virtual Machine), so each user has their own completely separate workspace.
+Even within the same team, each user has a completely separate VM, so there is no conflict in work. To enable collaboration between engineers, each VM is connected to a managed GitLab.
+
+You can also use a VS Code-based editor for writing and executing code on the VM to build data applications.
+
+The workspace is organised as follows:
+
+Workspace Architecture
+
+## Using Git
+
+Code automatically installs a built-in Git in the VM. Therefore, if you want to share your work on a workspace with other members, you need to commit it to Git.
+
+A given workspace is always tied to the same GitLab project, so you can share your work with other members by committing your code.
+
+You can use VS Code's default Git functionality or Git from the Terminal. Of course, you can also install your preferred Git extension from the VS Code Extensions view.
+
+Workspace Git
+
+### About the framework
+
+The Morph framework allows you to use a range of tools to analyse, visualise and operationalise your data.
+The following pages show you how to build each of these:
+
+
+
+  SQL and Python can be used to analyse data.
+ + + GUIs make it easy to perform visualisations from aggregated data. + + + Rich data applications can be easily created using MDX. + + + Use Canvas to visualise data flows. + + + DAGs created in SQL or Python can be executed on a scheduled basis. + + diff --git a/docs/en/workspace/job-schedule.mdx b/docs/en/workspace/job-schedule.mdx new file mode 100644 index 0000000..7ca285a --- /dev/null +++ b/docs/en/workspace/job-schedule.mdx @@ -0,0 +1,29 @@ +--- +title: 'Job Scheduling' +--- + +This section describes a mechanism for scheduling and executing data pipelines created in SQL or Python. + +Functions such as `load_data` in SQL and `@morph.load_data()` in Python allow multiple processes to be built as a pipeline. +Scheduled execution of this pipeline can automate recurring tasks such as daily sales totals. + +As shown below, when you open the SQL/Python file, ‘Run Schedules’ appears in the tab on the right-hand side. +Here you can check the jobs currently set up and register new jobs. + +Job Schedule + +## Job scheduling settings + +Job scheduling should be configured for the last function in the pipeline. +The `load_data` function also executes the function to be loaded at runtime, so any data pipeline can be executed periodically by specifying the last function in the data pipeline. + +The execution date and time can be set flexibly, such as ‘every day’ or ‘only on weekdays’. The execution time can also be set in 15-minute increments. + + \ No newline at end of file diff --git a/docs/en/workspace/sql-python.mdx b/docs/en/workspace/sql-python.mdx new file mode 100644 index 0000000..c83cf46 --- /dev/null +++ b/docs/en/workspace/sql-python.mdx @@ -0,0 +1,110 @@ +--- +title: 'Build with SQL & Python' +--- + +The workspace allows data analysis using SQL and Python. + +When you open the workspace, a `src` directory exists in the project root. Place and run SQL and Python files in this `src` directory. + +The `morph_project.yml` file is set to `source_paths: src`. 
+It is possible to change the directory where the source code is placed by changing this value.
+
+Workspace Editor SQL
+
+### Executing source code
+
+The SQL or Python you have written can be executed using the ‘RUN’ button in the top right-hand corner of the editor, as shown in the image below.
+
+When executed, the corresponding Morph CLI log is output in the terminal, and the resulting data is displayed in the ‘Result’ tab in the right sidebar.
+
+Workspace Editor SQL Run
+
+## Implementing source code
+
+SQL and Python on Morph are run by the Morph CLI installed on the workspace VM. The CLI builds on the underlying database and standard Python, and provides extensions that enable more powerful data analysis.
+
+In addition, in Morph, each analysis file is given a name at runtime. This name makes it easier to access the file's results from other files.
+This mechanism is a very important part of the Morph framework.
+
+Note that duplicate names at runtime will result in a compile error.
+
+### SQL
+
+SQL can be written with Jinja2, and within Jinja you can use Morph's own `config` function for writing metadata.
+In `config`, you can describe what the query does. This leaves the processing details both for team members and for Morph AI to interpret the developer's intentions.
+
+In the SQL below, the `config` function is used to name the query and describe its processing. Using Jinja in this way allows metadata to be written directly in the SQL.
+
+```sql
+{{
+    config(
+        name="example_sql_cell",
+        description="Example SQL cell",
+    )
+}}
+
+select * from customers limit 10
+```
+
+If the `config` function is omitted, or no `name` is given, the SQL file name is used as the name.
+Example: for `example_sql_cell.sql`, `example_sql_cell` is used as the name.
+
+In Morph, writing a semicolon at the end of SQL is forbidden, to prevent multiple SQL statements in one SQL file. If you have written one, an error message will be displayed; delete the semicolon and run again.
+
+For detailed `config` settings and other grammar rules, see the following detailed pages:
+
+Describes how to run SQL on Morph.
+
+### Python
+
+In Python files, adding the `@morph.func` decorator registers a function as executable on Morph. This mechanism allows a single Python file to contain and execute multiple functions.
+
+As with SQL, you can give a function a name and description by passing `name` and `description` as arguments to `@morph.func`.
+
+In addition, `@morph.load_data` can be used to load the results of other SQL/Python executions directly into variables. The loaded data can be referenced via the `context` argument.
+In the following example, the result of a SQL execution named `example_sql_cell` is read by a Python function named `example_python_cell`, which returns the loaded result as is.
+
+`context.data` is a dictionary keyed by the names of the loaded targets.
+
+```python
+import pandas as pd
+
+import morph
+from morph import MorphGlobalContext
+
+@morph.func(
+    name="example_python_cell",
+    description="Example Python cell",
+)
+@morph.load_data("example_sql_cell")
+def main(context: MorphGlobalContext) -> pd.DataFrame:
+    sql_result_df = context.data["example_sql_cell"]
+    return sql_result_df
+```
+
+Python functions must always be given `@morph.func`, but arguments such as `name` are not required. If not specified, the function name is used as-is as the name.
+
+For more information on convenience functions, including detailed settings for the `@morph.func` / `@morph.load_data` decorators, see the details page below:
+
+Describes how to run Python on Morph.
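Once loaded via `@morph.load_data`, the data in `context.data` is an ordinary pandas DataFrame, so the body of a Morph function is just standard pandas code. The sketch below shows the kind of transformation such a function body might perform; the column names and sample values are illustrative, not part of the Morph API.

```python
import pandas as pd

def summarise_sales(df: pd.DataFrame) -> pd.DataFrame:
    # Aggregate the total amount per customer and rank customers by spend,
    # mirroring a typical step in a Morph data pipeline.
    return (
        df.groupby("customer_id", as_index=False)["amount"]
        .sum()
        .sort_values("amount", ascending=False)
        .reset_index(drop=True)
    )

# Stand-in for the result of an upstream SQL cell such as `example_sql_cell`.
sales = pd.DataFrame({
    "customer_id": ["a", "b", "a", "c"],
    "amount": [100, 50, 25, 75],
})

summary = summarise_sales(sales)
print(summary)
```

Inside an actual Morph function, `sales` would instead come from `context.data["example_sql_cell"]`.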
+
+
diff --git a/docs/en/workspace/template.mdx b/docs/en/workspace/template.mdx
new file mode 100644
index 0000000..15f6482
--- /dev/null
+++ b/docs/en/workspace/template.mdx
@@ -0,0 +1,139 @@
+---
+title: 'Template'
+---
+
+Morph has a template facility to make writing SQL and Python code easier.
+There are two types of templates.
+
+## Built-in Template
+
+When source code is open in the workspace, standard templates created by the Morph team can be seen in the sidebar on the right.
+These templates act as snippets of the Morph framework; by copying one, you can immediately start implementing the analysis itself.
+
+Workspace Template
+
+If you have any requests for templates, please feel free to send them to `shibata@morphdb.io`!
+
+## Custom Template
+
+In the `templates` folder, user-defined template files can be created and managed.
+
+New files can be created from these templates with the `morph create` command. You can also search for them with the `morph search` command.
+
+### Steps to create a template
+
+**(Step 1) Create template files**
+
+Template files are created under the `templates` directory.
+
+The following placeholders can be defined in a template. (Their values are specified via the `morph create` command options described below, and the file is created with the placeholders replaced.)
+
+- `${MORPH_NAME}`: replaced by the name (alias) of the new file. (Specified by the `--name` option of the `morph create` command.)
+- `${MORPH_DESCRIPTION}`: replaced by the description. (Specified by the `--description` option of the `morph create` command.)
+
+E.g:
+
+```python
+import pandas as pd
+
+import morph
+from morph import MorphGlobalContext
+
+# morph.func is a decorator that takes in the following parameters:
+# name: The identifier for the file alias. The function name will be used if not provided.
+# description: The description for the function.
+# output_paths: The destination paths for the output.
+# output_type: The return type of the function.
+@morph.func(
+    name="${MORPH_NAME}",
+    description="${MORPH_DESCRIPTION}",
+    output_paths=["_private/{name}/{now()}{ext()}"],
+    output_type="dataframe",
+)
+def main(context: MorphGlobalContext) -> pd.DataFrame:
+    return pd.DataFrame({"key1": [3, 2, 1], "key2": [5, 4, 3]})
+```
+
+**(Step 2) Edit template.yaml**
+
+Edit `template.yaml` directly under the `templates` folder; see the example below for the syntax.
+
+Note: `src` can be given as a full path or as a path relative to `template.yaml`.
+
+E.g:
+
+```yaml
+# [Description]
+# This file is used to manage your local templates.
+#
+# [Syntax]
+# This file is written in YAML format. The following is an example of the syntax.
+# E.g.)
+# templates:
+#   - name: [required]
+#     title:
+#     description:
+#     src: [required]
+#     language: [required|options:"python", "sql"]
+#
+# [Compiling]
+# After editing this file, you need to compile it using the following command.
+# This command will validate the syntax of the file and make sure your templates are available.
+# E.g.)
+# $ morph template compile
+
+templates:
+  - name: test
+    title: Test Template
+    description: This is a test template.
+    src: ./python/test.py
+    language: python
+```
+
+**(Step 3) Compile**
+
+You can check the validity of `template.yaml` with the following command. Once you have created the template file and edited `template.yaml`, run the following command from the VS Code terminal.
+
+```bash
+$ morph template compile
+$ # If you run it and nothing is displayed, you have succeeded.🎉
+```
+
+## How to use templates
+
+The templates you set up can be used from the menu on the right-hand side of the workspace, or when creating a new canvas.
+
+**Select from the list of templates**
+
+Workspace Template Menu
+
+**Select from canvas**
+Workspace Template Canvas
+
+### How to use templates from the CLI
+
+New files can be created with the following command:
+
+※The `