In your editor, initialize the tokenizer like this:
var t = new SqlTokenizer(sql);
Then call any of these methods — no need to touch .sqlTokenized or know internal token positions:
t.getTables() → ["orders", "customers"]

t.getTables({withAliases: true}) → [{name, alias, schema, statementIndex}]

t.getColumns() → [{column, alias, table, role, statementIndex, statementType}]
  The role field is "target" (INSERT/UPDATE/CREATE columns being written), "source" (INSERT...SELECT source columns), or "select" (SELECT output columns). Entries may also carry a value field, and for CREATE TABLE each entry gets a datatype.

t.getStatements() → [{index, type, sql, tables, columns, ctes, where, targetTable, targetColumns, sourceColumns}]

t.getStatement(index) → the same detailed object as above for a single statement, looked up by its index.

t.getStatementTypes() → ["SELECT", "INSERT", "UPDATE", "CREATE TABLE", "DELETE"]

t.getStatementCount() → the total number of statements parsed, as an integer.

t.getCTEs() → [{name, recursive, columns, body, statementIndex}]

t.getSQL() → the reconstructed, normalized SQL string.

Every method that returns per-statement data accepts an optional {statement: n} parameter to filter the results to a single statement. All return shapes are flat objects with string/array fields; no internal token positions are leaked through the API.
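To make the statement-level shapes concrete, here is a minimal, self-contained sketch of what getStatementTypes()-style output looks like. This is not the real SqlTokenizer implementation: the naive split on semicolons is an assumption for demonstration only (it would break on semicolons inside string literals), and the function name statementTypes is hypothetical.

```javascript
// Hypothetical sketch, NOT the real SqlTokenizer: classifies each
// statement in a script by its leading keyword, mirroring the array
// returned by t.getStatementTypes().
function statementTypes(sql) {
  return sql
    .split(";")                     // naive split; real tokenizers handle quoted ';'
    .map((s) => s.trim())
    .filter((s) => s.length > 0)
    .map((s) => {
      const head = s.toUpperCase();
      // Two-word type for CREATE TABLE, single keyword otherwise.
      if (head.startsWith("CREATE TABLE")) return "CREATE TABLE";
      return head.split(/\s+/)[0];  // "SELECT", "INSERT", "UPDATE", "DELETE", ...
    });
}

const sql = "SELECT id FROM orders; INSERT INTO customers (id) VALUES (1)";
const types = statementTypes(sql); // ["SELECT", "INSERT"]
const count = types.length;        // analogous to t.getStatementCount() → 2
console.log(types, count);
```

The same per-statement indexing is what the optional {statement: n} filter keys on: each entry's position in this array is its statementIndex.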