
Conversation

@xfh (Contributor) commented Jul 27, 2025

Hi

I have some existing database schema that uses postgres' time datatype.
Zero doesn't allow me to sync these tables.

This is an attempt to add support for time, relying on postgres' parser for string formats.
https://www.postgresql.org/docs/current/datatype-datetime.html#DATATYPE-DATETIME-INPUT-TIMES

I don't know how I can build a version of zero that I can test locally. Some tests are failing, but they are unrelated to my changes.
Any guidance is welcome.

vercel bot commented Jul 27, 2025

@xfh is attempting to deploy a commit to the Rocicorp Team on Vercel.

A member of the Team first needs to authorize it.

@xfh xfh marked this pull request as draft July 27, 2025 21:31
@xfh xfh changed the title from "draft: add support for 'time' datatype" to "feat(zero): add support for 'time' datatype" Jul 27, 2025
@tantaman (Contributor) commented

Curious why you chose string rather than bigint, storing nanosecond precision in zero-cache for time?

Seems easier to deal with time as "number of nanoseconds since the beginning of the day" rather than as a string.

@tantaman tantaman marked this pull request as ready for review July 28, 2025 13:55
@xfh (Contributor, Author) commented Jul 28, 2025

I think it is a lot more natural to work with strings, because of the browser support:
The HTML time input element supports strings.
The recent Temporal PlainTime supports parsing an RFC 9557 string as well (using the from static method).

Numeric values are a bit odd for time, because you always have to convert them to something user-facing, whereas strings can often be used directly. Also, you need to know the resolution of a numeric value, but you can't tell it from the value alone, which I have always found harder to work with.
By the way, postgres supports only microseconds resolution for time.
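The resolution ambiguity mentioned here can be sketched concretely (a hypothetical helper, not part of the PR): the same bare number denotes different clock times depending on the unit you assume.

```typescript
// Sketch: 30_600_000 reads as 08:30:00 if it is milliseconds since
// midnight, but as 00:00:30 if it is microseconds — the value alone
// does not carry its resolution.
function describeAsTime(value: number, unit: 'ms' | 'us'): string {
  const ms = unit === 'ms' ? value : value / 1000;
  const totalSeconds = Math.floor(ms / 1000);
  const pad = (n: number) => n.toString().padStart(2, '0');
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  return `${pad(hours)}:${pad(minutes)}:${pad(seconds)}`;
}

console.log(describeAsTime(30_600_000, 'ms')); // "08:30:00"
console.log(describeAsTime(30_600_000, 'us')); // "00:00:30"
```

A string such as "08:30:00" carries no such ambiguity, which is the crux of the argument.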

@xfh xfh force-pushed the support-pg-time-type branch from e893e42 to f721687 on July 29, 2025 09:30
@xfh (Contributor, Author) commented Jul 29, 2025

Hi @tantaman, please let me know what the idea behind the pipeline-driver test is, regarding this column definition:

Should the test reflect how it deals with unsupported datatypes? In that case, the "ignored" type can simply change to BYTEA to make the test pass.

Based on the values that are set later on and the git history, these look like typical timestamp values.

INSERT INTO ISSUES (id, closed, ignored, _0_version) VALUES ('1', 0, 1728345600000, '123');
INSERT INTO ISSUES (id, closed, ignored, _0_version) VALUES ('2', 1, 1722902400000, '123');

I can change the type to timestamp and update the expectations, or set some time values as strings. Let me know which you prefer.

@xfh xfh force-pushed the support-pg-time-type branch from f721687 to 3c99e04 on July 29, 2025 12:47
@tantaman (Contributor) commented Aug 1, 2025

Pulling this down to check

@tantaman (Contributor) commented Aug 4, 2025

Microseconds since 00:00 just seems less likely to get munged up, and much easier to use any time you need to do some sort of computation on a date.

To convert to a string, it would be:

function microsecondsToTimeString(microseconds: number): string {
  // Convert microseconds to seconds
  const totalSeconds = Math.floor(microseconds / 1_000_000);
  
  // Get the fractional microseconds (remainder that does not fit into a whole second)
  const fractionalMicroseconds = microseconds % 1_000_000;
  
  // Calculate hours, minutes, and seconds
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  
  // Format with leading zeros
  const hoursStr = hours.toString().padStart(2, '0');
  const minutesStr = minutes.toString().padStart(2, '0');
  const secondsStr = seconds.toString().padStart(2, '0');
  const microsecondsStr = fractionalMicroseconds.toString().padStart(6, '0');
  
  return `${hoursStr}:${minutesStr}:${secondsStr}.${microsecondsStr}`;
}
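For the opposite direction, a hypothetical inverse (my sketch, not from the PR) that parses the normalized `HH:MM:SS[.ffffff]` form back into microseconds since midnight:

```typescript
// Sketch: parse a normalized "HH:MM:SS[.ffffff]" string back into
// microseconds since midnight.
function timeStringToMicroseconds(time: string): number {
  const match = /^(\d{2}):(\d{2}):(\d{2})(?:\.(\d{1,6}))?$/.exec(time);
  if (!match) {
    throw new Error(`Unsupported time string: ${time}`);
  }
  const [, hours, minutes, seconds, fraction = ''] = match;
  const wholeSeconds =
    Number(hours) * 3600 + Number(minutes) * 60 + Number(seconds);
  // Right-pad the fraction so ".5" means 500_000 microseconds.
  return wholeSeconds * 1_000_000 + Number(fraction.padEnd(6, '0'));
}

console.log(timeStringToMicroseconds('08:30:00.000001')); // 30600000001
```

Postgres accepts many laxer input spellings, so a real implementation would need broader parsing than this regex.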

@tantaman (Contributor) commented Aug 5, 2025

The problem with the tests was that all time columns were previously expected to be ignored during replication. Updating those. Also going with milliseconds since 00:00, since that matches how we handle date.

@xfh (Contributor, Author) commented Aug 5, 2025 via email

@tantaman tantaman force-pushed the support-pg-time-type branch 2 times, most recently from da03c56 to 515ffd5 on August 7, 2025 16:23
@tantaman tantaman requested a review from darkgnotic August 7, 2025 16:24
@tantaman (Contributor) commented Aug 7, 2025

OK, this should be good to go now. Waiting on tests, then I'll merge today.

@tantaman tantaman force-pushed the support-pg-time-type branch 2 times, most recently from e681660 to 60bedcd on August 7, 2025 16:44
@tantaman tantaman force-pushed the support-pg-time-type branch from 60bedcd to 3ef453c on August 7, 2025 16:58
@tantaman tantaman enabled auto-merge (squash) August 7, 2025 17:30
@tantaman tantaman merged commit dfdd7f7 into rocicorp:main Aug 7, 2025
11 of 14 checks passed
['float4', 'number'],
['float8', 'number'],
['date', 'number'],
['time', 'string'],
@xfh (Contributor, Author) commented

I think this should change to number, because you changed zero's format to milliseconds since 00:00.


// Date/Time types
'date': 'number',
'time': 'string',
@xfh (Contributor, Author) commented

Same here. This should now be type number.

@tantaman (Contributor) commented Aug 8, 2025 via email

@xfh (Contributor, Author) commented Aug 8, 2025

fixed in #4709

@aboodman (Contributor) commented

This PR didn't work, at least with custom mutators. When a mutator runs I get:

Error: Schema incompatibility detected between your zero schema definition and the database:

  - Type mismatch for column "t" in table "message": time without time zone is currently unsupported in Zero. Please file a bug at https://bugs.rocicorp.dev/

Please update your schema definition to match the database or migrate your database to match the schema.
    at assert (file:///Users/aa/work/hello-zero-solid/node_modules/@rocicorp/zero/out/shared/src/asserts.js:3:15)
    at getServerSchema (file:///Users/aa/work/hello-zero-solid/node_modules/@rocicorp/zero/out/zero-server/src/schema.js:88:5)
    at process.processTicksAndRejections (node:internal/process/task_queues:105:5)
    at async makeServerTransaction (file:///Users/aa/work/hello-zero-solid/node_modules/@rocicorp/zero/out/zero-server/src/custom.js:23:26)

This appears to be happening because this create table statement:

CREATE TABLE "message" (
  "id" VARCHAR PRIMARY KEY,
  "sender_id" VARCHAR REFERENCES "user"(id),
  "medium_id" VARCHAR REFERENCES "medium"(id),
  "body" VARCHAR NOT NULL,
  "timestamp" TIMESTAMP not null,
  "t" TIME not null
);

leads to this datatype:

user@127.0.0.1:postgres> describe message;
+-----------+-----------------------------+-----------+
| Column    | Type                        | Modifiers |
|-----------+-----------------------------+-----------|
| id        | character varying           |  not null |
| sender_id | character varying           |           |
| medium_id | character varying           |           |
| body      | character varying           |  not null |
| timestamp | timestamp without time zone |  not null |
| t         | time without time zone      |  not null |
+-----------+-----------------------------+-----------+

which is not listed in pg.ts:

export const pgToZqlTypeMap = Object.freeze({
  // Numeric types
  ...pgToZqlNumericTypeMap,

  // Date/Time types
  'date': 'number',
  'time': 'number',
  'timestamp': 'number',
  'timestamptz': 'number',
  'timestamp with time zone': 'number',
  'timestamp without time zone': 'number',
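The map excerpt covers 'time' but not the verbose spelling 'time without time zone' that the schema check reports. A sketch of the missing alias entries (my assumption, not the shipped fix):

```typescript
// Sketch: Postgres treats `time` and `time without time zone` as the
// same type, so the verbose spelling should map to the same ZQL type.
// This map fragment is illustrative, not the actual pg.ts contents.
const timeTypeAliases = Object.freeze({
  'time': 'number',
  'time without time zone': 'number',
} as const);

console.log(timeTypeAliases['time without time zone']); // "number"
```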

@aboodman (Contributor) commented

I am not sure which tests need to be fixed, so I'm leaving this for one of you two.

@aboodman (Contributor) commented

Also, I don't know if you came out at microseconds since 00:00 or milliseconds, but can you please make sure it matches whatever we do for datetime? I assume we do millis since epoch there to match the JS standard.

@aboodman (Contributor) commented

Here is a PR against one of our sample apps that you can use to replicate the problem: rocicorp/hello-zero-solid#25

@xfh (Contributor, Author) commented Aug 29, 2025

I can reproduce it, thanks for the setup!

Adding time without time zone to the pgToZqlTypeMap does help with the "unsupported" error, but it is not enough. I am afraid I don't understand the architectural design yet.

There is some conversion of ZQL types to Postgres types happening here in sql.ts:

function formatCommonToSingularAndPlural(
  index: number,
  arg: ColumnSqlConvertArg,
) {
  // Ok, so what is with all the `::text` casts
  // before the final cast?
  // This is to force the statement to describe its arguments
  // as being text. Without the text cast the args are described as
  // being bool/json/numeric/whatever and the bindings try to coerce
  // the inputs to those types.
  const valuePlaceholder = arg.plural ? 'value' : `$${index}`;
  switch (arg.type) {
    case 'date':
    case 'timestamp':
    case 'timestamp without time zone':
      return `to_timestamp(${valuePlaceholder}::text::bigint / 1000.0) AT TIME ZONE 'UTC'`;
    case 'timestamptz':
    case 'timestamp with time zone':
      return `to_timestamp(${valuePlaceholder}::text::bigint / 1000.0)`;
    // uuid doesn't support collation, so we compare as text
    case 'uuid':
      return arg.isComparison
        ? `${valuePlaceholder}::text COLLATE "${Z2S_COLLATION}"`
        : `${valuePlaceholder}::text::uuid`;
  }
  if (arg.isEnum) {
    return arg.isComparison
      ? `${valuePlaceholder}::text COLLATE "${Z2S_COLLATION}"`
      : `${valuePlaceholder}::text::"${arg.type}"`;
  }
  if (isPgStringType(arg.type)) {
    // For comparison cast to the general `text` type, not the
    // specific column type (i.e. `arg.type`), because we don't want to
    // force the value being compared to the size/max-size of the column
    // type before comparison.
    return arg.isComparison
      ? `${valuePlaceholder}::text COLLATE "${Z2S_COLLATION}"`
      : `${valuePlaceholder}::text::${arg.type}`;
  }
  if (isPgNumberType(arg.type)) {
    // For comparison cast to `double precision` which uses IEEE 754 (the same
    // representation as JavaScript numbers which will accurately
    // represent any number value from zql) not the specific column type
    // (i.e. `arg.type`), because we don't want to force the value being
    // compared to the range and precision of the column type before comparison.
    return arg.isComparison
      ? `${valuePlaceholder}::text::double precision`
      : `${valuePlaceholder}::text::${arg.type}`;
  }
  return `${valuePlaceholder}::text::${arg.type}`;
}

But there is also the serialization & parsing logic in pg.ts:

export const postgresTypeConfig = (
  jsonAsString?: 'json-as-string' | undefined,
) => ({
  // Type the type IDs as `number` so that Typescript doesn't complain about
  // referencing external types during type inference.
  types: {
    bigint: postgres.BigInt,
    json: {
      to: JSON,
      from: [JSON, JSONB],
      serialize: BigIntJSON.stringify,
      parse: jsonAsString ? (x: string) => x : BigIntJSON.parse,
    },
    // Timestamps are converted to PreciseDate objects.
    timestamp: {
      to: TIMESTAMP,
      from: [TIMESTAMP, TIMESTAMPTZ],
      serialize: serializeTimestamp,
      parse: timestampToFpMillis,
    },
    // Times are converted as strings
    time: {
      to: TIME,
      from: [TIME],
      serialize: (x: unknown) => {
        switch (typeof x) {
          case 'string':
            return x; // Let Postgres parse it
          case 'number':
            return millisecondsToPostgresTime(x);
        }
        throw new Error(`Unsupported type "${typeof x}" for time: ${x}`);
      },
      parse: postgresTimeToMilliseconds,
    },
    // The DATE type is stored directly as the PG normalized date string.
    date: {
      to: DATE,
      from: [DATE],
      serialize: (x: string | Date) =>
        (x instanceof Date ? x : new Date(x)).toISOString(),
      parse: dateToUTCMidnight,
    },
    // Returns a `js` number which can lose precision for large numbers.
    // JS number is 53 bits so this should generally not occur.
    // An API will be provided for users to override this type.
    numeric: {
      to: NUMERIC,
      from: [NUMERIC],
      serialize: (x: number) => String(x), // pg expects a string
      parse: (x: string | number) => Number(x),
    },
  },
});
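The helpers `millisecondsToPostgresTime` and `postgresTimeToMilliseconds` referenced in this config are not shown in the excerpt; a rough sketch of what such a pair could look like (an assumption, not zero's actual implementation):

```typescript
// Sketch: milliseconds since midnight -> "HH:MM:SS.mmm" and back.
// The real zero helpers may handle more formats and precision.
function millisecondsToPostgresTime(ms: number): string {
  const pad = (n: number, width = 2) => n.toString().padStart(width, '0');
  const totalSeconds = Math.floor(ms / 1000);
  const hours = Math.floor(totalSeconds / 3600);
  const minutes = Math.floor((totalSeconds % 3600) / 60);
  const seconds = totalSeconds % 60;
  const millis = Math.floor(ms % 1000);
  return `${pad(hours)}:${pad(minutes)}:${pad(seconds)}.${pad(millis, 3)}`;
}

function postgresTimeToMilliseconds(time: string): number {
  const [hours, minutes, seconds] = time.split(':');
  // Number('00.123') parses the fractional seconds; round to guard
  // against floating-point error in the multiplication.
  return Math.round(
    (Number(hours) * 3600 + Number(minutes) * 60 + Number(seconds)) * 1000,
  );
}

console.log(millisecondsToPostgresTime(30_600_123)); // "08:30:00.123"
console.log(postgresTimeToMilliseconds('08:30:00.123')); // 30600123
```

Rounding in the parse direction matters because the fractional seconds field is not always exactly representable in binary floating point.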

I don't understand why there are two different approaches to converting to native Postgres types. The placeholder approach in sql.ts assumes that the ZQL value can be easily converted to Postgres, mostly through casting. The postgresTypeConfig in pg.ts uses TypeScript to serialize and parse values.

I've tested a time column in the hello-zero project (without custom mutators); there the time values are saved and synced as they should be. With custom mutators it doesn't work, because the value that is passed to Postgres is still milliseconds since midnight. It seems a bit fragile to implement the conversion twice. Wouldn't it be safer to convert the value that is given to the query as an argument using the serializer functions implemented in pg.ts? I am probably not seeing the big picture here, sorry.

@tantaman, maybe you could reconsider using a string representation of TIME? It would remove all the complexity of converting to and from milliseconds since midnight. Note also @aboodman's assumption that it was millis since epoch. Since a time is independent of a date, it cannot be since epoch. In my opinion millis are rather confusing for TIME values. Having a string representation instead of a numeric one could indicate that time and date values are fundamentally different.

In the meantime, I have found a way to add support for time in sql.ts with the current millis since midnight approach:

case 'time':
case 'time without time zone':
  return `(INTERVAL '1 millisecond' * ${valuePlaceholder})::TIME`;

Let me know which direction you want to go and I'll implement tests and a PR.

@aboodman (Contributor) commented Aug 30, 2025 via email
