You can set read_consistency_interval on the connection to control how often reads check for updates from other writers. There are three possible settings for read_consistency_interval when working with LanceTable:
  1. Unset (default): no automatic cross-process refresh checks.
  2. Zero seconds: check for updates on every read (strongest freshness).
  3. Non-zero interval: check for updates after the interval elapses (eventual refresh).
The value you set depends on your application’s consistency needs and performance requirements. For example, a real-time dashboard might require strong consistency, while a batch analytics job might be fine with eventual consistency.
Consistency in Remote Tables

In LanceDB Enterprise (where you use RemoteTable), consistency is configured at deployment time (via a weak_read_consistency_interval_seconds parameter in the cluster setup) and is not an end-user setting in the SDK. You can still call checkout_latest / checkoutLatest for an explicit manual refresh.

Configure Consistency Parameters

To get strong consistency, set the interval to zero seconds: every read checks for updates. For eventual consistency, use a non-zero interval: reads re-check only after that much time has elapsed. With the default (unset) interval, tables do not automatically refresh from other writers; to manually check for updates, call checkout_latest / checkoutLatest.

Handle bad vectors

This section is currently specific to the Python SDK.
In LanceDB Python, you can use the on_bad_vectors parameter to choose how invalid vector values are handled. A vector is considered invalid if:
  1. They are the wrong dimension
  2. They contain NaN values
  3. They are null but are on a non-nullable field
By default, LanceDB will raise an error if it encounters a bad vector. You can also choose one of the following options:
  • drop: Ignore rows with bad vectors
  • fill: Replace bad values (NaNs) or missing values (too few dimensions) with the fill value specified in the fill_value parameter. An input like [1.0, NaN, 3.0] will be replaced with [1.0, 0.0, 3.0] if fill_value=0.0.
  • null: Replace bad vectors with null (only works if the column is nullable). For example, a bad vector [1.0, NaN, 3.0] becomes null. If the vector column is non-nullable, bad vectors will still cause an error.