
Easier access to metadata and other features of properties/actions#258

Merged
rwb27 merged 17 commits into main from descriptor-info
Feb 25, 2026

Conversation

@rwb27
Collaborator

@rwb27 rwb27 commented Feb 3, 2026

Currently, Actions and Properties are implemented through Descriptor classes. These are hard to interact with directly: in the case of a property, we can get and set it nicely by getting/setting the attribute of the Thing, but accessing anything else is inelegant.

This merge request adds some mapping properties to Thing that allow:

self.properties["foo"].observe()
self.actions["increment_foo"].title

and so on. This will clean up LabThings code in a few places, and I think it also has the potential to make downstream code more readable, particularly when describing a UI or doing things like resetting properties to their default values.

There's a fair bit of additional code, but I think the majority of it is quite straightforward.

The most noticeable change in behaviour is that Thing.settings.model is now available, which lets us load and save settings using a BaseModel. This means settings typed as BaseModel subclasses no longer need to accept dictionaries. There are also a few places where mypy is now happy, because we no longer pass descriptor objects around (which confused it).

Design rationale

Class hierarchy

This MR introduces quite a few new lines, and quite a few new classes. It's worth explaining why I think having these new classes is useful: the BaseDescriptorInfo subclasses more or less mirror the BaseDescriptor subclasses - we have BaseProperty and PropertyInfo, Action and ActionInfo, and so on.

I think the rationale for two classes for each affordance is clear: we need a descriptor - that's what makes the properties or actions work as expected on a Thing. The "info" class serves two purposes:

  • It provides access to the features of the descriptor, but in an object that can be passed around without confusing mypy.
  • It provides a convenient way to "bind" to a Thing instance. So, for example, mything.properties["foo"] gives an object that allows us to access MyThing.foo, passing in mything when it's needed. The current code to do this is a bit ugly, and looks like:

        descriptor = mything.__class__.foo
        model = descriptor.get_model(thing=mything)

    (Ignore for the moment that the model doesn't depend on the Thing instance; that's a possible future change.) The snippet above would become:

        model = mything.properties["foo"].model

    I think that's far more readable. Actions like resetting a property to its default value would also become much nicer, because there's a way to refer to a property of a particular instance, not just of the class.
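To illustrate the two-classes-per-affordance idea, here is a minimal, self-contained sketch. The class and method names are simplified stand-ins for illustration, not LabThings' actual API:

```python
class FooProperty:
    """A minimal data descriptor with a default value (illustrative only)."""

    def __init__(self, default):
        self.default = default

    def __set_name__(self, owner, name):
        self.name = name

    def __get__(self, obj, objtype=None):
        if obj is None:
            return self  # class-level access returns the descriptor itself
        return obj.__dict__.get("_" + self.name, self.default)

    def __set__(self, obj, value):
        obj.__dict__["_" + self.name] = value


class PropertyInfo:
    """Bind a descriptor to an instance so both can be passed around together."""

    def __init__(self, descriptor, instance):
        self.descriptor = descriptor
        self.instance = instance

    @property
    def value(self):
        return self.descriptor.__get__(self.instance)

    def reset_to_default(self):
        self.descriptor.__set__(self.instance, self.descriptor.default)


class MyThing:
    foo = FooProperty(default=10)

    @property
    def properties(self):
        # A real implementation would enumerate descriptors; one is enough here.
        return {"foo": PropertyInfo(type(self).foo, self)}


thing = MyThing()
thing.foo = 99
info = thing.properties["foo"]  # a bound info object
info.reset_to_default()         # no need to fetch the class's descriptor
print(thing.foo)                # -> 10
```

The key point is that `PropertyInfo` carries both the descriptor and the instance, so "reset this property on this Thing" becomes a single method call.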

Dictionary-style lookup

It would be possible to implement __getattr__ on the descriptor info collections like Thing.properties such that you could access property foo as Thing.properties.foo. There are two main reasons I didn't do this:

  • It confuses type checkers: without a custom plugin, this will show up as invalid attribute access.
  • It makes no distinction between affordance names and properties of the collection. For example, Thing.settings.model gives us a pydantic model for all the settings. That could be confused with a setting called model.

The main argument in favour of using __getattr__ rather than __getitem__ is that attribute access could be statically checked, i.e. a type checker could verify that the attribute exists. I think this is quite hard, and would need a custom mypy plugin. I also think it should be possible to statically type dictionary-style access, by using a plugin to type the keys as Literal["myprop1", "myprop2", ...]. In either case, I think the hard part will be getting a statically-defined list of the properties/actions/etc. of a Thing subclass; having done a little reading, I think it's more work than we are likely to have capacity for in the near future.

I think the property collection already improves the reliability of things like UI generation: referring to properties by name alone only gets checked after the UI has been sent to the client, when the client tries to access the properties. In contrast, referring to properties as properties["name"] checks that the named property exists before the UI description is sent to the front end, so it's much easier to test and results in more fixable errors.
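The fail-fast difference can be seen with a plain dict standing in for Thing.properties (names here are illustrative):

```python
# A stand-in for Thing.properties: a mapping from property name to metadata.
properties = {"foo": {"title": "Foo level"}}

# Building a UI description through the mapping validates names immediately:
ui = {"slider": properties["foo"]["title"]}

# A misspelled name fails on the server, before anything reaches the client...
try:
    properties["fooo"]
except KeyError:
    print("caught before the UI description was sent")

# ...whereas a bare string like "fooo" embedded in a UI description would
# only fail later, when the client tries to access the property.
```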

rwb27 added a commit that referenced this pull request Feb 9, 2026
I am confused by this - it's possible `dmypy` confused me into changing it, but I have now changed it back and it's passing mypy.

This line should be fixed by #258 in any case.
rwb27 added 10 commits February 9, 2026 22:11
This adds a new class that provides access to the useful methods of a BaseDescriptor, without needing
to retrieve the BaseDescriptor object directly.

This should tidy up code in a few places, where we want to refer to the affordances directly, not just their values.
This commit creates a new class, `BaseDescriptorInfo`. The intention is that using `BaseDescriptorInfo` will be more convenient than passing around descriptors. It may also be bound to an object, which should be significantly more convenient when both a Thing instance and a descriptor need to be referenced.

An important side-effect that I'll note here is that `BaseDescriptor` is now a *Data Descriptor* as it implements a `__set__` method. This is arguably the way it should always have been, and simply means that `BaseDescriptor` instances won't get overwritten by the instance dictionary.

Making `BaseDescriptor` instances read-only data descriptors by default means I can get rid of a dummy `__set__` method from `ThingSlot`.
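The precedence rule behind this change can be demonstrated in a few lines of plain Python: a data descriptor (one that defines `__set__`) always wins over the instance `__dict__`, while a non-data descriptor can be shadowed.

```python
class NonData:
    """A non-data descriptor: defines only __get__."""

    def __get__(self, obj, objtype=None):
        return "from descriptor"


class ReadOnlyData:
    """A read-only data descriptor: __set__ makes it take precedence."""

    def __get__(self, obj, objtype=None):
        return "from descriptor"

    def __set__(self, obj, value):
        raise AttributeError("read-only")


class Demo:
    a = NonData()
    b = ReadOnlyData()


d = Demo()
d.__dict__["a"] = "shadowed"  # instance dict wins over a non-data descriptor
print(d.a)                    # -> "shadowed"
d.__dict__["b"] = "shadowed"  # ignored on lookup: the data descriptor wins
print(d.b)                    # -> "from descriptor"
```

This is why implementing `__set__` on `BaseDescriptor` means its instances "won't get overwritten by the instance dictionary".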
This introduces the `DescriptorInfoCollection` class, and a descriptor to return it.

The `DescriptorInfoCollection` is a mapping that returns `DescriptorInfo` objects. This makes it convenient to obtain both bound and unbound `DescriptorInfo` objects.
This includes tests for `PropertyInfo` and a fix to handle access to missing fields correctly.

`.Thing.properties[<name>]` now returns a `PropertyInfo` object allowing access to the property's metadata.

`.Thing.settings` does the same for settings, and also builds a model for loading/saving the settings. It does not, yet, load/save them, and needs test code.
This now means that a test for `isinstance(obj, self.value_type)` works as expected.

I've added the descriptor name to a couple of error messages, for clarity, and improved docstrings to satisfy flake8.
This commit makes use of the new `Thing.settings` to generate a model for the settings file, and load/save the settings using a model. This has two advantages:
* Settings that are typed as a model are now correctly loaded as a model.
* Settings with constraints are validated when the settings file is loaded.

The first point should get rid of some unnecessary code downstream.

The second point is related to a change in behaviour: broken or invalid settings files, including those that have extra keys in them, now cause an error and will stop the server from loading.
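Loading settings through a generated pydantic model could look roughly like the sketch below. The model and field names are illustrative, not LabThings' actual generated settings model:

```python
from pydantic import BaseModel, ConfigDict, Field, ValidationError


class ThingSettings(BaseModel):
    """Illustrative settings model, standing in for the generated one."""

    model_config = ConfigDict(extra="forbid")   # extra keys are an error
    exposure: float = Field(default=0.1, ge=0)  # a constrained setting


# A valid settings file loads as a typed, validated model:
settings = ThingSettings.model_validate({"exposure": 0.5})

# A constraint violation or an unknown key raises ValidationError, which
# (as this PR first stood) would stop the server from loading:
try:
    ThingSettings.model_validate({"exposure": -1.0, "stale_key": 3})
except ValidationError as exc:
    print(f"settings file invalid: {len(exc.errors())} problem(s)")
```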
This was accidentally defined on PropertyInfo, when it should have been in SettingInfo.
This is added, largely for completeness so it's consistent between actions, properties and settings.
This fixes a few minor problems that caused tests to fail:

1. DescriptorInfo objects don't get cached, so we need to test for equality rather than identity (== vs is).
2. There were some bad default values for constrained settings. This caused an error when settings were written to disk. Possibly we should enable "validate_default" for these.
3. There was a bad name for a private property, used to test an error condition.

As part of this I've implemented `__eq__` and `__repr__` on BaseDescriptorInfo. I think that is likely to prove useful in other contexts. I added a test for `__repr__`; `__eq__` is tested in test_descriptorinfocollection().
@barecheck

barecheck bot commented Feb 10, 2026

Barecheck - Code coverage report

Total: 96.52%

Your code coverage diff: 0.05% ▴

Uncovered files and lines
File                                       Lines
src/labthings_fastapi/actions.py           511-512, 525, 550-551, 724, 868-869, 872, 875, 923
src/labthings_fastapi/base_descriptor.py   287, 301-302, 636
src/labthings_fastapi/properties.py        693, 697, 720-723, 795, 814, 1023
src/labthings_fastapi/thing.py             222, 230, 263, 315-317, 349, 437

@rwb27 rwb27 requested a review from julianstirling February 10, 2026 13:19
Contributor

@julianstirling julianstirling left a comment


Looks nice.

My only worry is that hard erroring on validation issues will cause servers not to boot in a way that is difficult for the user to fix. This is inconsistent with the case where the JSON isn't readable at all, where we warn and continue.

@rwb27
Collaborator Author

rwb27 commented Feb 23, 2026

Looks nice.

My only worry is that hard erroring on validation issues will cause servers not to boot in a way that is difficult for the user to fix. This is inconsistent with the case where the JSON isn't readable at all, where we warn and continue.

Personally I'd quite like to follow the rationale we've applied elsewhere, and fail to start if there's a problem: particularly given that a damaged or out of date settings file is likely to be summarily overwritten, I think it probably is worth an error. However, it may be that the right thing to do for now is to ignore it with a warning, and implement a future PR that errors, offering a way to delete the offending config file from the fallback server.

@julianstirling
Contributor

Personally I'd quite like to follow the rationale we've applied elsewhere, and fail to start if there's a problem: particularly given that a damaged or out of date settings file is likely to be summarily overwritten, I think it probably is worth an error. However, it may be that the right thing to do for now is to ignore it with a warning, and implement a future PR that errors, offering a way to delete the offending config file from the fallback server.

I am generally in favour of clearly defined errors as downstream can handle them if needed. In the case of lifecycle things like settings, there is no place I can catch and handle that exception if I want to.

Consider a downstream application that changes the minimum value for a setting from 10 to 20 and pushes an update. If someone has 10 set, the entire server fails to start. The application may anticipate this, but has no way to capture and handle the error, because it is abstracted away by the framework.

I would say if we want to error about settings we need to do it in a way that the application has functionality to handle it.

@rwb27
Collaborator Author

rwb27 commented Feb 24, 2026

I would say if we want to error about settings we need to do it in a way that the application has functionality to handle it.

I think that's appropriate. I was hoping we could move as much of the settings-related logic as possible into the new Thing.settings descriptor, but I think it might make a lot of sense to keep something like _load_settings as a method of Thing. That way, if someone wants to override the default logic, it's possible.

I'm aware the problem of versioned config files with upgrade paths has been addressed by a bunch of libraries, so perhaps with a little thought it would even be possible to put that in place. That probably requires an opt-in, though, as at a minimum it would need to store schemas for all the different versions, and I can't see a neat way to do that without a chunk of extra effort by the Thing.

My preferred default would probably be that a broken settings file results in the fallback server, but with a button to delete/rename the offending file and restart. That way you wouldn't have to SSH in and fix it, and it's obvious to the user that they've now lost their settings. If it's being really helpful, it could even offer to download the offending file and/or remove just the broken key.

Obviously that's logic that, for the OFM, belongs in our mythical config server.

@julianstirling
Contributor

When I say handle it, I mean that as a person writing a downstream application I need a clear place to try/except an error during my start-up, because LabThings shouldn't have the power to decide whether a downstream application fails on a settings error without giving the downstream application a place to handle it.

One option would be to capture any errors raised while setting settings in a list, carry on setting all the settings where possible, and then run a method built into Thing that is something like:

def _handle_load_settings_exceptions(self, exceptions: Mapping[str, BaseException]) -> None:
    """Handle any errors raised during loading settings.

    If there are exceptions, the first exception is raised.

    :param exceptions: A mapping from each setting that failed to load from the
        settings file to the exception that was raised.
    """
    if not exceptions:
        return
    # Raise from the first error found
    err = next(iter(exceptions.values()))
    if len(exceptions) == 1:
        setting = next(iter(exceptions))
        raise lt.exceptions.LoadingSettingsError(f"Could not load the setting: {setting}") from err

    settings = ", ".join(exceptions)
    raise lt.exceptions.LoadingSettingsError(f"Could not load the following settings: {settings}") from err

This way all settings can be loaded first, and then the application developer has a clear way to handle the errors per setting if they choose to.

This reverts to the previous try-and-ignore behaviour. I'll make an issue to revisit the proposed change in another PR.
@rwb27
Collaborator Author

rwb27 commented Feb 24, 2026

When I say handle it, I mean that as a person writing a downstream application I need a clear place to try/except an error during my start-up, because LabThings shouldn't have the power to decide whether a downstream application fails on a settings error without giving the downstream application a place to handle it.

One option would be to capture any errors raised while setting settings in a list, carry on setting all the settings where possible, and then run a method built into Thing that is something like:

def _handle_load_settings_exceptions(self, exceptions: Mapping[str, BaseException]) -> None:
    """Handle any errors raised during loading settings.

    If there are exceptions, the first exception is raised.

    :param exceptions: A mapping from each setting that failed to load from the
        settings file to the exception that was raised.
    """
    if not exceptions:
        return
    # Raise from the first error found
    err = next(iter(exceptions.values()))
    if len(exceptions) == 1:
        setting = next(iter(exceptions))
        raise lt.exceptions.LoadingSettingsError(f"Could not load the setting: {setting}") from err

    settings = ", ".join(exceptions)
    raise lt.exceptions.LoadingSettingsError(f"Could not load the following settings: {settings}") from err

This way all settings can be loaded first, and then the application developer has a clear way to handle the errors per setting if they choose to.

I think it's worth making a distinction between a downstream application and a Thing subclass. A downstream application should be able to catch errors that happen on server start-up. I think if a Thing is misconfigured it's not unreasonable to say that the application that's starting LabThings would need to handle that, rather than the Thing.

Whether a broken settings file counts as a misconfigured Thing can be debated, but having just looked at the code, I think a solution that uses the existing load_settings method would be fine, and could certainly achieve what you suggest. The minimal fix in a downstream Thing subclass might be:

class ThingThatStartsMoreReliably(lt.Thing):
    def load_settings(self):
        try:
            super().load_settings()
        except Exception:
            pass

We could also add code that would load settings one by one, to allow a partially-broken settings.json to work. I am not sure how helpful that would be in practice.

My feeling is that, if downstream code wants to handle exceptions, it probably keeps life clearer if it does so with a try: block. That keeps the namespace cleaner, avoids the need to find the exception handling function, and makes it easier for linters to shout about inappropriately broad catch-all handlers, like the one in my example...

@rwb27 rwb27 requested a review from julianstirling February 24, 2026 12:02
@rwb27
Collaborator Author

rwb27 commented Feb 24, 2026

It's worth noting here that, while I've reverted the exception to a warning, loading the settings with a model does mean that it's all-or-nothing, so one invalid setting will cause the whole file to be ignored.

@julianstirling
Contributor

It's worth noting here that, while I've reverted the exception to a warning, loading the settings with a model does mean that it's all-or-nothing, so one invalid setting will cause the whole file to be ignored.

Can we not do it as a model per setting?

What happens in the case where we add a new setting? Will that fail or does that have default set so it can be missing in the config file?

@julianstirling
Contributor

I think it's worth making a distinction between a downstream application and a Thing subclass. A downstream application should be able to catch errors that happen on server start-up. I think if a Thing is misconfigured it's not unreasonable to say that the application that's starting LabThings would need to handle that, rather than the Thing.

But how, in that case, do I choose to accept that the settings revert to default and carry on?

We could also add code that would load settings one by one, to allow a partially-broken settings.json to work. I am not sure how helpful that would be in practice.

For development friendliness I think this is essential. As we develop we make tweaks. If I lose all of my settings because one changes, that is frustrating.

Think about being in a lab and developing something: you add a bunch of settings because you think you need them. You delete one setting because you no longer need it, and everything falls over and you lose the value of every other setting.

@rwb27
Collaborator Author

rwb27 commented Feb 25, 2026

I can see two important use cases here, which I think want different behaviours. During development, I agree we probably want it to be pretty permissive: make a best guess at loading the settings, resetting anything that is not valid. In production, I feel like it would be a good idea to be quite strict - if the settings file doesn't match the current version of the Thing, that's an error we ought to do something about; otherwise I can see people getting confused about why their settings got lost or broken.

We do already have a model-per-setting, and I then construct a model for all the settings. This currently doesn't cause an error if settings are missing, only if they don't match the model or if extra settings are present. Obviously the latter would be easy to change with model config.

This really is a discussion for another PR, so I'll revert to doing it setting-by-setting for now. I think I'd still be keen to implement a stricter mode for production, because I think it will save headaches in the future if we make sure settings are either valid or have been properly migrated. Perhaps it's unwise to try to increase the strictness before we've got e.g. versioning and auto migration of settings files, and that's feeling like a bigger change.

I might argue in the short term to make invalid or broken settings files an error in the logs, just to up its prominence - at some point we probably also want to pop errors up in the UI without relying on people checking them - that I suspect is already on your radar.
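The setting-by-setting fallback described above could look roughly like this minimal sketch. It is a plain-Python stand-in (with an illustrative validator in place of per-setting pydantic models), not the actual LabThings implementation:

```python
import logging

logger = logging.getLogger(__name__)


def load_settings_individually(saved, validators):
    """Validate each saved setting on its own; warn about and skip the rest."""
    loaded = {}
    for name, raw in saved.items():
        if name not in validators:
            logger.warning("Ignoring unknown setting %r", name)
            continue
        try:
            loaded[name] = validators[name](raw)
        except (TypeError, ValueError):
            logger.warning("Ignoring invalid value for setting %r", name)
    return loaded


def non_negative_float(value):
    """Illustrative stand-in for a constrained per-setting validator."""
    v = float(value)
    if v < 0:
        raise ValueError("must be >= 0")
    return v


# One broken setting no longer discards the whole file:
result = load_settings_individually(
    {"exposure": -1, "gain": "2"},
    {"exposure": non_negative_float, "gain": int},
)
print(result)  # -> {'gain': 2}
```

The invalid `exposure` is logged and skipped, while `gain` still loads; upgrading those warnings to errors would restore the strict behaviour discussed above.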

@julianstirling
Contributor

I might argue in the short term to make invalid or broken settings files an error in the logs, just to up its prominence - at some point we probably also want to pop errors up in the UI without relying on people checking them - that I suspect is already on your radar.

Error in the logs sounds like a good compromise for now.

I agree with having a way to enable always failing when the server starts up, as there is a big difference between me losing my Lens Shading table every time I switch branch, and a lab technician not knowing that one of their standard settings was reset.

Ensuring logs aren't silently missed is on my radar, but I don't think we have an issue yet. I'll make one. Thanks!

This will load all the keys that are valid and log errors for the rest.
I now log a warning per-setting if they are not valid. This gets rid of a bunch of error combining logic.

We may want to revert to that if we upgrade these to errors - if so, you can see the previous commit.
This is caught internally - but it seemed easier to add an explanation in the docstring than to exempt it and document that. This isn't a user-facing function anyway.
@julianstirling
Contributor

I am happy with the changes in the last 3 commits. There are failing tests, though.

This corrects a typo from two commits ago.
@julianstirling
Contributor

Nice to see a test doing its job! Thanks for the fix.

@rwb27 rwb27 merged commit 2e0eb20 into main Feb 25, 2026
14 checks passed
@rwb27 rwb27 deleted the descriptor-info branch February 25, 2026 16:40