fix: ensure keys are correct type when serializing from data #630
allenporter merged 2 commits into main
Conversation
Pull request overview
This PR fixes an issue where dictionary keys with non-string types (particularly `int`) were being converted to strings when serialized to JSON and not converted back to their original type during deserialization. This caused duplicate entries in data structures like `home_map_info` that use integer keys. The fix adds type casting for dictionary keys during deserialization.
Key Changes:
- Modified `_convert_to_class_obj` to extract both key and value types from dict type annotations
- Added key type conversion by calling `key_type(k)` during the dictionary comprehension (see the sketch below)
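As a rough illustration (a minimal sketch, not the library's actual implementation; `convert_dict` and the sample data are made up for this example), the deserializer now casts each key back to the annotated key type instead of leaving it as the `str` that JSON produces:

```python
from typing import get_args, get_origin


def convert_dict(class_type, value):
    """Sketch of the new behavior: cast keys back to the annotated key type."""
    if get_origin(class_type) is dict:
        key_type, value_type = get_args(class_type)
        # The real code recurses into RoborockBase._convert_to_class_obj for the
        # values; this sketch only shows the new key_type(k) cast.
        return {key_type(k): value_type(v) for k, v in value.items()}
    return value


# A JSON round trip produces str keys; the dict[int, str] annotation restores ints.
print(convert_dict(dict[int, str], {"1": "one", "2": "two"}))  # {1: 'one', 2: 'two'}
```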
roborock/data/containers.py

```diff
 if get_origin(class_type) is dict:
-    _, value_type = get_args(class_type)  # assume keys are only basic types
-    return {k: RoborockBase._convert_to_class_obj(value_type, v) for k, v in value.items()}
+    key_type, value_type = get_args(class_type)
```
This unpacking will fail with `ValueError: not enough values to unpack` when encountering untyped dict fields (e.g. `device_status: dict | None` at line 242 in this file). For untyped dicts, `get_args(dict)` returns an empty tuple `()`.
Consider adding a check for this case:
```diff
-key_type, value_type = get_args(class_type)
+args = get_args(class_type)
+if not args:
+    # Untyped dict - no conversion needed
+    return value
+key_type, value_type = args
```
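For reference, this is the `typing.get_args` behavior behind the suggestion (standard-library behavior, shown as a small self-contained check):

```python
from typing import get_args

# A parameterized dict annotation carries its key and value types:
assert get_args(dict[int, str]) == (int, str)

# A bare `dict` annotation has no type arguments, so unpacking two values
# from its (empty) result raises "ValueError: not enough values to unpack".
assert get_args(dict) == ()
```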
roborock/data/containers.py (outdated)
```python
key_type, value_type = get_args(class_type)
return {key_type(k): RoborockBase._convert_to_class_obj(value_type, v) for k, v in value.items()}
```
The new key type conversion behavior lacks test coverage. Consider adding a test case in `tests/test_containers.py` that verifies dictionary keys are correctly converted when deserializing from JSON. For example:
```python
import json
from dataclasses import dataclass

# Import path assumed from the file under review (roborock/data/containers.py).
from roborock.data.containers import RoborockBase


@dataclass
class ObjectWithIntKeys(RoborockBase):
    data: dict[int, str] | None = None


def test_dict_int_key_conversion():
    """Test that int keys are preserved when round-tripping through JSON."""
    obj = ObjectWithIntKeys(data={1: "one", 2: "two"})
    json_data = json.dumps(obj.as_dict())
    restored = ObjectWithIntKeys.from_dict(json.loads(json_data))
    assert restored.data == {1: "one", 2: "two"}
    assert all(isinstance(k, int) for k in restored.data.keys())
```

This would help prevent regressions of the duplicate key issue mentioned in the PR description.
So the problem with the unique IDs is that we were getting duplicates in the home data info.

This happened because when we dump to a JSON file, the int key gets saved as a str, and when we load it back we load it as a str. Then we fetch the new home data and add the int keys, so we end up with both a str and an int version of the same data.

Previously we did not do any casting for the keys of dictionaries. Now we do some basic casting to ensure they are the right type.
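A minimal sketch of the root cause using only the standard `json` module (the map names here are made up; the real data is the `home_map_info` structure mentioned above):

```python
import json

# home_map_info-style data keyed by int.
original = {16: "upstairs", 17: "downstairs"}

# JSON object keys must be strings, so the ints come back as "16"/"17".
reloaded = json.loads(json.dumps(original))
print(reloaded)  # {'16': 'upstairs', '17': 'downstairs'}

# Merging freshly fetched data keyed by int now duplicates every entry.
reloaded.update({16: "upstairs", 17: "downstairs"})
print(len(reloaded))  # 4 -- both the str and int versions of the same keys
```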