A set of Roslyn Analyzers aimed at enforcing design best practices and code quality rules.
This section describes the rules included in this package.
Every rule is accompanied by the following information:
- Category → identifies the area of interest of the rule; one of: Design / Naming / Style / Usage / Performance / Security
- Severity → states the default severity level of the rule. The severity level can be changed by editing the .editorconfig file used by the project/solution. Possible values are enumerated by the DiagnosticSeverity enum
- Description, motivations and fixes → a detailed explanation of the detected issue, and a brief description of how to change your code to solve it.
- See also → a list of similar/related rules, or related knowledge base
| Id | Category | Description | Default severity | Is enabled | Code fix |
|---|---|---|---|---|---|
| DSA001 | Design | WebApi controller methods should not contain data-manipulation business logics through a LINQ query expression. | ⚠ Warning | ✅ | ❌ |
| DSA002 | Design | WebApi controller methods should not contain data-manipulation business logics through a LINQ fluent query. | ⚠ Warning | ✅ | ❌ |
| DSA003 | Code Smells | Use String.IsNullOrWhiteSpace instead of String.IsNullOrEmpty | ⚠ Warning | ✅ | ❌ |
| DSA004 | Code Smells | Use DateTime.UtcNow instead of DateTime.Now | ⚠ Warning | ✅ | ❌ |
| DSA005 | Code Smells | Potential non-deterministic point-in-time execution | ⛔ Error | ✅ | ❌ |
| DSA006 | Code Smells | General exceptions should not be thrown by user code | ⛔ Error | ✅ | ❌ |
| DSA007 | Code Smells | When initializing a lazy field, use a robust locking pattern, i.e. the "if-lock-if" (aka "double checked locking") | ⚠ Warning | ✅ | ❌ |
| DSA008 | Bug | The Required Attribute has no impact on a not-nullable DateTime | ⛔ Error | ✅ | ❌ |
| DSA009 | Bug | The Required Attribute has no impact on a not-nullable DateTimeOffset | ⛔ Error | ✅ | ❌ |
| DSA011 | Design | Avoid lazily initialized, self-contained, static singleton properties | ⚠ Warning | ✅ | ❌ |
| DSA012 | Design | Avoid the "if not exists, then insert" check-then-act antipattern on database types (TOCTOU) | ⚠ Warning | ✅ | ❌ |
| DSA013 | Security | Minimal API endpoints should have an explicit authorization configuration | ⚠ Warning | ✅ | ❌ |
| DSA014 | Security | Minimal API endpoints on route groups should have an explicit authorization configuration | ⚠ Warning | ✅ | ❌ |
| DSA015 | Security | Minimal API endpoints on parameterized route builders should have an explicit authorization configuration | ⚠ Warning | ✅ | ❌ |
| DSA016 | Code Smells | Avoid repeated invocation of the same enumeration method with identical arguments | ⚠ Warning | ✅ | ❌ |
| DSA017 | Design | Use the collection's atomic operation instead of the check-then-act pattern | ⚠ Warning | ✅ | ❌ |
| DSA018 | Design | Protect the check-then-act pattern with a lock or use a collection with built-in duplicate handling | ⚠ Warning | ✅ | ❌ |
| DSA019 | Code Smells | Avoid repeated deeply nested member access chains | ⚠ Warning | ✅ | ✅ |
Don't use Entity Framework to launch LINQ queries in a WebApi controller.
- Category: Design
- Severity: Warning ⚠
- Related rules: DSA002
WebApi controller methods should not contain data-manipulation business logics through a LINQ query expression.
In the analyzed code, a WebApi controller method uses an Entity Framework DbContext to manipulate data directly through a LINQ query expression.
WebApi controllers should not contain data-manipulation business logic.
This is a typical violation of the Single Responsibility principle of SOLID, because the controller is doing too many things outside its own purpose.
Security-wise, mixing data access logic directly into the presentation layer weakens compartmentalization and increases the attack surface, making it harder to apply consistent authorization, input validation, and audit logging at the data access boundary.
- MITRE, CWE-653: Improper Isolation or Compartmentalization
- MITRE, CWE-1057: Data Access Operations Outside of Expected Data Manager Component
To fix the problem, modify the code to rely on the Indirection pattern and maximize Low Coupling, following the GRASP principles. Move the data-manipulation business logic into a more appropriate class or, even better, an injected service.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA001: WebApi controller methods should not contain data-manipulation business logics through a LINQ query expression.
dotnet_diagnostic.DSA001.severity = error
```
```csharp
public class MyEntitiesController : ControllerBase
{
    protected MyDbContext DbContext { get; }

    public MyEntitiesController(MyDbContext dbContext)
    {
        DbContext = dbContext;
    }

    [HttpGet]
    public IEnumerable<MyEntity> GetAll_NotOk()
    {
        // this WILL trigger the rule
        var query = from entities in DbContext.MyEntities where entities.Id > 0 select entities;
        return query.ToList();
    }

    [HttpPost]
    public IEnumerable<long> GetAll_Ok()
    {
        // this WILL NOT trigger the rule
        var query = DbContext.MyEntities.Where(entities => entities.Id > 0).Select(entities => entities.Id);
        return query.ToList();
    }
}
```

Don't use an Entity Framework DbSet to launch queries in a WebApi controller.
- Category: Design
- Severity: Warning ⚠
- Related rules: DSA001
WebApi controller methods should not contain data-manipulation business logics through a LINQ fluent query.
In the analyzed code, a WebApi controller method uses an Entity Framework DbSet to manipulate data directly through a LINQ fluent query.
WebApi controllers should not contain data-manipulation business logic.
This is a typical violation of the Single Responsibility principle of SOLID, because the controller is doing too many things outside its own purpose.
Security-wise, mixing data access logic directly into the presentation layer weakens compartmentalization and increases the attack surface, making it harder to apply consistent authorization, input validation, and audit logging at the data access boundary.
- MITRE, CWE-653: Improper Isolation or Compartmentalization
- MITRE, CWE-1057: Data Access Operations Outside of Expected Data Manager Component
To fix the problem, modify the code to rely on the Indirection pattern and maximize Low Coupling, following the GRASP principles. Move the data-manipulation business logic into a more appropriate class or, even better, an injected service.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA002: WebApi controller methods should not contain data-manipulation business logics through a LINQ fluent query.
dotnet_diagnostic.DSA002.severity = error
```
```csharp
public class MyEntitiesController : Microsoft.AspNetCore.Mvc.ControllerBase
{
    protected MyDbContext DbContext { get; }

    public MyEntitiesController(MyDbContext dbContext)
    {
        this.DbContext = dbContext;
    }

    [HttpGet]
    public IEnumerable<MyEntity> GetAll0()
    {
        // this WILL NOT trigger the rule
        var query = from entities in DbContext.MyEntities where entities.Id > 0 select entities;
        return query.ToList();
    }

    [HttpPost]
    public IEnumerable<long> GetAll1()
    {
        // this WILL trigger the rule
        var query = DbContext.MyEntities.Where(entities => entities.Id > 0).Select(entities => entities.Id);
        return query.ToList();
    }
}
```

Use IsNullOrWhiteSpace instead of String.IsNullOrEmpty.
- Category: Code smells
- Severity: Warning ⚠
Usually, business logic distinguishes between "string with content" and "string NULL or without meaningful content".
Thus, statistically speaking, almost every call to string.IsNullOrEmpty could or should be replaced by a call to string.IsNullOrWhiteSpace, because in the large majority of cases a string composed only of spaces, tabs, and newline characters is not considered valid: it doesn't have "meaningful content".
In most cases, string.IsNullOrEmpty is used by mistake, or was written when string.IsNullOrWhiteSpace was not yet available.
Security-wise, using IsNullOrEmpty instead of IsNullOrWhiteSpace can allow whitespace-only strings to bypass input validation, potentially leading to injection attacks, data corruption, or logic bypass when the application treats whitespace-only input as valid content.
Don't use string.IsNullOrEmpty. Use string.IsNullOrWhiteSpace instead.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA003: Use String.IsNullOrWhiteSpace instead of String.IsNullOrEmpty
dotnet_diagnostic.DSA003.severity = error
```
```csharp
public class MyClass
{
    public bool IsOk(string s)
    {
        // this WILL NOT trigger the rule
        return string.IsNullOrWhiteSpace(s);
    }

    public bool IsNotOk(string s)
    {
        // this WILL trigger the rule
        return string.IsNullOrEmpty(s);
    }
}
```

Use DateTime.UtcNow instead of DateTime.Now.
- Category: Code smells
- Severity: Warning ⚠
Using DateTime.Now in business logic potentially leads to many different problems:
- Incoherence between nodes or processes running in different timezones (even within the same country, e.g. the USA, Russia, or China)
- Unexpected behaviours around daylight-saving time changes
- Conversion problems and loss of timezone info when saving/loading data to/from a datastore
Security-wise, this is correlated to the CWE category “7PK” (CWE-361)
Cit:
"This category represents one of the phyla in the Seven Pernicious Kingdoms vulnerability classification. It includes weaknesses related to the improper management of time and state in an environment
that supports simultaneous or near-simultaneous computation by multiple systems, processes, or threads. According to the authors of the Seven Pernicious Kingdoms, "Distributed computation is about
time and state. That is, in order for more than one component to communicate, state must be shared, and all that takes time. Most programmers anthropomorphize their work. They think about one thread
of control carrying out the entire program in the same way they would if they had to do the job themselves. Modern computers, however, switch between tasks very quickly, and in multi-core, multi-CPU,
or distributed systems, two events may take place at exactly the same time. Defects rush to fill the gap between the programmer's model of how a program executes and what happens in reality. These
defects are related to unexpected interactions between threads, processes, time, and information. These interactions happen through shared state: semaphores, variables, the file system, and,
basically, anything that can store information."
Don't use DateTime.Now. Use DateTime.UtcNow instead.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA004: Use DateTime.UtcNow instead of DateTime.Now
dotnet_diagnostic.DSA004.severity = error
```
```csharp
public class MyClass
{
    public DateTime IsOk()
    {
        // this WILL NOT trigger the rule
        return DateTime.UtcNow;
    }

    public DateTime IsNotOk()
    {
        // this WILL trigger the rule
        return DateTime.Now;
    }
}
```

Potential non-deterministic point-in-time execution due to multiple usages of DateTime.UtcNow or DateTime.Now in the same method.
- Category: Code smells
- Severity: Error ⛔
An execution flow must always be as deterministic as possible. This means that all decisions inside a scope or algorithm must be performed on a "stable" and immutable set of parameters/conditions. When dealing with dates and times, always ensure that the point-in-time reference is fixed, otherwise the algorithm works on a "sliding window", leading to unpredictable results. This is particularly impactful in:
- datasource-dependent flows
- slow-running algorithms
- flows spanning daylight-saving time changes
Security-wise, this is correlated to the CWE category “7PK” (CWE-361)
Cit:
"This category represents one of the phyla in the Seven Pernicious Kingdoms vulnerability classification. It includes weaknesses related to the improper management of time and state in an environment
that supports simultaneous or near-simultaneous computation by multiple systems, processes, or threads. According to the authors of the Seven Pernicious Kingdoms, "Distributed computation is about
time and state. That is, in order for more than one component to communicate, state must be shared, and all that takes time. Most programmers anthropomorphize their work. They think about one thread
of control carrying out the entire program in the same way they would if they had to do the job themselves. Modern computers, however, switch between tasks very quickly, and in multi-core, multi-CPU,
or distributed systems, two events may take place at exactly the same time. Defects rush to fill the gap between the programmer's model of how a program executes and what happens in reality. These
defects are related to unexpected interactions between threads, processes, time, and information. These interactions happen through shared state: semaphores, variables, the file system, and,
basically, anything that can store information."
In order to avoid problems, apply one of these, depending on the situation:
- When measuring elapsed time, use `Stopwatch.StartNew()` combined with `Stopwatch.Elapsed`.
- When NOT measuring elapsed time, set a `var now = DateTime.UtcNow` variable at the top of the method, or at the beginning of an execution flow/algorithm, and reuse that variable in all places instead of repeated `DateTime.UtcNow`/`DateTime.Now` calls.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA005: Potential non-deterministic point-in-time execution
dotnet_diagnostic.DSA005.severity = error
```
```csharp
public class MyClass
{
    public void IsOk()
    {
        var now = DateTime.UtcNow; // fixed point-in-time reference
        DoSomething(now); // this WILL NOT trigger the rule
        for (int i = 0; i < 10; i++)
        {
            DoOtherThings(now); // this WILL NOT trigger the rule
        }
    }

    public void IsNotOk()
    {
        DoSomething(DateTime.UtcNow); // this WILL trigger the rule
        for (int i = 0; i < 10; i++)
        {
            DoOtherThings(DateTime.UtcNow); // this WILL trigger the rule
        }
    }
}
```

General exceptions should not be thrown by user code.
- Category: Code smells
- Severity: Error ⛔
General exceptions should never be thrown, because throwing them prevents calling methods from discriminating between system-generated exceptions and application-generated errors.
This is a code smell, and it can lead to stability and security concerns.
General exceptions that trigger this rule are:
- `Exception`
- `SystemException`
- `ApplicationException`
- `IndexOutOfRangeException`
- `NullReferenceException`
- `OutOfMemoryException`
- `ExecutionEngineException`

Security-wise, this is correlated to MITRE, CWE-397 - Declaration of Throws for Generic Exception.
Use scenario-specific exceptions, e.g. ArgumentException, ArgumentNullException, InvalidOperationException, etc.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA006: General exceptions should never be thrown.
dotnet_diagnostic.DSA006.severity = error
```
```csharp
public class MyClass
{
    public void IsOk(int id)
    {
        if (id < 0) // this is OK, and will NOT be matched by the rule
            throw new ArgumentException("Invalid id", nameof(id));
    }

    public void IsNotOk(int id)
    {
        if (id < 0) // this is NOT OK, and will be matched by the rule
            throw new SystemException("Invalid id");
    }
}
```

When initializing a lazy field (and in particular fields containing the instance of a singleton object), use a robust locking pattern, i.e. the "if-lock-if" (aka "double checked locking").
- Category: Code smells
- Severity: Warning ⚠
Cit. Wikipedia:
The "double-checked locking" (also known as "double-checked locking optimization") is a software design pattern used to reduce the overhead of acquiring a lock by testing the locking criterion (the "
lock hint") before acquiring the lock.
Locking occurs only if the locking criterion check indicates that locking is required.
The pattern is typically used to reduce locking overhead when implementing "lazy initialization" in a multi-threaded environment, especially as part of the Singleton pattern.
Lazy initialization avoids initializing a value until the first time it is accessed.
- Wikipedia: Double-checked_locking
- Microsoft Documentation: Managed Threading Best Practices
- MITRE, CWE-667: Improper Locking (4.16)
- MITRE, CWE-413: Improper Resource Locking (4.16)
Instead of just writing something like this....
```csharp
public class MyClass
{
    private string _theField;
    private readonly object _theLock = new object();

    public void IsOk(int id)
    {
        lock (_theLock) // ❌ too early, very wasteful, poor performance
        {
            if (_theField == null)
            {
                _theField = ComputeExpensiveValue(id);
            }
        }
    }
}
```

... or something like this...
```csharp
public class MyClass
{
    private string _theField;
    private readonly object _theLock = new object();

    public void IsOk(int id)
    {
        if (_theField == null)
        {
            lock (_theLock) // ❌ too late, and thread-unsafe
            {
                _theField = ComputeExpensiveValue(id); // ⚠ this could be executed multiple times!
            }
        }
    }
}
```

... use the following if-lock-if pattern:
```csharp
public class MyClass
{
    private string _theField;
    private readonly object _theLock = new object();

    public void IsOk(int id)
    {
        if (_theField == null) // ✅ efficient and fast pre-check (few nanoseconds)
        {
            lock (_theLock) // ✅ protects against race conditions and multithreading
            {
                if (_theField == null) // ✅ only if really needed, safely initialize
                {
                    _theField = ComputeExpensiveValue(id); // ✅ guaranteed to be executed only once
                }
            }
        }
    }
}
```

In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA007: Use the double-checked lazy initialization pattern
dotnet_diagnostic.DSA007.severity = warning
```
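As an alternative to writing the if-lock-if pattern by hand, `System.Lazy<T>` encapsulates the same initialize-once guarantee. A minimal sketch, reusing the (hypothetical) names from the examples above:

```csharp
public class MyClass
{
    // Lazy<T> uses LazyThreadSafetyMode.ExecutionAndPublication by default,
    // which guarantees the factory runs at most once, like the if-lock-if pattern.
    private readonly Lazy<string> _theField;

    public MyClass(int id)
    {
        _theField = new Lazy<string>(() => ComputeExpensiveValue(id));
    }

    // First access runs the factory; later accesses return the cached value.
    public string TheField => _theField.Value;

    private static string ComputeExpensiveValue(int id) => id.ToString();
}
```

This also keeps the locking details out of your code entirely.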
The Required Attribute has no impact on a not-nullable DateTime property.
- Category: Bug
- Severity: ⛔ Error
It is a common misunderstanding that the Required attribute can somehow validate a non-nullable DateTime property.
In reality, non-nullable, non-string types are ignored by the Required attribute, so it makes no sense to use it in this context.
Remove the Required attribute, or make the property nullable. If a "valid date"-like validation is needed, use the Range attribute.
- DSA009 - The Required Attribute has no impact on a not-nullable DateTimeOffset
- Required Attribute
- Range Attribute
- DateTime
- MITRE, CWE-20: Improper Input Validation
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA008: The Required Attribute has no impact on a not-nullable DateTime
dotnet_diagnostic.DSA008.severity = warning
```
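A sketch of the flagged and compliant forms, assuming the `System.ComponentModel.DataAnnotations` attributes (the property names are illustrative):

```csharp
public class MyModel
{
    [Required] // this WILL trigger the rule: a non-nullable DateTime can never be null
    public DateTime CreatedAt { get; set; }

    [Required] // this WILL NOT trigger the rule: nullable, so Required is meaningful
    public DateTime? UpdatedAt { get; set; }

    // Alternative: validate the actual value range instead
    [Range(typeof(DateTime), "2000-01-01", "2100-01-01")]
    public DateTime ExpiresAt { get; set; }
}
```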
The Required Attribute has no impact on a not-nullable DateTimeOffset property.
- Category: Bug
- Severity: ⛔ Error
It is a common misunderstanding that the Required attribute can somehow validate a non-nullable DateTimeOffset property.
In reality, non-nullable, non-string types are ignored by the Required attribute, so it makes no sense to use it in this context.
Remove the Required attribute, or make the property nullable. If a "valid date"-like validation is needed, use the Range attribute.
- DSA008 - The Required Attribute has no impact on a not-nullable DateTime
- Required Attribute
- Range Attribute
- DateTimeOffset
- MITRE, CWE-20: Improper Input Validation
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA009: The Required Attribute has no impact on a not-nullable DateTimeOffset
dotnet_diagnostic.DSA009.severity = warning
```
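A sketch analogous to DSA008, for DateTimeOffset (property names are illustrative):

```csharp
public class MyModel
{
    [Required] // this WILL trigger the rule: non-nullable DateTimeOffset can never be null
    public DateTimeOffset CreatedAt { get; set; }

    [Required] // this WILL NOT trigger the rule: nullable, so Required is meaningful
    public DateTimeOffset? UpdatedAt { get; set; }
}
```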
Avoid lazily initialized, self-contained, static singleton properties
- Category: Design
- Severity: ⚠ Warning
The Singleton pattern is the subject of much controversy. Technically there is nothing wrong with it, but its usefulness and robustness are very implementation-dependent, and in some cases it is seen as an anti-pattern.
A good strategy for making use of this pattern is to use an IoC/DI framework that ensures proper thread-safety, dependency management, and resource allocation/deallocation.
```csharp
// Good/Safe implementation, based on Microsoft IoC/DI
services.AddSingleton<IMyService, MyService>();
```

A simpler but very problematic strategy relies on directly exposing a public static property in the singleton class, like this:
```csharp
public class MyClass
{
    private static MyClass _instance;

    // Bad practice:
    // - fragile, because it lacks proper locking
    // - badly designed, because it forces the caller to "know" the implementor class
    public static MyClass Instance => _instance ??= new MyClass();
}
```

Self-contained static singleton properties, particularly when they involve lazy initialization within the property itself, can lead to several problems, especially in multithreaded environments. Due to their static nature, they are also difficult to test, and can produce unpredictable results if the testing framework (or the tests) doesn't clean up the static instances between sessions. They also force the caller to know the implementor class, instead of just an abstraction (i.e. an interface implemented by the singleton class).
This analyzer aims to find occurrences of this kind of ill-conceived implementations.
The following patterns are matched:
```csharp
public class MyClass
{
    private static MyClass _instance;

    public static MyClass Instance
    {
        get
        {
            if (_instance == null)
                _instance = new MyClass();
            return _instance;
        }
    }
}
```

```csharp
public class MyClass
{
    private static MyClass _instance;

    public static MyClass Instance
    {
        get
        {
            if (_instance != null)
                return _instance;
            _instance = new MyClass();
            return _instance;
        }
    }
}
```

```csharp
public class MyClass
{
    private static MyClass _instance;

    public static MyClass Instance => _instance ??= new MyClass();
}
```

Use an IoC/DI framework instead, or at least use proper locking when initializing the instance.
- Singletons Are Evil
- Singleton Pattern
- MITRE, CWE-543: Use of Singleton Pattern Without Proper Synchronization in a Multithreaded Context
- MITRE, CWE-362: Concurrent Execution using Shared Resource with Improper Synchronization
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA011: Avoid lazily initialized, self-contained, static singleton properties
dotnet_diagnostic.DSA011.severity = warning
```
Avoid the "if not exists, then insert" check-then-act antipattern on database types (TOCTOU).
This rule fires when the "if not exists, then insert" check-then-act pattern is used on database types (DbSet<T>, IQueryable<T>).
The pattern is a non-atomic sequence that first checks whether a record exists and then, based on the result, inserts a new one.
This pattern is not a bad thing per se, but it suggests (or at least raises the suspicion) that the coherence of the data is handled only by application-level logic which, if true, can lead to undesired effects.
For the same pattern on in-memory collections with atomic alternatives (e.g., Dictionary.TryAdd, HashSet.Add), see DSA017.
For the same pattern on collections without atomic alternatives (e.g., List<T>, ICollection<T>), see DSA018.
Since the database is usually the Single Source of Truth for the data, the uniqueness and semantic consistency of that data must be guaranteed at the database level, not (only) at the application level. If the DB is the Single Source of Truth and guarantees the semantic consistency of the data, then an "if not exists, then insert" pattern shouldn't be necessary at all, unless the developer wants to be proactive and give the caller a user-friendly message.
If the DB is not the Single Source of Truth, this pattern leads to false confidence, because it's prone to TOCTOU (Time-of-Check to Time-of-Use) race conditions: between the moment the existence check completes and the insert executes, another thread or process could insert the same record, leading to duplicate entries and data corruption.
This is particularly dangerous in database operations when no UNIQUE constraint is in place. Without such a constraint, the only safeguard against duplication is the application-level check, which is inherently non-atomic and unreliable under concurrent access.
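As an illustration of a database-level safeguard, a unique index can be declared with EF Core's fluent API (the `Item` entity and `Name` property are assumptions for this sketch):

```csharp
// In your DbContext: a unique index makes the database reject duplicate inserts,
// so a concurrent duplicate fails with DbUpdateException instead of corrupting data.
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<Item>()
        .HasIndex(i => i.Name)
        .IsUnique();
}
```

With this in place, the application-level existence check becomes, at most, a courtesy for producing friendlier error messages.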
The following patterns are matched:
```csharp
// Pattern A: negated existence check + insert
if (!items.Any(x => x.Id == id))
{
    items.Add(newItem);
}

// Pattern B: existence check + throw, followed by insert
if (items.Any(x => x.Id == id))
    throw new ConflictException("Already exists");
items.Add(newItem);

// Pattern C: existence check + else insert
if (items.Any(x => x.Id == id))
{
    // update or other logic
}
else
{
    items.Add(newItem);
}
```

Variants using `Count(...) == 0`, `FirstOrDefault(...) == null`, `Contains(...)`, and their async counterparts are also detected.
- TOCTOU - Time-of-Check to Time-of-Use
- MITRE, CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition
- EAFP - Easier to Ask for Forgiveness than Permission
- OWASP: Race Conditions
If the if in the code is ONLY a precaution you added to proactively handle errors (e.g. to show a more user-friendly message), AND the database is protected by a UNIQUE constraint or other mechanisms that guarantee data consistency and uniqueness, then you can explicitly ignore this warning with #pragma warning disable DSA012.
Otherwise, it's almost guaranteed that this is an issue to handle, and you shouldn't ignore it.
In order to fix the problem, apply one of the following approaches depending on the situation:
- Atomic upsert: use a database-level atomic operation such as SQL `MERGE`, `INSERT ... ON CONFLICT DO NOTHING/UPDATE` (PostgreSQL), or `INSERT ... ON DUPLICATE KEY UPDATE` (MySQL).
- UNIQUE constraint: add a `UNIQUE` constraint to the database so that duplicate inserts are rejected at the database level, regardless of application-level checks.
- EAFP approach: attempt the insert directly and catch the resulting exception (e.g., `DbUpdateException` for a unique constraint violation) instead of checking beforehand.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA012: Avoid the "if not exists, then insert" check-then-act antipattern
dotnet_diagnostic.DSA012.severity = error
```
```csharp
public class MyService
{
    private readonly MyDbContext _dbContext;

    public void AddItem_NotOk(string name)
    {
        // this WILL trigger the rule: non-atomic check-then-act (suspicious: it leads
        // one to think that the DB is not taking care of the uniqueness of the data on its own)
        if (!_dbContext.Items.Any(x => x.Name == name))
        {
            _dbContext.Items.Add(new Item { Name = name });
            _dbContext.SaveChanges();
        }
    }

    public void AddItem_Ok(string name)
    {
        // this WILL NOT trigger the rule: EAFP approach
        try
        {
            _dbContext.Items.Add(new Item { Name = name });
            // Assumes that there IS a UNIQUE constraint on the Name column taking care of the uniqueness.
            _dbContext.SaveChanges();
        }
        catch (DbUpdateException)
        {
            // Handle duplicate gracefully
        }
    }
}
```

Use the collection's atomic operation instead of the check-then-act pattern.
This rule fires when the "if not exists, then insert" check-then-act pattern is used on a collection type that offers an atomic alternative. The check is redundant because the collection provides a built-in operation that combines existence verification and insertion atomically. Using the check-then-act pattern instead of the atomic operation is prone to TOCTOU race conditions in multithreaded code.
The following collection types and their suggested alternatives are covered:
| Collection type | Suggested atomic alternative |
|---|---|
| `Dictionary<K,V>` | `TryAdd` or indexer assignment `[key] = value` |
| `ConcurrentDictionary<K,V>` | `GetOrAdd`, `AddOrUpdate`, or `TryAdd` |
| `HashSet<T>` | `Add` (already returns a bool indicating whether the element was added) |
| `SortedSet<T>` | `Add` (already returns a bool) |
| `SortedDictionary<K,V>` | `TryAdd` or indexer assignment `[key] = value` |
| `SortedList<K,V>` | Indexer assignment `[key] = value` |
| `ImmutableHashSet<T>` | `Add` (already handles duplicates) |
| `ImmutableDictionary<K,V>` | `SetItem` (upsert semantics) |
| `ImmutableSortedSet<T>` | `Add` (already handles duplicates) |
| `ImmutableSortedDictionary<K,V>` | `SetItem` (upsert semantics) |
- DSA012: Check-then-act on database types
- DSA018: Check-then-act on collections without atomic alternatives
- MITRE, CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition
```csharp
// Dictionary: ContainsKey + Add
if (!dict.ContainsKey(key))
{
    dict.Add(key, value); // ❌ use dict.TryAdd(key, value) or dict[key] = value
}

// HashSet: Contains + Add
if (!set.Contains(item))
{
    set.Add(item); // ❌ just call set.Add(item) — it returns false if already present
}

// Dictionary: TryGetValue negated + Add
if (!dict.TryGetValue(key, out var existing))
{
    dict.Add(key, value); // ❌ use dict.TryAdd(key, value)
}
```

```csharp
// List (no atomic alternative — handled by DSA018)
if (!list.Contains(item)) { list.Add(item); } // ✅ not flagged by DSA017

// DbSet (database type — handled by DSA012)
if (!dbSet.Any(x => x.Id == id)) { dbSet.Add(entity); } // ✅ not flagged by DSA017

// ContainsKey without Add in body
if (!dict.ContainsKey(key)) { Console.WriteLine("not found"); } // ✅ no insert
```

Replace the check-then-act pattern with the collection's atomic operation:
```csharp
// Dictionary: use TryAdd
dict.TryAdd(key, value); // ✅ atomic: returns false if key already exists

// Dictionary: use indexer (upsert — overwrites if exists)
dict[key] = value; // ✅ atomic upsert

// HashSet: just call Add
set.Add(item); // ✅ returns bool; the Contains check is redundant

// ConcurrentDictionary: use GetOrAdd
var val = concurrentDict.GetOrAdd(key, _ => ComputeValue()); // ✅ thread-safe
```

In order to change the severity level of this rule, change/add this line in the .editorconfig file:
```ini
# DSA017: Use the collection's atomic operation instead of the check-then-act pattern
dotnet_diagnostic.DSA017.severity = error
```
```csharp
public class RegistryService
{
    private readonly Dictionary<string, int> _registry = new();

    public void Register_NotOk(string key, int value)
    {
        // this WILL trigger the rule
        if (!_registry.ContainsKey(key))
        {
            _registry.Add(key, value);
        }
    }

    public void Register_Ok(string key, int value)
    {
        // this WILL NOT trigger the rule
        _registry.TryAdd(key, value);
    }
}
```

Protect the check-then-act pattern with a lock or use a collection with built-in duplicate handling.
This rule fires when the "if not exists, then insert" check-then-act pattern is used on a collection type that does not offer an atomic alternative (e.g., List<T>, IList<T>, ICollection<T>, LinkedList<T>), or on an unknown type where the analyzer cannot determine whether an atomic operation exists.
Between the existence check and the insert, another thread could modify the collection, leading to duplicate entries or corruption. Since the collection type does not provide a built-in atomic operation, the check-then-act sequence must be protected externally.
- DSA012: Check-then-act on database types
- DSA017: Check-then-act on collections with atomic alternatives
- MITRE, CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition
- MITRE, CWE-667: Improper Locking
```csharp
// List: Any + Add
if (!items.Any(x => x == value))
{
    items.Add(value); // ❌ TOCTOU: another thread could add between check and insert
}

// List: Contains + Add
if (!items.Contains(value))
{
    items.Add(value); // ❌
}

// ICollection: Contains + Add
if (!collection.Contains(item))
{
    collection.Add(item); // ❌
}
```

```csharp
// Dictionary (has atomic alternative — handled by DSA017)
if (!dict.ContainsKey(key)) { dict.Add(key, value); } // ✅ not flagged by DSA018

// HashSet (has atomic alternative — handled by DSA017)
if (!set.Contains(item)) { set.Add(item); } // ✅ not flagged by DSA018

// DbSet (database type — handled by DSA012)
if (!dbSet.Any(x => x.Id == id)) { dbSet.Add(entity); } // ✅ not flagged by DSA018
```

Protect the sequence with a lock or SemaphoreSlim, or switch to a collection type with built-in duplicate handling:
// Fix 1: protect with lock
lock (_syncRoot)
{
if (!items.Contains(value))
{
items.Add(value); // ✅ protected by lock
}
}
// Fix 2: switch to HashSet (which handles duplicates inherently)
var set = new HashSet<string>();
set.Add(value); // ✅ returns false if already present, no check needed
// Fix 3: switch to ConcurrentDictionary for thread-safe keyed access
var dict = new ConcurrentDictionary<string, bool>();
dict.TryAdd(value, true); // ✅ thread-safe, atomic

If the code is guaranteed to run single-threaded (e.g., inside a single-threaded console application or a synchronization context that serializes access), the TOCTOU risk does not apply. Suppress with #pragma warning disable DSA018.
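The fixes above mention SemaphoreSlim as an alternative to lock; it is the usual choice when the protected section contains awaits, since `lock` cannot span an `await`. A minimal async-friendly sketch (the `TagStore`, `_gate`, and `_tags` names are illustrative):

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public class TagStore
{
    // SemaphoreSlim(1, 1) acts as an async-compatible mutex
    private readonly SemaphoreSlim _gate = new(1, 1);
    private readonly List<string> _tags = new();

    public async Task AddTagAsync(string tag)
    {
        await _gate.WaitAsync(); // unlike lock, acquisition can be awaited
        try
        {
            if (!_tags.Contains(tag))
            {
                _tags.Add(tag); // check-then-act is now atomic with respect to other callers
            }
        }
        finally
        {
            _gate.Release();
        }
    }
}
```

The try/finally ensures the semaphore is released even if the protected code throws.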
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
# DSA018: Protect the check-then-act pattern with a lock or use a collection with built-in duplicate handling
dotnet_diagnostic.DSA018.severity = error
public class TagService
{
private readonly List<string> _tags = new();
private readonly object _lock = new();
public void AddTag_NotOk(string tag)
{
// this WILL trigger the rule
if (!_tags.Contains(tag))
{
_tags.Add(tag);
}
}
public void AddTag_Ok(string tag)
{
// this WILL NOT trigger the rule (lock protection)
lock (_lock)
{
if (!_tags.Contains(tag))
{
_tags.Add(tag);
}
}
}
}

DSA013: Minimal API endpoints should have an explicit authorization configuration.
This rule fires when a Minimal API endpoint (MapGet, MapPost, MapPut, MapDelete, MapPatch, MapMethods, or Map) is called on a local (non-parameter) IEndpointRouteBuilder without .RequireAuthorization() or .AllowAnonymous() in its fluent chain.
Without an explicit authorization configuration, the endpoint silently defaults to anonymous access, creating an unauthenticated attack surface. Every endpoint should make a conscious, reviewable authorization decision.
This rule has the highest confidence level among the three authorization rules: the builder is a local variable (not received from a caller), and it is not a RouteGroupBuilder (no group-level inheritance to consider). If auth is missing from the chain, it is almost certainly a bug.
For endpoints on RouteGroupBuilder, see DSA014.
For endpoints on IEndpointRouteBuilder received as a method parameter, see DSA015.
- MITRE, CWE-862: Missing Authorization
- ASP.NET Core Minimal APIs - Authorization
- OWASP: Broken Access Control
var app = WebApplication.Create();
// direct endpoint without auth
app.MapGet("/api/items", GetItemsAsync);
// fluent chain without auth
app.MapGet("/api/items", GetItemsAsync)
.WithName("GetItems")
.Produces<List<Item>>(StatusCodes.Status200OK);

Add .RequireAuthorization() or .AllowAnonymous() directly to the endpoint:
// explicit auth
app.MapGet("/api/items", GetItemsAsync)
.RequireAuthorization(); // ✅
// explicit anonymous access (conscious decision)
app.MapGet("/public/health", HealthCheckAsync)
.AllowAnonymous(); // ✅

In order to change the severity level of this rule, change/add this line in the .editorconfig file:
# DSA013: Minimal API endpoints should have an explicit authorization configuration
dotnet_diagnostic.DSA013.severity = error
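Beyond a bare `.RequireAuthorization()`, the standard ASP.NET Core overload also accepts policy names, which makes the explicit decision carry the actual requirement. A sketch (the "AdminOnly" policy name is illustrative):

```csharp
// assumes a policy registered at startup, e.g.:
// builder.Services.AddAuthorization(o =>
//     o.AddPolicy("AdminOnly", p => p.RequireRole("Admin")));

app.MapGet("/api/admin/items", GetItemsAsync)
   .RequireAuthorization("AdminOnly"); // ✅ explicit, and enforces a specific policy
```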
public class Program
{
public static void Main()
{
var app = WebApplication.Create();
// this WILL trigger the rule
app.MapGet("/api/items", GetItems)
.WithName("GetItems")
.Produces<List<DataItem>>(StatusCodes.Status200OK);
// this WILL NOT trigger the rule
app.MapGet("/api/items", GetItems)
.WithName("GetItems")
.Produces<List<DataItem>>(StatusCodes.Status200OK)
.RequireAuthorization();
}
}

DSA014: Minimal API endpoints on route groups should have an explicit authorization configuration.
This rule fires when a Minimal API endpoint is called on a RouteGroupBuilder (obtained via MapGroup) and neither the endpoint's fluent chain nor the route group (or any ancestor group) carries .RequireAuthorization() or .AllowAnonymous().
The analyzer checks multiple levels:
- Endpoint chain: `.RequireAuthorization()` or `.AllowAnonymous()` on the `MapGet`/`MapPost`/etc. call itself.
- Local group auth: auth on the group's `MapGroup()` chain or as a separate statement in the same method.
- Nested group ancestry: auth inherited from a parent group (e.g., outer group has auth, inner group inherits).
- Cross-method tracing: when the `RouteGroupBuilder` is received as a method parameter, the analyzer searches the compilation for all call sites and verifies that every caller passes a group with authorization configured.
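The nested-group ancestry level above can be illustrated with a short sketch (handler names are illustrative):

```csharp
// auth on the outer group is inherited by nested groups and their endpoints
var api = builder.MapGroup("/api").RequireAuthorization(); // auth configured once here
var v1 = api.MapGroup("/v1");                              // inner group inherits it
v1.MapGet("/items", GetItemsAsync);                        // ✅ covered via ancestry, not flagged
```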
- MITRE, CWE-862: Missing Authorization
- ASP.NET Core Minimal APIs - Authorization
- OWASP: Broken Access Control
// Pattern A: group without auth, endpoint without auth
var group = builder.MapGroup("/api");
group.MapGet("/items", GetItemsAsync); // ❌
// Pattern B: nested groups, neither has auth
var api = builder.MapGroup("/api");
var v1 = api.MapGroup("/v1");
v1.MapGet("/items", GetItemsAsync); // ❌
// Pattern C: group parameter, call site has no auth
public static void MapItems(RouteGroupBuilder group)
{
group.MapGet("/items", GetItemsAsync); // ❌ if caller doesn't authorize
}

Add auth to the endpoint, the group, or ensure all callers pass an authorized group:
// Fix 1: auth on the endpoint itself
group.MapGet("/items", GetItemsAsync)
.RequireAuthorization(); // ✅
// Fix 2: auth on the group (inline)
var group = builder.MapGroup("/api").RequireAuthorization(); // ✅
group.MapGet("/items", GetItemsAsync); // covered
// Fix 3: auth on the group (separate statement)
var group = builder.MapGroup("/api");
group.RequireAuthorization(); // ✅
group.MapGet("/items", GetItemsAsync); // covered
// Fix 4: auth at the call site for parameterized groups
var api = builder.MapGroup("/api").RequireAuthorization();
MapItems(api); // ✅ caller provides authorized group

If the RouteGroupBuilder is received as a parameter and the method is part of a public API consumed by external assemblies (where call sites are not visible to the analyzer), you may suppress this rule with #pragma warning disable DSA014 and document that callers are expected to provide an authorized group.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
# DSA014: Minimal API endpoints on route groups should have an explicit authorization configuration
dotnet_diagnostic.DSA014.severity = error
public class Startup
{
public void Configure(IEndpointRouteBuilder builder)
{
// this WILL trigger the rule: group has no auth
var unprotected = builder.MapGroup("/api");
unprotected.MapGet("/items", GetItemsAsync);
// this WILL NOT trigger the rule: group has auth
var secured = builder.MapGroup("/api").RequireAuthorization();
secured.MapGet("/items", GetItemsAsync);
}
}

DSA015: Minimal API endpoints on parameterized route builders should have an explicit authorization configuration.
This rule fires when a Minimal API endpoint is called on an IEndpointRouteBuilder received as a method parameter (not a RouteGroupBuilder — that is handled by DSA014) without .RequireAuthorization() or .AllowAnonymous() in its fluent chain.
Since the builder comes from the caller, authorization may be configured at the call site. The analyzer performs cross-method tracing: it searches the entire compilation for all invocations of the method and verifies that every call site passes a builder with authorization configured. This includes:
- Arguments with inline auth (e.g., `group.RequireAuthorization()`)
- Local variables whose declaration or separate statements carry auth
- Nested group ancestry (auth inherited from parent groups)
- Recursive pass-through (parameter passed through multiple method layers)
If authorization cannot be confirmed at every call site, the endpoint is flagged.
Note: if no call sites are found in the compilation (e.g., the method is a public API consumed by an external assembly), the rule flags the endpoint. In this case, add auth directly to the endpoint or suppress with #pragma warning disable DSA015.
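The recursive pass-through case listed above can be sketched as follows (method names are illustrative; `RouteGroupBuilder` implements `IEndpointRouteBuilder`, so the authorized group can be passed directly):

```csharp
// the builder flows through two method layers before any endpoint is mapped
public static void MapAll(IEndpointRouteBuilder builder) => MapItems(builder);

public static void MapItems(IEndpointRouteBuilder builder)
{
    builder.MapGet("/items", GetItemsAsync); // traced back through MapAll to the original call site
}

// call site: because the group carries auth, every traced layer is satisfied
var api = app.MapGroup("/api").RequireAuthorization();
MapAll(api);
```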
- MITRE, CWE-862: Missing Authorization
- ASP.NET Core Minimal APIs - Authorization
- OWASP: Broken Access Control
// Pattern A: extension method, no call sites or call sites lack auth
public static void MapItems(this IEndpointRouteBuilder builder)
{
builder.MapGet("/items", GetItemsAsync); // ❌
}
// Pattern B: call site passes unauthenticated builder
public static void MapItems(IEndpointRouteBuilder builder)
{
builder.MapGet("/items", GetItemsAsync); // ❌
}
var app = GetBuilder();
MapItems(app); // no auth on app

Add auth directly to the endpoint, or ensure all callers provide an authorized builder:
// Fix 1: auth on the endpoint itself
public static void MapItems(this IEndpointRouteBuilder builder)
{
builder.MapGet("/items", GetItemsAsync)
.RequireAuthorization(); // ✅
}
// Fix 2: ensure all callers pass authorized builders
var group = builder.MapGroup("/api").RequireAuthorization();
group.MapItems(); // ✅ group has auth

If the method is part of a public API or a shared library consumed by external assemblies (where call sites are not visible to the analyzer), and you have ensured that all external callers provide an authorized builder, you may suppress this rule with #pragma warning disable DSA015.
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
# DSA015: Minimal API endpoints on parameterized route builders should have an explicit authorization configuration
dotnet_diagnostic.DSA015.severity = error
public static class EndpointExtensions
{
public static IEndpointRouteBuilder MapDataItems(this IEndpointRouteBuilder builder)
{
// this WILL trigger the rule if any call site lacks auth
builder.MapGet("/api/items", GetItems)
.WithName("GetItems")
.Produces<List<DataItem>>(StatusCodes.Status200OK);
// this WILL NOT trigger the rule: explicit authorization
builder.MapGet("/api/items", GetItems)
.WithName("GetItems")
.Produces<List<DataItem>>(StatusCodes.Status200OK)
.RequireAuthorization();
return builder;
}
}

DSA016: Avoid repeated invocation of the same enumeration method with identical arguments.
- Category: Code Smells
- Severity: Warning ⚠
- Related rules: DSA005
This rule fires when the same LINQ/enumeration method is called multiple times on the same receiver with the same arguments within the same scope (method body, lambda body, or local function body).
Each redundant call re-enumerates the source, which causes two distinct problems:
- Performance: if the source is backed by a database query, a network stream, or a large in-memory collection, each call repeats the full scan. For N elements and K duplicate calls, the work becomes O(N * K) instead of O(N).
- Non-determinism: if the source is a deferred `IEnumerable<T>` (e.g., a LINQ query, a generator, or a stream), consecutive enumerations may return different results. The duplicate calls could see different data, leading to inconsistent state within the same object or method.
The analyzer tracks the following method families:
- Element access: `First`, `FirstOrDefault`, `Single`, `SingleOrDefault`, `Last`, `LastOrDefault`, `ElementAt`, `ElementAtOrDefault`, `Find`
- Boolean checks: `Any`, `All`, `Contains`, `Exists`
- Counting: `Count`, `LongCount`
- Aggregation: `Min`, `Max`, `Sum`, `Average`, `Aggregate`
- Async variants: all of the above with the `Async` suffix
Each scope (method body, lambda, local function) is analyzed independently; invocations in nested lambdas are not compared with invocations in the outer scope.
Security-wise, repeated enumeration of a deferred IEnumerable backed by a database query or external data source can lead to non-deterministic behavior if the underlying data changes between enumerations, potentially causing inconsistent authorization decisions, data integrity violations, or information disclosure.
- MITRE, CWE-1049: Interaction Frequency — excessive repeated operations consuming unnecessary resources
- MITRE, CWE-834: Excessive Iteration — redundant enumeration cycles over the same data source
- CA1851: Possible multiple enumerations of IEnumerable collection
- DSA005: Potential non-deterministic point-in-time execution (similar concept for `DateTime.Now`)
The following patterns are matched:
// Pattern A: same FirstOrDefault with same predicate called multiple times in a lambda
var result = orders.Select(o => new
{
Line = orderLines.FirstOrDefault(l => l.OrderId == o.OrderId)?.Description, // ❌
Qty = orderLines.FirstOrDefault(l => l.OrderId == o.OrderId)?.Quantity, // ❌
Unit = orderLines.FirstOrDefault(l => l.OrderId == o.OrderId)?.UnitOfMeasure, // ❌
});
// Pattern B: same Count() called twice in a method body
var count1 = items.Count(); // ❌
var count2 = items.Count(); // ❌
// Pattern C: same Any with same predicate
var exists1 = items.Any(x => x.Id == id); // ❌
var exists2 = items.Any(x => x.Id == id); // ❌
// Pattern D: same Min/Max/Sum/Average called twice
var min1 = values.Min(); // ❌
var min2 = values.Min(); // ❌
// Pattern E: conditional access ?.FirstOrDefault called twice
var name = items?.FirstOrDefault(x => x.Id == id)?.Name; // ❌
var code = items?.FirstOrDefault(x => x.Id == id)?.Code; // ❌
// Pattern F: chained receiver
var a = items.Where(x => x.Active).FirstOrDefault(x => x.Id == id); // ❌
var b = items.Where(x => x.Active).FirstOrDefault(x => x.Id == id); // ❌
// Pattern G: Contains with same argument
var has1 = items.Contains(value); // ❌
var has2 = items.Contains(value); // ❌

The following patterns are NOT matched:
// Different predicates on the same receiver
var a = items.FirstOrDefault(x => x.Id == id);
var b = items.FirstOrDefault(x => x.Name == name); // ✅ different predicate
// Same predicate on different receivers
var a = items1.FirstOrDefault(x => x.Id == id);
var b = items2.FirstOrDefault(x => x.Id == id); // ✅ different receiver
// Different methods on the same receiver (Any vs FirstOrDefault)
var exists = items.Any(x => x.Id == id);
var item = items.FirstOrDefault(x => x.Id == id); // ✅ different method
// Same invocation in different scopes (method body vs nested lambda)
var a = items.FirstOrDefault(x => x.Id == 1); // scope: method body
var results = ids.Select(id =>
items.FirstOrDefault(x => x.Id == 1)); // ✅ scope: lambda (separate)
// Same invocation in two separate lambdas (each is its own scope)
Action a1 = () => { var r1 = items.FirstOrDefault(x => x.Id == 1); }; // scope: lambda 1
Action a2 = () => { var r2 = items.FirstOrDefault(x => x.Id == 1); }; // ✅ scope: lambda 2
// Count with vs without predicate (different argument signatures)
var total = items.Count();
var filtered = items.Count(x => x.Id > 0); // ✅ different arguments
// Non-tracked methods (ToString, custom methods, etc.)
var s1 = value.ToString();
var s2 = value.ToString(); // ✅ not a tracked enumeration method
// Called only once
var item = items.FirstOrDefault(x => x.Id == id); // ✅ single invocation

Extract the result of the enumeration method into a local variable and reuse it:
// Fix for Pattern A:
var result = orders.Select(o =>
{
var line = orderLines.FirstOrDefault(l => l.OrderId == o.OrderId); // ✅ once
return new
{
Line = line?.Description,
Qty = line?.Quantity,
Unit = line?.UnitOfMeasure,
};
});
// Fix for Pattern B:
var count = items.Count(); // ✅ once
// use 'count' wherever needed
// Fix for Pattern C:
var exists = items.Any(x => x.Id == id); // ✅ once
// use 'exists' wherever needed

If the collection is known to be modified between the two calls and re-enumeration is intentional, you may suppress this rule with #pragma warning disable DSA016. However, in most cases, modifying a collection between two identical queries suggests a design issue that should be addressed.
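When several different tracked methods hit the same deferred source, another option is to materialize it once and query the in-memory copy. A sketch (appropriate only when the source fits comfortably in memory):

```csharp
// enumerate the deferred source exactly once
var snapshot = items.ToList();

var count = snapshot.Count;                 // ✅ property access, no re-enumeration
var exists = snapshot.Any(x => x.Id == id); // ✅ scans the in-memory copy only
```

This also pins the data to a single point in time, which addresses the non-determinism concern described above.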
In order to change the severity level of this rule, change/add this line in the .editorconfig file:
# DSA016: Avoid repeated invocation of the same enumeration method with identical arguments
dotnet_diagnostic.DSA016.severity = error
public class OrderService
{
public object BuildSummary(IEnumerable<Order> orders, IEnumerable<OrderLine> lines)
{
// this WILL trigger the rule: FirstOrDefault called 3 times
// with the same predicate on the same receiver
var summary = orders.Select(o => new
{
Description = lines.FirstOrDefault(l => l.OrderId == o.Id)?.Description,
Quantity = lines.FirstOrDefault(l => l.OrderId == o.Id)?.Quantity,
Price = lines.FirstOrDefault(l => l.OrderId == o.Id)?.UnitPrice,
});
// this WILL NOT trigger the rule: result extracted into a variable
var summaryFixed = orders.Select(o =>
{
var line = lines.FirstOrDefault(l => l.OrderId == o.Id);
return new
{
Description = line?.Description,
Quantity = line?.Quantity,
Price = line?.UnitPrice,
};
});
return summaryFixed;
}
}

DSA019: Avoid repeated deeply nested member access chains.
- Category: Code Smells
- Severity: Warning ⚠
- Related rules: DSA016
This rule fires when the same deeply nested member access chain (e.g., home.Rooms.Bathroom.Lights) appears multiple times in the same scope (method body, lambda, or local function). Repeated deep dereferencing reduces readability and may incur unnecessary overhead if the intermediate accesses involve computation, virtual dispatch, or property getters with side effects.
The depth threshold is configurable: only chains whose depth (number of member accesses and element accesses from the root) meets or exceeds the threshold are checked. The default threshold is 3.
Each scope is analyzed independently; chains in nested lambdas are not compared with chains in the outer scope.
Depth counting: each .Property, [index], and .Method access adds one level. InvocationExpression wrappers (e.g., .GetData()) are traversed transparently without adding depth. Expressions inside nameof() are excluded.
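The depth-counting rules above can be illustrated as follows (identifiers are illustrative):

```csharp
var x = a.B;             // depth 1: below threshold
var y = a.B.C;           // depth 2: below threshold, repeats not flagged by default
var z = a.B.C.D;         // depth 3: repeats of a.B.C.D are flagged
var w = a.GetB().C.D;    // depth 3: the invocation wrapper on GetB() adds no extra level
var n = nameof(A.B.C);   // excluded: nameof is compile-time only
```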
The threshold is configurable via .editorconfig:
# Default is 3. Set higher to allow deeper repeated chains; lower to be stricter.
dotnet_diagnostic.DSA019.max_repeated_dereferenciation_depth = 3
Security-wise, repeated deep member access chains can expose code to non-deterministic behavior if any intermediate property getter has side effects, performs lazy initialization, or reads from a volatile source. In security-sensitive code paths (e.g., authorization checks, input validation), evaluating the same chain multiple times could yield different values, leading to time-of-check to time-of-use vulnerabilities.
- MITRE, CWE-1049: Interaction Frequency — excessive repeated access operations
- MITRE, CWE-367: Time-of-check Time-of-use (TOCTOU) Race Condition — repeated evaluation may yield different values if the object graph is mutated concurrently
- DSA016: Avoid repeated enumeration method invocations (similar concept for LINQ method calls)
// Pattern A: same chain with different indexer
var primary = home.Rooms.Bathroom.Lights[0].IsOn(); // ❌ home.Rooms.Bathroom.Lights (depth 3)
var secondary = home.Rooms.Bathroom.Lights[1].IsOn(); // ❌ repeated
// Pattern B: same chain with different terminal property
var connStr = config.Settings.Infrastructure.Database.ConnectionString; // ❌ config.Settings.Infrastructure.Database (depth 3)
var timeout = config.Settings.Infrastructure.Database.Timeout; // ❌ repeated
// Pattern C: chain with method call in the middle
var count = provider.Service.GetReport().Summary.Count; // ❌ provider.Service.GetReport().Summary (depth 3)
var label = provider.Service.GetReport().Summary.Label; // ❌ repeated

// Chain depth below threshold (depth 2, threshold 3)
var a = outer.Inner.Value;
var b = outer.Inner.Value; // ✅ depth 2, below threshold
// Chain appears only once
var v = a.B.C.D.Value; // ✅ single occurrence
// Inside nameof (compile-time, no actual dereferencing)
var n1 = nameof(A.B.C);
var n2 = nameof(A.B.C); // ✅ excluded
// Different scopes (method body vs nested lambda)
var v1 = a.B.C.D.Value;
Action act = () => { var v2 = a.B.C.D.Value; }; // ✅ separate scopes
// Chains that differ at the root
var v1 = a1.B.C.D.Value;
var v2 = a2.B.C.D.Value; // ✅ different root identifiers
// Already extracted into a variable
var lights = home.Rooms.Bathroom.Lights;
var primary = lights[0].IsOn();
var secondary = lights[1].IsOn(); // ✅ no deep chain repeated

Extract the repeated chain into a local variable:
// Before:
var primary = home.Rooms.Bathroom.Lights[0].IsOn();
var secondary = home.Rooms.Bathroom.Lights[1].IsOn();
// After:
var lights = home.Rooms.Bathroom.Lights;
var primary = lights[0].IsOn();
var secondary = lights[1].IsOn();

In order to change the severity level of this rule, change/add this line in the .editorconfig file:
# DSA019: Avoid repeated deeply nested member access chains
dotnet_diagnostic.DSA019.severity = error
public class HomeAutomationService
{
public object GetLightStatus(Home home)
{
// this WILL trigger the rule: home.Rooms.Bathroom.Lights repeated
return new
{
Primary = home.Rooms.Bathroom.Lights[0].IsOn(),
Secondary = home.Rooms.Bathroom.Lights[1].IsOn(),
Tertiary = home.Rooms.Bathroom.Lights[2].IsOn(),
};
}
public object GetLightStatus_Fixed(Home home)
{
// this WILL NOT trigger the rule: extracted into a variable
var lights = home.Rooms.Bathroom.Lights;
return new
{
Primary = lights[0].IsOn(),
Secondary = lights[1].IsOn(),
Tertiary = lights[2].IsOn(),
};
}
}