Role Requirements
Procedure Scope: Administrators
Required Group Membership: Admin.SecurityOperator
Handbook Reference
Package: Identity Security
Domain: TBD
Modifies: TBD
When to Perform this Operation
Twice a day, at key times such as 8 AM and 2 PM.
Analyst Description and Importance
The risky users queue plays a key role in validating detections that may indicate a compromised user account, and it complements the automated remediation that identity protection enforces through conditional access. While conditional access and identity protection can handle the large majority of threats on their own when set up effectively, certain user risk events present subtleties that require an analyst’s informed judgment. By reviewing these entries daily, analysts validate unresolved or ambiguous indicators in a timely manner, ensuring the identity security control indicators stay accurate and effective and that downstream user impacts are not left lingering unnoticed. This strengthens the organization’s ability to safeguard both internal stakeholders and trusted external partners, ensuring no risky situation remains unexamined for long and control effectiveness is maintained.
Security Importance:
Monitoring and responding to the risky users queue ensures that identity protection controls can rely on accurate account compromise indicators, helping to mitigate the security risk of advanced threat actors exploiting inaccuracies or lack of analyst attention to evade detection and remediation for account compromise.
Business Importance:
Monitoring and responding to the risky users queue ensures legitimate user activity is validated, helping to mitigate the business risk of an errant risk classification from enforcing actions which would prevent a user with legitimate business need from accessing their account.
Covered in this Operation
Train
- Understand Risky User Security Information
- Understanding Risky User Detection Technologies
- Tuning User Risk Detection Technologies
- Recognize when to Block a User
- Recognize when to Reset a Password
- Recognize when to Confirm User Compromised
- Recognize when to Dismiss User Risk
- Recognize when to Confirm User Safe
Monitor
- Review the Risky Users Queue
Respond
- Respond with Confirm User Compromised Action
- Respond with Confirm User Safe Action
- Respond with Dismiss User Risk Action
- Respond with Block User Action
- Respond with Password Reset Action
In the context of managing the risky user identities queue, training is more than accepting events at face value. Analysts need the ability to interpret and verify complex user risk signals, which may not always align with initial indicators or the user’s perceived cause. Gaining proficiency in evaluating in-depth risk details, comprehending how detection methods interrelate, and recognizing when a suspicious pattern is legitimate versus anomalous business activity are central competencies. Mastery in assessing Basic Info, interpreting detection data, and correlating these insights with the organization’s normal usage patterns ensures the analyst’s perspective moves beyond basic acceptance and toward discernment that can be used to accurately guide the decisions of the identity protection security control.
This skillset ensures that analysts can apply nuanced judgment to a wide range of potential risks, enabling them to distinguish true threats more accurately from legitimate business operations. By aligning their decisions with how the identity protection controls operate, analysts help minimize false positives, improving overall detection quality and accuracy of real-time automated measures. This informed decision-making and a thorough understanding of the queue helps analysts contribute to a more effective, resilient security posture that keeps genuine threats in focus.
Understanding Risky User Security Information
Opening an individual risky user entry reveals detailed information that can be used to analyze the primary indicators.
Basic Info section:
- User: Shows the display name of the user associated with the risk, such as “John Doe”. This helps analysts confirm the specific user impacted by the detection.
- Roles: Indicates the roles assigned to the user, such as “Global Administrator”, “User Administrator”, or “Helpdesk Admin”. Understanding roles provides context on the user’s level of access.
- Username: Lists the user’s login identifier, such as “jdoe@example.com”. This assists analysts in identifying the account associated with the risk.
- User ID: A unique identifier for the user, such as a GUID or directory object ID. This helps analysts locate the account in logs or directory searches.
- Risk State: Describes the risk status of the user, such as “At Risk”, “Dismissed”, or “Remediated”. This state allows analysts to determine if the risk is active or already resolved.
- Risk Level: Indicates the severity of the user’s risk, such as “Low”, “Medium”, or “High”. This level helps prioritize the investigation based on the potential threat.
- Details: Provides a summary of actions taken, such as “Admin dismissed all risk for user”. This allows analysts to track remediation or dismissal efforts.
- Risk Last Updated: Shows the last time the user’s risk state was updated, such as “December 24th, 2024, at 10:00 AM”. This ensures analysts work with up-to-date information.
- Office Location: Identifies the office associated with the user, as shown in user properties, such as “New York HQ” or “Remote Worker”. This detail helps analysts correlate risks with physical locations.
- Department: Indicates the department the user belongs to as shown in user properties, such as “Finance”, “IT”, or “Human Resources”. This provides organizational context for the impacted account.
- Mobile Phone: Displays the user’s mobile number as shown in user properties, such as “+1-565-123-4567”. This can help investigators contact the user to verify suspicious activity.
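As a hypothetical illustration, the Basic Info fields above correspond closely to properties on the Microsoft Graph riskyUser resource (directory details such as Roles, Office Location, Department, and Mobile Phone come from the user object instead). The sketch below builds, but does not send, a request for a single entry; authentication handling is omitted.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def risky_user_request(user_id: str) -> dict:
    """Build a GET request for one risky user entry, selecting the
    Graph properties that back the Basic Info section."""
    select = ",".join([
        "id",                       # User ID
        "userDisplayName",          # User
        "userPrincipalName",        # Username
        "riskState",                # Risk State
        "riskLevel",                # Risk Level
        "riskDetail",               # Details
        "riskLastUpdatedDateTime",  # Risk Last Updated
    ])
    return {
        "method": "GET",
        "url": f"{GRAPH_BASE}/identityProtection/riskyUsers/{user_id}",
        "params": {"$select": select},
    }

req = risky_user_request("00000000-0000-0000-0000-000000000000")
print(req["url"])
```

In practice the returned JSON would be sent with a bearer token; the field mapping, not the transport, is the point of the sketch.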
Recent Risky Sign-ins section:
- Application: Lists the application accessed during the sign-in, such as “SharePoint”, “Outlook”, or “Teams”. This helps analysts determine which service was targeted.
- Status: Describes the success or failure of the sign-in attempt, such as “Success”, “Failed”, or “Interrupted”. This provides insight into whether the attack succeeded.
- Date: Shows when the sign-in occurred, such as “December 24, 2024, at 9:15 AM”. This helps analysts pinpoint the timeframe of the risky event.
- IP Address: Displays the IP address used during the sign-in, such as “192.168.1.1”. This helps analysts identify suspicious or unrecognized locations.
- Location: Indicates the geographical location of the sign-in, such as “New York, USA” or “Lagos, Nigeria”. This helps analysts detect anomalous or high-risk regions.
- Risk State: Describes the risk status of the user, such as “At Risk”, “Dismissed”, or “Remediated”. This state allows analysts to determine if the risk is active or already resolved.
- Risk Level (Aggregate): Reflects the overall risk severity, such as “Low”, “Medium”, or “High”. This rating helps prioritize investigations.
- Risk Level (Real-Time): Shows the real-time risk level of the sign-in, such as “Low”, “Medium”, or “High”. This provides dynamic feedback for analysts.
- Conditional Access Failure: Highlights any policy failures by policy name, such as “MFA not satisfied” or “Blocked location”. This identifies if security measures have prevented the risk or allowed an unauthorized flow.
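The Recent Risky Sign-ins fields map to properties on the Graph signIn resource (riskState, riskLevelAggregated, riskLevelDuringSignIn). A hedged sketch of building an OData filter for a user's unresolved risky sign-ins follows; the exact filter shapes a tenant supports may vary.

```python
def risky_signin_filter(user_id: str, min_level: str = "medium") -> str:
    """Build an OData $filter for /auditLogs/signIns that keeps only
    at-risk sign-ins at or above a chosen aggregate risk level."""
    levels = {
        "low": ["low", "medium", "high"],
        "medium": ["medium", "high"],
        "high": ["high"],
    }[min_level]
    # riskLevelAggregated corresponds to "Risk Level (Aggregate)" above;
    # riskLevelDuringSignIn would cover the real-time rating instead.
    level_clause = " or ".join(
        f"riskLevelAggregated eq '{lvl}'" for lvl in levels)
    return (f"userId eq '{user_id}' and riskState eq 'atRisk' "
            f"and ({level_clause})")

print(risky_signin_filter("00000000-0000-0000-0000-000000000000"))
```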
Detections Not Linked to a Sign-in section:
- Detection Type: Specifies the type of detection, such as “Leaked Credentials”, “Suspicious API Traffic”, or “Adversary in the Middle”. This identifies the nature of the risk and can be used to look it up in the detection table.
- Time Detected: Indicates when the detection occurred, such as “December 3, 2024, at 11:00 PM”. This helps analysts track the timeline of the suspicious activity.
- Detection Risk State: Describes the risk status, such as “At Risk”, “Dismissed”, or “Remediated”. This state allows analysts to determine if the risk is active or already resolved.
- Detection Risk Level: Displays the severity of the detection, such as “Low”, “Medium”, or “High”. This helps analysts prioritize responses to high-risk detections.
- Detection Risk Details: Provides additional context for the detection, such as “Unusual API enumeration”. This gives analysts specifics about the detection type.
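These offline detections also surface through the Graph riskDetection resource, whose `activity` property distinguishes detections linked to a sign-in ("signin") from those that are not ("user"). A sketch of building, but not sending, such a query follows; filter support for `activity` is an assumption to verify against current Graph documentation.

```python
def offline_detections_request() -> dict:
    """Build a GET request for risk detections not linked to a sign-in."""
    return {
        "method": "GET",
        "url": "https://graph.microsoft.com/v1.0/identityProtection/riskDetections",
        "params": {
            # "user" activity = offline detections; "signin" = sign-in linked.
            "$filter": "activity eq 'user'",
        },
    }

print(offline_detections_request()["url"])
```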
Risk History section:
- Date: Logs the date of the activity, such as “December 24th, 2024”. This helps analysts track the evolution of the user’s risk over time.
- Activity: Specifies the type of detection, such as “Leaked Credentials”, “Suspicious API Traffic”, or “Adversary in the Middle”. This identifies the nature of the risk and can be used to look it up in the detection table.
- Actor: Specifies which identity provider or source performed the activity, such as “Microsoft Entra ID”. This provides data for analysts on which system the risk was detected in.
- Risk State: Describes the risk status, such as “At Risk”, “Dismissed”, or “Remediated”. This state allows analysts to determine if the risk is active or already resolved.
- Risk Level: Displays the severity of the detection, such as “Low”, “Medium”, or “High”. This helps analysts prioritize responses to high-risk detections.
Understanding Risky User Detection Technologies
Understanding that a user is generating possible risk is the starting point for scoping an investigation. The risk history section specifies the risk detection technology that triggered the entry. Use this section to determine whether the detection represents a plausible threat and to set investigation goals for the different detection types. For Microsoft’s verbatim definitions of these, view the knowledge article here.
Tuning User Risk Detection Technologies
Some detection technologies can create large amounts of false positives, resulting in alert fatigue and unnecessary analyst workload. By refining what sanctioned behaviors look like—defining known-good patterns and trusted baselines—or employing compensating controls such as conditional access policies, organizations can reduce these non-actionable alerts. This allows analysts to focus their efforts where it truly improves the security posture.
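One common tuning lever is marking known corporate egress IPs as a trusted named location so sign-ins from them stop generating location-based risk. The sketch below builds the payload for the real Graph endpoint `/identity/conditionalAccess/namedLocations`; the display name and CIDR ranges are placeholders.

```python
def trusted_location_payload(name: str, cidrs: list[str]) -> dict:
    """Build an ipNamedLocation body marking the given CIDRs as trusted."""
    return {
        "@odata.type": "#microsoft.graph.ipNamedLocation",
        "displayName": name,
        "isTrusted": True,  # trusted locations suppress location-based risk
        "ipRanges": [
            {"@odata.type": "#microsoft.graph.iPv4CidrRange",
             "cidrAddress": cidr}
            for cidr in cidrs
        ],
    }

# POST this body to /identity/conditionalAccess/namedLocations.
payload = trusted_location_payload("HQ egress", ["203.0.113.0/24"])
print(payload["displayName"])
```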
Recognize when to Block a User
Blocking a user may be the best action if evidence suggests the account is currently under unauthorized control. In most cases, this is the preferred response to a confirmed user account compromise. If the user’s activities trigger multiple high-value signals, such as “Adversary in the Middle” and “Activity from anonymous IP address”, an analyst may want to consider a block. Situations where an analyst may see this in play include:
- Detections Signaling Successful Logins: “Adversary in the Middle” combined with malicious or anonymous IP addresses may suggest blocking the user, as that detection technology means an account could be compromised.
- Persistent Adversary Activity: Ongoing “Suspicious API Traffic” plus repeated MFA failures may guide an analyst toward blocking the account, as it indicates the attacker has gained a foothold.
- High-Value Account Exposure: Access to an account with valuable credentials such as “Global Administrator” could influence a decision to block, as the account could do more damage quickly and establish persistence.
Recognize when to Reset a Password
Resetting a password can remove known-compromised credentials, especially when leaks or suspicious patterns hint at attacker access. This action is evaluated after blocking, since it is geared more toward reducing risk than triggering other controls or processes. It can be a strategic measure if automatic risk remediation is not set up or was not effective. If events like “Leaked Credentials” or “Suspicious API Traffic” persist, an analyst may want to consider a forced password reset. Situations where an analyst may see this in play include:
- Confirmed Credential Exposure: “Leaked Credentials” combined with unusual sign-in patterns may prompt a password reset, as this indicates credentials are in unauthorized hands and resetting forces re-authentication.
- Sustained High-Risk Attempts: Recurrent “Adversary in the Middle” detections plus user-reported MFA issues might influence resetting the password, as it disrupts ongoing attempts to exploit compromised credentials.
- Ineffectiveness of Automatic Remediation: If other methods of risk reduction in the past such as MFA completion did not reduce ongoing anomalies, adding a password reset may help, as it removes the known compromised credential from attacker control.
Recognize when to Dismiss User Risk
Dismissing user risk may be considered when initial suspicions prove unfounded or are easily explained by normal business activity – in other words, a false positive. With the design of an auto-remediating identity control system, some anomalies resolve naturally through user self-remediation or verified legitimate behavior. If events (e.g., a single unusual sign-in) have a clear, benign explanation, an analyst may want to dismiss the risk. Importantly, this action does not indicate that any risk has been addressed or reduced for the user – simply that no action will be taken, because the suspicion proved inaccurate. Situations where an analyst may see this in play include:
- User-Verified Benign Behavior: If the user’s travel or VPN usage explains unusual sign-ins, dismissing risk may be suitable, as this confirms that abnormal activity aligns with legitimate user behavior.
- Successful Self-Remediation: A password reset that ends all alerts could support dismissing risk, as it indicates the detected anomaly was a transient issue resolved by the user’s action.
- Legitimate Organizational Context: Known business patterns that align with flagged activities might justify dismissal, as it ensures security teams focus only on true threats rather than normal operational deviations.
Recognize when to Confirm User Compromised
Confirming a user as compromised acknowledges the same level of threat as blocking the user, with the addition that the analyst intends for further actions to be taken outside the queue by marking the user as high risk. This check occurs later in the decision process because it applies only to organizations with specific procedures for responding to user risk that are not covered by the three options above or by auto-remediation – such as SOAR playbooks triggered to remediate high user risk. Unless such an external SOAR or remediation system is in place, this action offers no material benefit over the block user or reset password actions. Situations where an analyst may see this in play include:
- Detections Signaling Successful Logins: “Adversary in the Middle” combined with malicious or anonymous IP addresses may suggest confirming the user compromised, as that detection technology means an account could be compromised.
- Persistent Adversary Activity: Ongoing “Suspicious API Traffic” plus repeated MFA failures may guide an analyst toward confirming compromise, as it indicates the attacker has gained a foothold.
- High-Value Account Exposure: Access to an account with valuable credentials such as “Global Administrator” could influence a decision to confirm compromise, as the account could do more damage quickly and establish persistence.
Recognize when to Confirm User Safe
Confirming a user as safe acknowledges the same level of threat as dismissing user risk, with the addition that the analyst is signifying that external remediation beyond identity protection’s automatic risk reduction has been performed and the risk rating can be removed. In the same way as confirming a user compromised, it applies only to organizations with specific procedures for responding to user risk that are not covered by the three options above or by auto-remediation – such as SOAR playbooks triggered to remediate high user risk. Unless such an external SOAR or remediation system is in place, this action offers no material benefit over the dismiss user risk action. Situations where an analyst may see this in play include:
- Normal Usage Patterns Resuming: If IP addresses, locations, and usage revert to known baselines, confirming safety may be reasonable, as it shows no signs of ongoing compromise or abnormal activity.
- Cleared Suspicious Events: Once anomalies have proper explanations or are mitigated, safety confirmation could follow, as it demonstrates that all previously detected issues have been addressed.
- Successful Manual Remediation: When the risk has been remediated outside of identity protection, confirming the user safe is appropriate, as this activity cannot be automatically quantified by identity protection’s risk rating system and must be manually approved.
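The decision order described across the five sections above (block, then reset password, then dismiss, with the confirm actions substituting when external remediation procedures exist) can be condensed into a hypothetical triage sketch. The signal names here are illustrative stand-ins, not real queue fields.

```python
def triage(entry: dict, has_external_remediation: bool = False) -> str:
    """Map illustrative risk signals to one of the queue's response actions,
    following the decision order described in this operation."""
    compromised = entry.get("successful_attacker_login") or (
        entry.get("persistent_adversary_activity")
        and entry.get("high_value_account"))
    credentials_exposed = entry.get("leaked_credentials")
    benign = entry.get("verified_benign_explanation")

    if compromised:
        # Confirm actions only add value when SOAR/external playbooks exist.
        return ("confirm_compromised" if has_external_remediation
                else "block_user")
    if credentials_exposed:
        return "reset_password"
    if benign:
        return ("confirm_safe" if has_external_remediation
                else "dismiss_risk")
    return "continue_investigation"
```

This is a sketch of the reasoning only; real triage weighs the Basic Info, sign-in, and detection evidence described earlier rather than boolean flags.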
In managing the Identity Protection risky users queue, monitoring focuses on validating potential user identity security risks, and determining the correct response.
By evaluating relevant security data alongside any available contextual information, analysts can accurately decide on the validity of user identity risks. This approach ensures that instances where a response is warranted are identified and acted upon, while the wider range of false positives is dismissed to keep user risk accurately maintained.
Review the Risky Users Queue
Identifying risks requiring validation that have not yet been reviewed by an analyst is the primary goal of monitoring.
In the context of risky users queue management, a response action is always warranted.
This is due to the way risky user detections can affect login ability: an item in the queue indicates the system has assigned a user risk that was not automatically remediated and requires validation to maintain the efficacy of complementary identity security controls. Although the core scope of this operation is focused on the validation of, and potential response to, entries within the queue, specific organizational policies may warrant a specialized response to these detections, opting to leverage the “Confirm User Compromised” and “Confirm User Safe” actions, which abstract away the risk reduction abilities within the identity protection queue and ensure organizational requirements and procedures are used to reduce these risks.
Respond with Block User Action
This action applies to risky user entries which have been confirmed by an analyst as having a compromised user and should be blocked from access to mitigate the ongoing threat.
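In Entra ID terms, blocking maps to disabling the account, and revoking sessions cuts off tokens an attacker already holds. The sketch below builds, but does not send, both Graph requests; `/users/{id}` and `revokeSignInSessions` are real endpoints, while the transport and auth are omitted.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def block_user_requests(user_id: str) -> list[dict]:
    """Build the pair of requests that implement a block: disable the
    account, then revoke existing sign-in sessions."""
    return [
        {"method": "PATCH",
         "url": f"{GRAPH_BASE}/users/{user_id}",
         "body": {"accountEnabled": False}},
        {"method": "POST",
         "url": f"{GRAPH_BASE}/users/{user_id}/revokeSignInSessions",
         "body": None},
    ]

for r in block_user_requests("00000000-0000-0000-0000-000000000000"):
    print(r["method"], r["url"])
```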
Respond with Reset Password Action
This action applies to risky user entries which an analyst has confirmed as having a potentially compromised password; a password reset should be used to revoke attacker access and force reauthentication.
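A minimal sketch of the corresponding Graph request follows, assuming an admin-driven reset via the user's `passwordProfile` property (a real Graph shape); the temporary password value is a placeholder, and the request is built rather than sent.

```python
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def reset_password_request(user_id: str, temp_password: str) -> dict:
    """Build a PATCH that sets a temporary password and forces the user
    to choose a new one at next sign-in."""
    return {
        "method": "PATCH",
        "url": f"{GRAPH_BASE}/users/{user_id}",
        "body": {
            "passwordProfile": {
                "password": temp_password,
                # Forces reauthentication with a user-chosen password,
                # removing the known-compromised credential from play.
                "forceChangePasswordNextSignIn": True,
            }
        },
    }
```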
Respond with Dismiss User Risk Action
This action applies to risky user entries which an analyst has confirmed as being a legitimate use case (AKA a false positive), and the user risk should be dismissed with no remediation actions performed.
Respond with Confirm User Compromised
This action applies to risky user entries which have been confirmed by an analyst as being compromised, and the course of action for response is delegated to remediation actions not contained within the risky users queue.
Respond with Confirm User Safe
This action applies to risky user entries which an analyst has confirmed as safe, because the remediation required by the “Confirm User Compromised” action has successfully remediated the user risk.
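The three queue-level actions above map to POST actions on the Graph riskyUsers collection. The `dismiss` and `confirmCompromised` actions exist in v1.0; to our knowledge `confirmSafe` is beta-only, so that mapping is an assumption to verify against current Graph documentation. The sketch builds the request without sending it.

```python
def queue_action_request(action: str, user_ids: list[str]) -> dict:
    """Build a POST for a riskyUsers queue action; confirmSafe's beta
    placement is an assumption, not a confirmed contract."""
    versions = {
        "dismiss": "v1.0",
        "confirmCompromised": "v1.0",
        "confirmSafe": "beta",  # assumed beta-only; verify before use
    }
    version = versions[action]
    return {
        "method": "POST",
        "url": (f"https://graph.microsoft.com/{version}"
                f"/identityProtection/riskyUsers/{action}"),
        "body": {"userIds": user_ids},
    }
```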
Need Assistance?
Reach out to your Customer Success Manager to discuss how a Sittadel cybersecurity analyst can assist in managing these tasks for you. New to our services? Inquire about arranging a consultation to explore optimizing your Azure environment for painless management.