The application lifecycle doesn’t finish with deployment. Feedback is an important element of refining an application, whether that is exception detection and diagnosis or improving the user experience. In this course, we will look at a suite of services that capture a vast array of feedback data, ranging from exceptions to client and server telemetry. This data can be turned into easily digestible information that can trigger alerts and be fed back into the development lifecycle as work items.
This course begins by describing what feedback is, and the types of feedback for improving application performance and usability. It then moves on to how we can integrate feedback into the software development lifecycle and what tools we can use to simplify that task. Finally, we will look at optimizing feedback mechanisms to get meaningful data from feedback noise.
For any feedback and questions relating to this course, please contact us at firstname.lastname@example.org.
Topics covered in this course include:
- Designing application and user feedback loops
- Setting up crash and event notifications for App Center
- Setting up work item integration from App Center
- Making sense of App Center’s analytic and diagnostic information
- Adding Application Insights Telemetry to an application
- Setting up Application Insights alerts
- Setting up work item integration from Application Insights
- Designing feedback dashboards
- Viewing Application Insights Telemetry data
- Discussing types of user feedback and how they can be captured
- Ways to baseline and filter feedback data
This course is intended for:
- People preparing for Microsoft’s AZ-400 exam
- App developers
- Project managers
To get the most from this course, you should have some experience with Microsoft Azure and application development, as well as knowledge of software project management concepts.
I’m sure you are aware of the concept of baselines: a known, preferably good, state that we can measure against to decide whether system performance is optimal. In the case of Application Insights alerts, we can view a summary of generated alerts over a specified time span within alert monitoring, with a breakdown of alerts by severity. Clicking on a severity line takes you to a detail screen showing individual alert instances, and within this page there are various drop-down lists for filtering the alert list. Smart alert groups have recently been introduced, where machine learning algorithms group alerts by similarity or correlated occurrence.
To establish a baseline of your telemetry data you can review the app’s analytics logs: under Monitoring, click Logs. Raw log data is vast and descriptive (basically, there is a lot of it), and with that in mind Azure provides the Kusto Query Language (KQL) to interrogate the logs. I can hear you: not another syntax to learn. The good news is that it’s really simple, and the IntelliSense within the browser works a treat.

In the left schema pane are your tables, and if you expand one, as I have done here for requests, you see its fields. A query starts with the table name, like a from clause without the from. You add additional clauses with the pipe character; each clause pipes into the next one, so their order is important. You can have multiple where clauses, or join conditions with ands and ors, as you would in SQL. The project clause is analogous to select in SQL: these are the fields you are projecting into the result set. Finally, in this example we have a summarize clause, which is analogous to a group by statement. So, this query is a count of requests to URLs by result code, where I have excluded URLs that have ‘lib’, ‘css’ and ‘js’ in the path, over the last 14 days. The results can be viewed as a table or a graph. These queries can be saved and easily executed at a later date, making this an excellent tool for telemetry comparison over time.
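The query walked through above can be sketched in KQL along these lines (the field names come from the standard Application Insights requests schema; the exclusion strings and time span follow the example described):

```kusto
// Count of requests per URL and result code over the last 14 days,
// excluding static assets whose path contains 'lib', 'css' or 'js'.
requests
| where timestamp > ago(14d)
| where url !contains "lib" and url !contains "css" and url !contains "js"
| project url, resultCode            // the fields projected into the result set
| summarize requestCount = count() by url, resultCode
| order by requestCount desc
```

Saved and re-run periodically, a query like this gives you comparable snapshots of request behavior over time, which is exactly what a baseline needs.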
Depending on the size of your application, you could be looking at quite a lot of feedback data coming in, though hopefully not too many crash reports. There is a real possibility of being swamped by data, resulting in either paralysis by indecision or hyper-responsiveness and its consequent fatigue: the boy-who-cried-wolf scenario. To avoid this, you need to establish baselines so you can decide what is real and what is just noise.
Within Application Insights you have several options for filtering alert messages. Firstly, there is the basic static alert that I demonstrated earlier, where you set the threshold value. This is fine when dealing with absolutes or critical events, but in most cases it has limited usefulness. For more complex and changeable situations we can set dynamic alerts. These alerts are dynamic on several levels. There is the obvious scalability factor as your user base grows. In many cases there will also be a seasonal component, whether that is the app being used inside or outside business hours, or usage patterns shifting as your primary markets get up or go to bed. According to Microsoft, none of these things matter because, and I paraphrase, “threshold detection leverages advanced machine learning from historical data”. They mention the word deviation more than once in their literature, which leads one to think of standard deviations from normally distributed data. However, in my experience not all data comes in the form of a bell curve, so it must be a bit more sophisticated than that. Instead of setting a threshold value, you set a threshold sensitivity. High sensitivity gives the most alerts, as the smallest departure from historical behavior will trigger one. Medium sensitivity, which is the default, is more balanced, while low sensitivity requires more and larger deviations from past data to trigger an alert.
When you select dynamic logic, you get the ability to set the number of violations that must occur within a time frame before an alert is triggered. I recently had a situation with a service communicating with a supplier’s service every minute. If the communications went down, it was critical for operational staff to know. But for whatever reason, probably network related, there were occasional glitches, resulting in sporadic outage alerts. By setting the aggregation type to count, the sensitivity to high, and, in Advanced settings, the number of violations to 6 in the last 10 minutes, an alert is only sent when communication has really gone down, not on the odd spurious failure.
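The same “ignore the odd glitch” idea can also be expressed as a log query. This is only a sketch under assumed names (the dependency name SupplierService is hypothetical, and the failure count mirrors the 6-in-10-minutes setting above); a log alert rule scheduled against a query like this returns rows, and therefore fires, only once failures have accumulated:

```kusto
// Failed calls to the supplier's service in the last 10 minutes
// ("SupplierService" is a hypothetical dependency name).
// The final where clause suppresses one-off network glitches:
// the query returns a row only when at least 6 calls have failed.
dependencies
| where timestamp > ago(10m)
| where name == "SupplierService" and success == false
| summarize failures = count()
| where failures >= 6
```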
Hallam is a software architect with over 20 years’ experience across a wide range of industries. He began his software career as a Delphi/Interbase disciple but switched his allegiance to Microsoft and its deep and broad ecosystem. While Hallam has designed and crafted custom software using web, mobile and desktop technologies, he believes good quality, reliable data is the key to a successful solution. The challenge of quickly turning data into useful information for digestion by humans and machines has led Hallam to specialize in database design and process automation. Showing customers how to leverage new technology to change and improve their business processes is one of the key drivers keeping Hallam coming back to the keyboard.