Posts

  • My GPT-3 AI copilot needs an AI copilot

    I had this interesting conversation with Genie AI that resulted in it changing its recommendation based on feedback I gave it from GitHub Copilot. Genie uses the same GPT-3.5 AI model used by ChatGPT, whereas GitHub Copilot is a highly customized GPT-3 model.

    The following sections of this post are the conversation I had with Genie, exported directly from the Genie AI VS Code extension using their “export [conversation] to markdown” option.

    What’s notable about my interaction with these two models is how GitHub Copilot intuited the exact block of code I needed on its first try, without any input from me, whereas I had to have a conversation with Genie that resulted in a similar block of code that initially did not work due to my use of Docker Desktop WSL2 integration. GitHub Copilot generates correct code snippets for me about 80% of the time, and I can cycle through alternatives using the alt + ] shortcut. But if it gets it wrong, I have no way to refine the generated code other than taking the closest match and editing it in a way that prompts further recommendations. Genie AI and ChatGPT, on the other hand, only generate correct code for me about 10% of the time, but I’m able to refine the code through the chat mechanism, which can result in some great code.

    So in my opinion, a developer should be using both of these tools, because GitHub Copilot is surprisingly accurate and has seamless integration with VS Code through the same mechanism we’re used to with IntelliSense. But when we’re in the ideation stage of building software, or if GitHub Copilot doesn’t understand our intent, then the chat feature of Genie or ChatGPT pays off.

    Oh, as a side note, what I was trying to implement was a way to give my Kubernetes-hosted ASP.NET services access to my local Azure CLI so that they could use my dev credentials to access the MS Azure services they needed. That way they’d work the same way they do when running from Visual Studio. Unfortunately, what I found is that the WSL-compatible Azure CLI executable is some sort of virtual file that Kubernetes doesn’t recognize as a valid file. I got this error when I attempted to deploy this solution to my Docker Desktop Kubernetes cluster:

    MountVolume.SetUp failed for volume “az-cli” : hostPath type check failed: /run/desktop/mnt/host/c/Program\ Files\ (x86)/Microsoft\ SDKs/Azure/CLI2/wbin/az is not a file

    It is indeed reported as a file by Windows Explorer and WSL2 Ubuntu. But Microsoft has some special virtual files it uses to blend the Windows host environment with the WSL environment. So I’m guessing this is one of those situations, and despite Docker Desktop using the WSL environment, it doesn’t recognize the virtual file as a real file.

    If you’re wondering whether the path is right, I copied it straight from my WSL-hosted Ubuntu environment. Kubernetes also successfully mounted the az-cli (renamed az-config) volume given to me by GitHub Copilot.
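
    For context, the plan hinged on the DefaultAzureCredential class from the Azure.Identity library, which is what my first question below is about. Here’s a minimal sketch of how it’s typically used from one of the services (the storage client and endpoint are illustrative):

    using Azure.Identity;
    using Azure.Storage.Blobs;
    
    // DefaultAzureCredential tries a chain of credential sources (environment
    // variables, managed identity, Visual Studio, VS Code, Azure CLI, ...) and
    // uses the first one that succeeds. Mounting my Azure CLI config into the
    // pods would let that chain fall through to my dev credentials.
    var credential = new DefaultAzureCredential();
    var blobService = new BlobServiceClient(
        new Uri("https://myaccount.blob.core.windows.net"), // illustrative endpoint
        credential);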

    You

    Does the DefaultAzureCredential class provided by the Azure.Identity library for dotnet use the Azure CLI credentials if available?

    Genie

    Yes, the DefaultAzureCredential class provided by the Azure.Identity library for dotnet does use the Azure CLI credentials if available. When the DefaultAzureCredential class attempts to authenticate a given Azure service client object, it first tries to authenticate using the local development environment’s VSCode or Visual Studio authenticated user account. If that doesn’t work, it will try to authenticate using credentials provided by Azure PowerShell modules. Finally, if neither of those approaches works, the DefaultAzureCredential class will check for the presence of an Azure CLI installation and use its credentials to authenticate with Azure.

    You

    Find problems with the following code

    volumes:
      - name: az-cli
        hostPath:
        path: ~/.azure
    

    Genie

    The YAML code provided seems to be creating a volume for Kubernetes. However, it contains an issue that may cause errors.

    • The tilde (~) is used as shorthand to represent the current user’s home directory path in shell environments like bash, but it may not work outside of them. Therefore, using ~/.azure in the hostPath section of the YAML file may cause issues when running on a non-shell environment.

    To fix this issue, you can replace ~/.azure with the absolute path to the host directory where Azure CLI credentials are stored. For example:

    volumes:
      - name: az-cli
        hostPath:
          path: /home/user/.azure #replace with actual absolute path
    

    Make sure to replace /home/user/.azure with the actual absolute path to your Azure CLI credentials directory.

    You

    Is that the correct path when using Docker Desktop for Windows?

    Genie

    No, the correct path to reference in your YAML file when using Docker Desktop for Windows would be different.

    On Windows machines, the default path for Azure CLI credentials is C:\Users\<username>\.azure. Therefore, you should use the following path in your YAML file:

    volumes:
      - name: az-cli
        hostPath:
          path: C:\Users\<username>\.azure #replace with actual username
    

    Make sure to replace <username> with your actual Windows username in the path value of the YAML file.

    You

    Why did GitHub copilot recommend: /run/desktop/mnt/host/c/Users//.azure

    Genie

    GitHub Copilot recommended /run/desktop/mnt/host/c/Users/<username>/.azure as the path for the Azure CLI credentials because it is the location where Docker Desktop for Windows maps the C: drive of the host machine, i.e., your Windows machine.

    Docker Desktop for Windows runs in a lightweight virtual machine (VM) that has its file system isolated from the host machine’s file system. To provide access to files and directories on the host machine, Docker Desktop for Windows uses a special feature called “bind mounts” that map directories from the host file system to directories inside containers.

    When you mount a bind mount in Docker Desktop for Windows by referencing a path like /run/desktop/mnt/host/c/Users/<username>/.azure, Docker will automatically map this path to the correct location on the host machine.

    So, while /run/desktop/mnt/host/c/Users/<username>/.azure might look strange, it would work as expected when used in a Docker container running on Docker Desktop for Windows.

  • Microsoft Edge Browser Sleepy Tabs

    I’ve been using the new(ish) Microsoft Edge Browser for the last couple of months, and I must admit I like it more than the competition due to its performance and power consumption.

    However, I’ve been having trouble using Microsoft Teams for Web in this browser due to one of its performance and power-saving features: Sleeping Tabs.

    This feature frees up the resources of any tab I have not interacted with in the last two hours, which pauses notifications from my MS Teams tab. This is of course a huge problem, because I need to be always available on Teams during work hours, and I’ve missed about a dozen messages as a result. But no longer, for I have found the solution to my problem.

    Disable Sleeping Tabs for a site

    You can disable Sleeping Tabs for individual sites, or more specifically domains, by following this handy guide: https://www.tenforums.com/tutorials/165266-add-remove-sites-sleeping-tabs-block-list-microsoft-edge.html

  • Using App Icons with Windows

    I just learned how to add an icon to my Windows application, and I was surprised by how hard it was to put the pieces together from various resources. So here’s my one-stop guide to application icons for Windows developers.

    Application Icon (Windows Forms)

    Each component with a window can have its icon set in the InitializeComponent or constructor methods. But there are several ways to add the icon file to the project.

    MyComponent.cs InitializeComponent()
    // myicon.ico added to project as an embedded resource
    this.Icon = new System.Drawing.Icon(typeof(MyEntrypointClass), "myicon.ico");
    
    // myicon.ico added to the project resource file (Resources.resx exposes it as an Icon)
    this.Icon = Properties.Resources.MyIconKey;
    
    // myicon.ico added to a component resource file (MyComponent.resx)
    var resources = new System.ComponentModel.ComponentResourceManager(typeof(MyComponentClass));
    this.Icon = (System.Drawing.Icon)resources.GetObject("MyIconKey");
    
    // myicon.ico added to project as "copy to output directory"
    this.Icon = new System.Drawing.Icon("myicon.ico");
    

    Any of these methods works fine, but I’ve listed them in order of my preference. I prefer not to use a resource file because, if I’m not supporting multiple languages, it adds unnecessary overhead to compilation and probably impacts runtime performance.

    Normally, I would prefer leaving the file external to the DLL to reduce the time it takes to load the DLL, meaning the last option I listed. However, in this case, the icon will most definitely be needed at startup, so embedding it makes the most sense: it’s simpler to manage the deployment with fewer files.

    Add/Remove Programs Icon (MSI)

    You can set the icon used in Add/Remove Programs in your WiX project’s WXS file:

    Product.wxs
    <Wix>
      <Product>
        <Icon Id="icon.ico" SourceFile="$(var.SolutionDir)\myicon.ico"/>
        <Property Id="ARPPRODUCTICON" Value="icon.ico" />
        <!-- ... rest of the Product element ... -->
      </Product>
    </Wix>
    

    The installer icon is more complicated to set because it needs to be a banner bitmap, so I didn’t bother figuring that out.

  • Windows Hyper-V DNS Resolution

    I’ve been wondering why I can’t reference either my guest OS or my host OS by DNS name. At first I thought it was my DNS server, and there was one oddity there, but mostly it was an issue with the “default” virtual switch provided by Hyper-V. Eventually, after reading a dozen forum posts, I learned that I needed to use a “public” (external) virtual switch.

    Problem

    Reference Guest/Host by DNS name.

    Solution

    1. Create a public virtual switch:
      1. Open the Hyper-V Manager
      2. Under Actions on the right, select “Virtual Switch Manager”
      3. You should only see one or two Switches:
        • Default Switch - an internal switch which allows internet access but uses a private subnet for the VMs
        • WSL (optional) - an internal switch which doesn’t allow internet access. Used by the Windows Subsystem for Linux 2 (WSL2)
      4. Select “New virtual network switch”
      5. Choose External for the type
      6. Click “Create Virtual Switch”
      7. OK
    2. Assign the VM to the public virtual switch:
      1. Select the VM
      2. Settings…
      3. Network Adapter
      4. Virtual switch: External
      5. OK

    That’s it! You can now make network requests to the host and guest using their hostname.

    Still Didn’t Work

    If this didn’t work for you, then you’re going to need to ensure that your network has its own DNS server and that it is set as the primary DNS for your network.

    Additionally, the DNS server itself should not be configured to forward local DNS queries. It must handle those itself.

  • Tax Preparation Services

    Are there any tax preparation services that actually provide useful services? I’ve used five, and none of them import more than my name and address from my previous tax filings. It seems like all they do is provide a user interface on top of the IRS tax forms that’s marginally better than filling them out myself. I think I might be better off making my own Excel or Google spreadsheets for the tax forms I use, so that I can just copy and update them each year.

    I will continue to use the tax prep services because of their audit protection insurance. But come on, can’t they pre-fill some of my tax return?

    Features I’d like to see

    Asset Depreciation History

    The most useful and easy-to-implement feature that I’d like to see from any of the tax services is importing my depreciating assets from previous tax returns. I have to enter their depreciation history each year, so it’s simply a clone of previous years, plus what I’m going to depreciate this year. And the tax service could suggest the expected depreciation for this year while giving me the option to modify it.

    Automatic Imports

    Most of the info we fill in on our taxes is available online on various websites. I’ve seen some progress on this front from TurboTax, but it’s pretty minimal. Last time I checked, they only imported a W2 from ADP. And that process was more complicated than just entering the info from the W2 I was sent in the mail.

    Services they could import data from:

    • Investment and retirement account providers
      • Services: Fidelity, Vanguard, Prudential
      • Tax related accounts: 401k, IRA, Roth IRA, Realized gains/losses from personal investments
    • Banks
      • Basically all banks have online account access, and many provide expense classifications. So they could import all my charitable donations, deductible expenses, and depreciable asset purchases.
      • They could also run some analyses to determine if I’m forgetting to include misc income or if I missed classifying an expense as tax deductible.
    • W2 from ADP and Workplace
    • Sales
      • Income from Etsy, RedBubble, Threaded, etc

    Services I’ve Used

    • Credit Karma Taxes / CashApp Taxes
      are free and have a pretty good interface. But they don’t provide as much help or as many extra services as the two paid services I’ve tried. I do like their interface better than TaxACT’s though.
    • TaxACT
    • TurboTax
      has the best features, but they’re also the most expensive by a large amount; they also use deceptive, almost scam-like marketing tactics, and they’re a major reason why the IRS doesn’t just send us a bill.
  • HTTPS Redirects

    I’ve had to deal with the issue of redirecting a domain on multiple platforms, and it was a pain until today.

    Despite the silly name, Redirect Pizza is amazing. It does just this one thing, and it does it flawlessly. I had my domain redirected in under 5 minutes.

    Redirect Pizza

    This service is simply elegant. Their landing page asks where you want to redirect from, and where to. It’s just two input boxes and a submit button. They then ask you to create an account using a variety of popular platforms, or your email address, and then you’re presented with instructions for modifying your DNS record to complete the redirect setup.

    It even verified that I had finished my DNS configuration within a minute of my making the change; no page refresh required. Although I did have to refresh the page to see the overall domain status update to completed.

    I’m using this service for the blog you’re looking at right now. It used to be hosted on Blogger at https://blog.carlinscott.com, but thanks to Redirect Pizza, it’s hosted here on GitHub Pages.

    GitHub Pages

    This site is hosted on GitHub Pages, which only provides apex-domain and HTTPS redirects:

    1. http://carlinscott.com > httpS://carlinscott.com
    2. https://carlinscott.com > https://www.carlinscott.com

    AWS

    You can do http > https redirects on most AWS services, but the other kinds of redirects are more difficult.

    The biggest issue with AWS is that they have no solution for redirecting an apex (aka bare, naked) domain that doesn’t require you to transfer DNS control to them from your domain registrar (GoDaddy, NameCheap, etc).

    I worked around that issue by deploying a public EC2 server to handle the redirect using nginx. But it has been a pain to keep the Let’s Encrypt cert up to date. I have a cron job set up to renew the cert every 2 months, but it runs, throws no errors, and doesn’t renew the cert. When I run it manually, though, it works perfectly.

    That solution costs $6/mo.

    If you can forward DNS to AWS, then you can use a Load Balancer Listener Rule to perform whatever redirects you need.

    This costs a minimum of $18/mo, but if you’re using this solution, you probably already needed a load balancer. So it’s basically free.

    Naked SSL

    This service looks even simpler than Redirect Pizza, but it’s not as easy to use, and they provide little to no info about their service without signing up. I used their domain redirect tester without signing up, and it said it didn’t support my domain without providing an explanation.

    I don’t recommend this service.

    Redirection IO

    This service is really complicated to use. I spent about an hour trying to set up my free redirect from blog.carlinscott.com to www.carlinscott.com, and I gave up. They require you to write rules to match the entire URL using their weird syntax that I couldn’t figure out. They could have just let me use Regex, or provided a catch-all example, but they didn’t. I also wasn’t sure how to deploy my solution after I figured out the matchers.

    I think this service could be useful for admins who are managing huge and complex networks of related websites. Their redirect matching engine is sophisticated and provides custom verification scenarios so that you can ensure that it will work the way you want.

    I only recommend looking at this service for enterprise IT architects.

    301 Redirect Website

    This is a free service that only provides HTTP redirects. So not terribly useful in the age of HTTPS Everywhere.

  • Executing raw SQL with EntityFramework

    Throughout the interwebs, you will find a solution for executing raw SQL using EntityFramework:

    dbContext.DbSet.FromSql(queryString)
    

    This method does not actually execute raw SQL. It interprets the SQL query and then attempts to generate a LINQ query from it. This will fail if you have SQL methods in your query string. It can also fail for some parameter types. Finally, even if it works, it will wrap your query in another query in order to make it compatible with LINQ, whether or not you actually use LINQ.

    In general, this is not a useful method in my opinion.
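
    If you actually need raw SQL, a more reliable route is to drop down to the ADO.NET connection that the DbContext already manages. Here’s a minimal sketch (assuming EF Core; the query and parameter are illustrative):

    using Microsoft.EntityFrameworkCore;
    
    // Inside an async method, with your DbContext in scope. Nothing here is
    // parsed or rewritten by the LINQ provider; the SQL goes to the database as-is.
    var connection = dbContext.Database.GetDbConnection();
    await connection.OpenAsync();
    
    using var command = connection.CreateCommand();
    command.CommandText = "SELECT COUNT(*) FROM Orders WHERE CreatedOn >= @since";
    
    var since = command.CreateParameter();
    since.ParameterName = "@since";
    since.Value = DateTime.UtcNow.AddDays(-30);
    command.Parameters.Add(since);
    
    var count = (int)(await command.ExecuteScalarAsync());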

  • Hiding pre-production ASP.NET sites from robots (search engines, web crawlers)

    As software developers, we love to put things out into the world for people to see and play with. Sometimes there’s a manager behind us, or a business analyst in front of us asking us to hurry up.

    This drive to deploy can lead to us skipping some important steps, such as preventing search engines from publishing our test and demo environments. To avoid this, we can employ two different tactics.

    1. robots.txt

    This is a file that has been around for ages with a singular purpose: to tell robots what to do when they encounter your website. Not all robots listen to our instructions, but we don’t necessarily need to worry about those. The most problematic robots behave well but have a big impact on us. These robots are search engine web crawlers.

    When a search engine crawls your website, it publishes your website to people searching for things related to your site. This may not impact you, but you could end up with a real customer accidentally visiting your beta or test site. You may also end up with a lot more curious people visiting your test site than you’d like.

    ASP.NET robots.txt.cshtml

    If you’re using a modern version of ASP.NET, you can create a simple Razor Page that will tell robots to go elsewhere for your non-production sites.

    This file should be placed with your other Razor Pages in the Pages directory in your ASP.NET website project. You only need to have MVC enabled in Startup.cs to use this, and it doesn’t require any controller logic because Razor Pages are magical.

    What this Razor Page does is serve a robots.txt file from your website’s root path. For non-production environments, the robots.txt file tells robots to avoid interacting with the entire site. For production, it tells them to avoid crawling /hidden/path, which is just a placeholder for any routes on your site that you don’t want indexed by a search engine.
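
    Since the page is short, here’s a minimal sketch of what robots.txt.cshtml might look like (the environment check and the /hidden/path route are illustrative):

    @page
    @inject Microsoft.AspNetCore.Hosting.IWebHostEnvironment Env
    @{
        // Pages/robots.txt.cshtml is served at /robots.txt because Razor Pages
        // derive the route from the file name (minus the .cshtml extension).
        Layout = null;
        Response.ContentType = "text/plain";
    }
    User-agent: *
    @if (Env.EnvironmentName == "Production")
    {
        @:Disallow: /hidden/path
    }
    else
    {
        @:Disallow: /
    }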

    2. Web Application Firewall (WAF)

    You may be more worried about security bugs on your non-production websites than about them showing up in search results. You may also have an admin portal that you only want accessible to your coworkers and business partners.

    The best way to protect sensitive areas of your website is definitely not a robots.txt. For one thing, it doesn’t prevent a person or a robot from entering the restricted area; it just tells them that they shouldn’t. It’s like a really low-key keep-out sign. The other problem with listing a restricted route in robots.txt is that it highlights the sensitive area for hackers to probe.

    A WAF can restrict access to sections of your website to specific blocks of IPs. It is quite common for companies to have a section of their website available only to people on their network. To accomplish this, they create a WAF routing rule that only allows their block of IP addresses to access certain routes.

    I will not go into the details of doing this as it is dependent on where you’re hosting your website, and the web host platform you are using. However, I think the terminology I have provided will help you find what you need on the internet.

  • Check the boundaries of your function output

    I recently spent about 6 hours trying to figure out why my answer to the Count Triplets problem on Hacker Rank was failing 4/12 of their test cases.

    After failing to create a test case that could fail my solution, I decided to spend five Hackos to reveal one of the failing test cases.

    It turned out that my return type wasn’t large enough to accommodate the correct answer. I should have known from the problem statement that the output would exceed what int32 can hold, since I was counting combinations of values drawn from an array large enough that the number of combinations blows well past the int32 boundary. I also ignored a clue given to me by the skeleton code Hacker Rank provided.

    The skeleton code they provide for the solution in C# uses the long variable type everywhere instead of int, even though all the input values are constrained within int numeric space; all the input constraints were below 10^9 (1 billion) and int supports roughly up to 2x10^9 (2 billion). I changed all of the instances of long to int because I thought that would save considerable memory allocation space and some execution time. However, I should have noticed that the output could be much larger than an int can hold due to what is being calculated.

    The problem was basically to find all sequential sets of three values within an array that could have up to 100,000 (10^5) values in it. That means a rough upper limit on the answer is (10^5)^3 = 10^15. An int can only hold about 2x10^9, but a long can hold about 9.2x10^18. So a long is the proper data type to use.
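
    A quick sanity check of those bounds (the numbers mirror the problem constraints):

    // n values, n up to 10^5; the number of (i, j, k) triplets is bounded by
    // n^3 = 10^15, which dwarfs int.MaxValue (~2.1x10^9) but easily fits in a
    // long (max ~9.2x10^18).
    long n = 100_000;
    long upperBound = n * n * n;                    // 1,000,000,000,000,000
    Console.WriteLine(upperBound > int.MaxValue);   // True
    Console.WriteLine(upperBound < long.MaxValue);  // True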

    All I had to do to get my solution to pass the remaining four test cases was to change the type of any variable holding the return value or an intermediate value to long.

    What I learned from this issue is that I need to make sure that I’ve definitely figured out the output bounds for a function I’m testing based on the most extreme situation possible for the code. I’ve known this for many years as it applies to input values, but I forgot to apply it to the output.

    Also, my reasoning behind “optimizing” the code was wrong. An int is 32 bits on both 64-bit and 32-bit processors, but a 64-bit processor performs operations using 64 bits of precision anyway. So using an int rather than a long saves memory but not execution time.

  • Programming: Sanitizing your inputs

    Sometimes the users of our applications manage to enter invalid data. Other times, we create bugs that introduce invalid data. Whatever the case may be of how it was introduced, there are a series of precautionary measures we can use to prevent invalid data from affecting application functionality and performance.

    The first line of defense is of course “form validation”. Ideally, all user entry mistakes are caught at this stage. Form validation involves configuring rules for your UI architecture (Angular/React/etc) to interpret, or writing your own validation functions. Form validation should always describe the issue to the user so that they can fix their mistake. If it doesn’t do this well, then expect to receive many support phone calls and have your manager breathing down your neck about unhappy customers and expensive customer support costs.

    The second line of defense is “backend validation”. This should include all security-focused frontend validation, plus any additional validation the backend can do; the backend has access to more information about the state of the system, such as other data records that can inform further validation of the entered data. Your service architecture should provide a framework for this type of validation, but you may also end up writing your own code if your framework doesn’t provide it, or if it can’t handle certain types of validation, such as cross-referencing other records in the database.
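
    In ASP.NET Core, for example, the backend layer might look roughly like this (the DTO, its rules, and the uniqueness check are illustrative):

    using System.ComponentModel.DataAnnotations;
    using Microsoft.AspNetCore.Mvc;
    
    public class NewCustomerDto
    {
        // These rules should mirror the frontend form validation.
        [Required, StringLength(100)]
        public string Name { get; set; }
    
        [Required, EmailAddress]
        public string Email { get; set; }
    }
    
    [ApiController]
    [Route("customers")]
    public class CustomersController : ControllerBase
    {
        [HttpPost]
        public IActionResult Create(NewCustomerDto dto)
        {
            // [ApiController] automatically returns a 400 with field-level errors
            // when the DataAnnotations above fail. Checks that need system state,
            // such as uniqueness against existing records, belong here:
            // if (_customers.EmailExists(dto.Email)) return Conflict("Email already registered.");
            return Ok();
        }
    }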

    The final line of defense is “data access layer validation”. This type of validation occurs right before writing a record or records to the database. It is the lightest and most rudimentary form of validation. The only concern at this layer is whether the fields required for properly storing the record are present and valid. The errors caught at this stage are always dev-team errors: either the earlier validation layers failed to catch a user error, or a developer made some other mistake earlier in the call stack.

    You may have noticed that I made no mention of data validation-on-read. This is because you shouldn’t do this. You should catch bad data before it reaches your database, or else you can expect a costly customer support incident that requires a developer to fix. Also, fixing data in place is a delicate procedure that may result in further damage to the data in the database.

    But don’t we want to know about bad data in the database? Yes, we do. However, if you perform data validation-on-read you will prevent your users from being able to use the system or fix the issue themselves. Yes, your users are intelligent humans and might be able to fix the problem entirely on their own, but only if you let them. Also, customer support may be able to fix the issue, but only if they can retrieve the data to update it. Finally, if you have a way to detect the issue on read, then why can’t you detect it on write instead? So put that data validation logic before writing to the db so that someone besides a developer can fix the problem when and if it arises.

    your users are intelligent humans and might be able to fix the problem entirely on their own, but only if you let them.

    An example of validation-on-read that I’ve seen in C# code is the use of the LINQ methods Single() and First(). Don’t use these methods when reading or returning data to the end user. They throw exceptions when your assumption about the data turns out to be wrong, which prevents the data from making it to the end user. It would be better to send the user incomplete data than no data at all. They will know there’s a problem if some data is missing, and can either re-enter it or call customer support to fix the issue. So use SingleOrDefault or FirstOrDefault instead, and smooth over any potential null-reference issues that might arise.

    It would be better to send the user incomplete data than no data at all.
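
    To make that concrete, here’s a sketch of the difference (the entity and property names are illustrative; assumes System.Linq):

    // First() throws InvalidOperationException when the assumption fails
    // (here, that every order has at least one line item), so the user gets
    // an error instead of the rest of their order.
    var firstLine = order.LineItems.First();
    
    // FirstOrDefault() returns null instead; the rest of the order still
    // reaches the user, and the missing piece can be handled downstream.
    var firstLineOrNull = order.LineItems.FirstOrDefault();
    if (firstLineOrNull != null)
    {
        // render the line item
    }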

    It is my hope that this article will lead to fewer hot database fixes and less system downtime. Maybe it will also get software developers thinking a little more about how their users might be able to dig their way out of their own messes, or even perhaps your mess.

subscribe via RSS