Tuesday, April 23, 2019

Subscribe Android phone to Facebook calendar/events

What you'll need to do:
  1. On a non-mobile device, open a browser that supports sending links to your mobile device.
    • I use Firefox, but Chrome and probably Opera have this feature as well.
  2. Navigate to https://www.facebook.com/events/
  3. In the lower right corner of the page you'll see a box with links for "Upcoming Events" and "Birthdays".
  4. Right click on the "Upcoming Events" link and choose the "Send to device" option.
  5. On your phone you should have a notification to open the link. Use it.
  6. Tap and hold on the address bar to copy the address.
  7. Open the ICSx⁵ app (install it from F-Droid or Google Play if you don't already have it)
  8. Tap the + button
  9. Paste the link into the URL field
  10. You don't need authentication, as Facebook included an API key in the link for you
  11. Finish the process
  12. Depending on the calendar app you use, you may need to enable showing the Facebook Events calendar. 
    • I use Simple Calendar, which required me opening the app's settings to enable the calendar.
  13. You may need to manually refresh the calendar after enabling it. I had to refresh it several times before it showed my events.
Note that if you change your Facebook password, you will have to redo this process, as the API key provided in the link from Facebook will become invalid.

Programming: Sanitizing your inputs

Sometimes the users of our applications manage to enter invalid data. Other times, we create bugs that introduce invalid data. However it was introduced, there is a series of precautionary measures we can take to prevent invalid data from affecting application functionality and performance.

The first line of defense is of course "form validation". Ideally, all user entry mistakes are caught at this stage. Form validation involves configuring rules for your UI framework (Angular/React/etc.) to interpret, or writing your own validation functions. Form validation should always describe the issue to the user so that they can fix their mistake. If it doesn't do this well, expect to receive many support phone calls and have your manager breathing down your neck about unhappy customers and expensive customer support costs.
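To make that concrete, here is a minimal sketch of a hand-rolled validation function that describes each problem to the user. I'm using C# to match the LINQ discussion later in this post; the field names and messages are invented for illustration, not taken from any particular framework.

using System.Collections.Generic;

public static class FormValidation
{
    // A hand-rolled validator: returns user-facing messages describing each problem.
    public static List<string> ValidateRegistrationForm(string email, string displayName)
    {
        var errors = new List<string>();

        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
            errors.Add("Please enter a valid email address.");

        if (string.IsNullOrWhiteSpace(displayName))
            errors.Add("Please enter a display name.");
        else if (displayName.Length > 50)
            errors.Add("Display name must be 50 characters or fewer.");

        return errors; // An empty list means the form can be submitted.
    }
}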

The second line of defense is "backend validation". This should include all security-focused frontend validation, plus any additional validation the backend can do; the backend has access to more information about the state of the system, such as other data records that can inform further validation of the entered data. Your service architecture should provide a framework for this type of validation, but you may end up writing your own code if your framework doesn't provide it, or if it can't handle certain types of validation, such as cross-referencing other records in the database.
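As a sketch of the kind of cross-record check only the backend can make, imagine an order validator that confirms the referenced customer actually exists. ICustomerRepository and these method names are hypothetical stand-ins for whatever data access your service already provides.

using System.Collections.Generic;

public interface ICustomerRepository
{
    bool Exists(int customerId); // however your service actually looks records up
}

public class OrderValidator
{
    private readonly ICustomerRepository _customers;

    public OrderValidator(ICustomerRepository customers)
    {
        _customers = customers;
    }

    public List<string> Validate(int customerId, decimal total)
    {
        var errors = new List<string>();

        // Repeat the security-relevant frontend checks; never trust the client.
        if (total <= 0)
            errors.Add("Order total must be greater than zero.");

        // A check only the backend can make: does the referenced record exist?
        if (!_customers.Exists(customerId))
            errors.Add($"Customer {customerId} does not exist.");

        return errors;
    }
}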

The final line of defense is "data access layer validation". This type of validation occurs right before writing a record or records to the database. It is the lightest and most rudimentary form of validation. The only concern at this layer is whether the fields required for properly storing the record are present and valid. The errors caught at this stage are always dev team errors: either the earlier validation layers failed to catch a user error, or a developer made some other mistake earlier in the call stack.
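A minimal sketch of what that last-line-of-defense check might look like right before the write; OrderRecord and EnsureWritable are invented names for illustration.

using System;

public class OrderRecord
{
    public int CustomerId { get; set; }
    public DateTime CreatedUtc { get; set; }
}

public static class OrderWriter
{
    // Last line of defense: only the bare minimum the database needs to store the row.
    public static void EnsureWritable(OrderRecord record)
    {
        if (record == null)
            throw new ArgumentNullException(nameof(record));
        if (record.CustomerId <= 0)
            throw new InvalidOperationException("OrderRecord is missing a valid CustomerId.");
        if (record.CreatedUtc == default)
            throw new InvalidOperationException("OrderRecord is missing CreatedUtc.");
        // Anything that throws here is a dev team error: it should have been caught earlier.
    }
}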

You may have noticed that I made no mention of data validation-on-read. That's because you shouldn't do it. You should catch bad data before it reaches your database, or else you can expect a costly customer support incident that requires a developer to fix. Also, fixing data in place is a delicate procedure that may result in further damage to the data in the database.

But don't we want to know about bad data in the database? Yes, we do. However, if you perform data validation-on-read you will prevent your users from being able to use the system or fix the issue themselves. Yes, your users are intelligent humans and might be able to fix the problem entirely on their own, but only if you let them. Customer support may also be able to fix the issue, but only if they can retrieve the data to update it. Finally, if you have a way to detect the issue on read, then why can't you detect it on write instead? So put that data validation logic before writing to the db, so that someone besides a developer can fix the problem when and if it arises.
An example of validation on read that I've seen in C# code is the use of the LINQ methods Single() and First(). Don't use these methods when reading or returning data to the end user. They throw exceptions when your assumption about the data turns out to be wrong, preventing the data from making it to the end user. It would be better to send the user incomplete data than no data at all. They will know there's a problem if some data is missing, and can either re-enter it or call customer support to fix the issue. So use (Single/First)OrDefault instead and smooth over any potential null reference issues that might arise from that.
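Here's a small sketch of that advice in practice; the Customer shape and helper method are made up for illustration.

using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public static class OrderReader
{
    // Prefer FirstOrDefault: if our assumption about the data is wrong, the user
    // still gets the rest of the record instead of an exception and no data at all.
    public static string GetCustomerName(int customerId, IEnumerable<Customer> customers)
    {
        var customer = customers.FirstOrDefault(c => c.Id == customerId);

        // Smooth over the potential null; a blank name signals the gap to the user
        // (or to customer support) without blocking the read.
        return customer?.Name ?? "";
    }
}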
It is my hope that this article will lead to fewer hot database fixes and less system downtime. Maybe it will also get software developers thinking a little more about how their users might be able to dig their way out of their own messes, or even out of yours.

Thursday, September 27, 2018

Sending an email with many embedded images

I recently needed to send an email that included many (~10) high-resolution images, and I wanted thumbnails of those images inline with the email text so that I could comment on them.

The images could not be embedded directly because they exceeded the email size limit of most email servers, and simply inserting links would have made the email difficult to follow.

So to draft an email with inline thumbnails linking to high-resolution images, I did the following:

 1. Create thumbnails of the images in the desired size for embedding in the email.
 2. Find a service with static link support to host all the images.
 3. Upload all the images (original and thumbnails) to the service.
 4. For each image in the email, insert an image tag sourcing the thumbnail, wrapped in an anchor tag referencing the full-sized image.


 Detailed Steps

Below you will find details of how I carried out each of the steps listed above.

1. Create Thumbnails

I used a handy node.js package that provides a command line interface. It's called node-thumbnail and it works great.

Pre-Requisites: node.js
Steps:
  1. Install node-thumbnail: npm install --global node-thumbnail
  2. Ensure only images are in the directory containing your images; node-thumbnail can't handle other file types.
  3. Open the image directory in a command prompt
  4. Generate thumbnails for the current directory and put them in the current directory with _thumb appended to the names, using 250 pixels for the width: thumb .\ .\ -w 250
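For example, assuming the folder contains photo1.jpg and photo2.jpg, that command should leave you with photo1_thumb.jpg and photo2_thumb.jpg (each 250 pixels wide) sitting alongside the originals.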

 

 2. Setup Static Linking Host

For this I used Azure Blob Storage with the Static Website feature enabled. Here are the steps to do that:
  1. Sign up or log in to https://portal.azure.com
  2. Click "Storage Accounts" on the left menu
  3. Click Add or select an existing account
  4. Use default settings for everything and pick a name that will be part of the domain where your files are accessible to the internet.
  5. After creating and opening the storage account, click on the "static website (preview)" setting.
  6. Enable it
  7. Open the $web Storage Container linked to on the same page
  8. This is where you'll upload your images
I wouldn't worry too much about cost here, since blob storage costs on the order of cents per GB of data transfer. So it'll likely be covered by the free credits Azure offers to new signups.

 

3. Upload Images

I just used the Azure website for uploading the files but there are probably other ways.
  1. Open the $web Storage Container inside the Storage Account you set up earlier
  2. Click Upload
  3. Select the images you created earlier (originals and thumbnails) and confirm the upload

 

4. Embedding Images in an Email

This part should be pretty easy but varies wildly between email clients. Here's how to do it in Thunderbird:
  1. Start a new email
  2. Click into the body of the email
  3. Click the image icon in the upper right directly above the body
  4. Choose "image" from the drop-down menu
  5. Go into the Azure $web storage container
  6. Select the thumbnail image you'd like to insert into the email
  7. Click the copy url button
  8. Paste that into the Image Location box in Thunderbird
  9. Enter "alternate text" describing the image
  10. Open the Link tab
  11. Repeat steps 6-8 but select the full sized image instead
  12. Repeat steps 3-11 for each image you want to insert
This process can be quite a bit faster if you're not afraid of HTML:
  1. Start a new email
  2. Click into the body of the email
  3. Click Insert>HTML in the menus at the top of the window
  4. Insert a series of snippets like the following, replacing the links with the URLs you get from Azure:
<a href="https://blobname.blob.core.windows.net/$web/imagename.jpg">
  <img src="https://blobname.blob.core.windows.net/$web/imagename_thumb.jpg" />
</a>

Monday, July 23, 2018

Secure HTTPS Web Interface for ASUS Routers

ASUS is kind enough to provide Let's Encrypt support on their newer routers, but those of us with an older model can still use it after quite a bit of hackery.

Pre-Requisites

  • You own a domain
  • You have DDNS enabled for that domain and it's pointed at your network
  • You have Linux, Windows Subsystem for Linux, or MacOSX installed on your computer
  • You have git installed
  • You have telnet installed

Getting The Certificate

I recommend using a CA with longer-lived certs than Let's Encrypt, because the renewal process takes about 30 minutes of your time each time and Let's Encrypt certs only last three months; four renewals a year works out to roughly 2 hours. By comparison, you can buy a year-long cert from NameCheap.com for $9. So, is 2 hours of your time worth $9? When it gets close to time for renewing my LE cert I'll update this post to use Namecheap instead, unless someone provides a better option.
  1. Configure router to forward HTTP/S connections to your computer
    1. Navigate to http://router.asus.com/Advanced_VirtualServer_Content.asp; that's your router btw.
    2. Login
    3. Add two entries to the port forward list:
      1. HTTP,,80,your computer's ip address,80,tcp
      2. HTTPS,,443,your computer's ip address,443,tcp
    4. Apply changes
  2. Configure your computer's firewall to allow inbound connections to the HTTP/S ports
    1. Windows
      1. Press the Windows key or click on the icon in the lower left of the screen
      2. Type: Advanced Security
      3. Press enter or click on the firewall option in the search results
      4. Click Inbound Rules on the left
      5. Click New Rule.. on the right
      6. Fill in the same port info as in step 1.3 above (there's no need to specify your computer's IP address here, obviously)
  3. Install letsencrypt:
    1. git clone https://github.com/letsencrypt/letsencrypt
    2. sudo ~/letsencrypt/letsencrypt-auto --test-cert -d your.domain.address
    3. Fix any errors that come up, like installing apache if you don't have it installed
  4. Request a real certificate from LetsEncrypt
    1. sudo ~/letsencrypt/letsencrypt-auto -d your.domain.address
  5. Enable Telnet on the router while you're in its web interface (on most ASUSWRT firmware it's under Administration > System)
  6. Stop accepting HTTP/S connections to your computer
  7. Stop forwarding HTTP/S connections to your computer through your router

 Installing The Certificate

  1. Open a terminal, command prompt, or whatever
  2. telnet router.asus.com
  3. Enter your usual credentials for accessing the router web interface
  4. Enable certificate persistence by running this command: nvram set https_crt_save=1
  5. Copy the certs to your router using the text editor vi and good old-fashioned copy/paste (open each file locally to copy its contents, then create the matching file in the telnet session and paste it in):
    1. local: vi /etc/letsencrypt/live/your.domain.address/privkey.pem
    2. telnet: vi /etc/key.pem
    3. local: vi /etc/letsencrypt/live/your.domain.address/fullchain.pem
    4. telnet: vi /etc/cert.pem
  6. Restart the router's web server: service restart_httpd

 Using The Certificate

  1. Enable HTTPS Local Access Config (aka Web Interface) on your router if you have not already done so.
  2. Forward HTTPS connections to your router's web interface by adding this entry to your port forward list:
    1. HTTPS,,443,192.168.1.1,443,tcp

You should now be able to securely access your router's web interface from anywhere in the world using https://your.domain.address.

This post was inspired by https://www.snbforums.com/threads/howto-use-a-lets-encrypt-ssl-certificate-on-https-web-interface.31322/

Sunday, October 1, 2017

How I'm protecting my credit after the Equifax hack

What and Why

If you're American, the private information required for verifying your identity, obtaining credit, and proving your trustworthiness to businesses has been stolen from Equifax. You should freeze or lock your credit reports with all three credit reporting bureaus to mitigate many of the risks associated with identity theft.

Consumer Reports recommends freezing your credit but I recommend a combination of locks and freezes.

How

You should freeze your Experian credit report because their lock is considerably more expensive than a freeze. You should lock your TransUnion and Equifax credit reports because of the convenience and price (free!). This is the cheapest, most convenient path to protecting your credit from hackers.

Note that Equifax had included an arbitration clause in their credit lock and monitoring service agreement but they removed that after public outcry.

Tuesday, October 11, 2016

Using GUI apps from docker on Windows host

So, you're looking for a way to use GUI apps from your Linux docker machine running on your Windows host? I bet you thought this would be really hard or impossible, but it's actually incredibly easy.

Instructions

  1. Install xming
    1. but don't launch xming
  2. Run your docker image with these extra parameters: -e DISPLAY={ip of docker virtual adapter}:0.0
      1. e.g. docker run -e DISPLAY=192.168.99.1:0.0 -it quay.io/travisci/travis-node-js /bin/bash
  3. Find your xming folder and open X0.hosts
  4. Add the ip of the docker machine on a new line and save (see the example after these steps)
    • docker should have displayed the ip when you opened your docker console 
    • alternatively, you can simply run the docker image and then locate the ip address at the bottom of the Xming log, which can be found by right-clicking on the Xming taskbar icon and choosing "View logs".
  5. Launch xming
  6. Run your GUI app from docker!
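For reference, X0.hosts is just a list of allowed hosts, one per line. Assuming the docker machine's ip is 192.168.99.100 (yours may differ), the file from step 4 would end up looking something like this:

localhost
192.168.99.100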

Disclaimer

I've only tested this with Firefox using a Windows 10 host.

Sources

windows 7 - How to use x11 forwarding with PuTTY
Running Linux GUI Apps in Windows (using cygwin/X)