<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[kleypot]]></title><description><![CDATA[software development]]></description><link>https://kleypot.com/</link><image><url>https://kleypot.com/favicon.png</url><title>kleypot</title><link>https://kleypot.com/</link></image><generator>Ghost 5.10</generator><lastBuildDate>Mon, 13 Apr 2026 13:17:51 GMT</lastBuildDate><atom:link href="https://kleypot.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Home Assistant - Better Proxmox Monitoring with InfluxDB]]></title><description><![CDATA[In this post I will show how I used the External Metric Server feature in Proxmox to set up the best possible integration between Proxmox and Home Assistant. ]]></description><link>https://kleypot.com/better-proxmox-dashboards-with-influxdb/</link><guid isPermaLink="false">663d06c83ecc781c5505837e</guid><category><![CDATA[proxmox]]></category><category><![CDATA[home-assistant]]></category><category><![CDATA[alexa]]></category><category><![CDATA[home-automation]]></category><category><![CDATA[influxdb]]></category><category><![CDATA[grafana]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Mon, 13 May 2024 17:35:16 GMT</pubDate><media:content url="https://kleypot.com/content/images/2024/05/graphs.png" medium="image"/><content:encoded><![CDATA[<img src="https://kleypot.com/content/images/2024/05/graphs.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB"><p>The default <a href="https://www.home-assistant.io/integrations/proxmoxve/">Proxmox VE</a> integration in Home Assistant basically only tells you which containers are running. 
But what if you want an automation to trigger if storage is running low, or if a specific VM&apos;s CPU is pegged?</p><p>Proxmox actually provides a way to expose its metrics to an external database. In this post I will show how I used the <a href="https://pve.proxmox.com/wiki/External_Metric_Server">External Metric Server</a> feature in Proxmox to set up the best possible integration between Proxmox and Home Assistant.</p><h2 id="influxdb-setup">InfluxDB Setup</h2><p>InfluxDB will serve as the database for Proxmox metrics, which can then be accessed by Home Assistant. The quickest way to get up and running with InfluxDB is to install it as an add-on in Home Assistant. The add-on comes bundled with Chronograf, which makes this setup much easier to manage.</p><!--kg-card-begin: markdown--><ol>
<li>Install the <a href="https://github.com/hassio-addons/addon-influxdb">InfluxDB add-on</a> in Home Assistant.</li>
<li>Adjust the configuration if needed, then start the add-on.</li>
<li>Once the add-on is running, use the OPEN WEB UI button to open the Chronograf web interface and confirm that it is up and running.</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-130545.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"><br>
<em>Chronograf Web Interface</em></p>
<ol start="4">
<li>Now you need to add a database for Proxmox to write to. Go to the InfluxDB Admin panel and create a new database called <code>proxmox</code>. Adjust the duration of the default retention policy to a reasonable limit.</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-131707.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"><br>
<em>New database</em></p>
<ol start="5">
<li>Finally, you need a user with permissions to access the database. Go to the users tab and create a user called <code>proxmox</code>. On the next page toggle on the WRITE and READ permissions for the <code>proxmox</code> database.</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-131922.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"><br>
<em>User permissions</em></p>
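<p>If you prefer typing queries over clicking through the admin pages, the same database and user can be created with InfluxQL. This is only a sketch; the retention duration and password below are examples, not recommendations:</p>

```sql
-- Create the database with a bounded retention policy, then a user with
-- read/write access to it:
CREATE DATABASE "proxmox" WITH DURATION 90d
CREATE USER "proxmox" WITH PASSWORD 'secret'
GRANT ALL ON "proxmox" TO "proxmox"
```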
<!--kg-card-end: markdown--><h2 id="proxmox-setup">Proxmox Setup</h2><p>Now that the database is in place, you can configure it as an <a href="https://pve.proxmox.com/wiki/External_Metric_Server">External Metrics Server</a> in Proxmox. </p><!--kg-card-begin: markdown--><ol>
<li>Log in to your Proxmox web interface.</li>
<li>Under Datacenter, go to Metric Server and Add a new InfluxDB connection.</li>
<li>Give the connection a meaningful name, like <code>homeassistant</code>. For the Server enter the IP of your Home Assistant installation. Set the port and protocol to match the configuration of the InfluxDB add-on.</li>
<li>Set the Token to <code>username:password</code>, replacing username and password with the credentials of the user you set up in InfluxDB.</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-133252.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"></p>
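<p>The Token field trips people up, since InfluxDB 1.x has no real token support. A quick sanity check from any shell (hypothetical credentials below; the curl in the comment targets the InfluxDB 1.x write API on its default HTTP port 8086):</p>

```shell
# Sketch, with hypothetical credentials -- substitute your own.
# The "Token" Proxmox expects for an InfluxDB 1.x target is just the
# plain user:password pair, not an InfluxDB 2.x API token.
INFLUX_USER="proxmox"
INFLUX_PASS="secret"
TOKEN="${INFLUX_USER}:${INFLUX_PASS}"
echo "$TOKEN"

# To confirm the database accepts writes before saving the metric server,
# run a curl like this against your Home Assistant IP (line protocol payload):
#   curl -XPOST "http://<ha_ip>:8086/write?db=proxmox&u=$INFLUX_USER&p=$INFLUX_PASS" \
#     --data-binary 'sanity_check value=1'
```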
<ol start="5">
<li>Save the database connection, then return to the InfluxDB interface to test it out.</li>
<li>Go to the Explore tab and start a new query. You should see the <code>proxmox.autogen</code> retention policy. Drill down into <code>proxmox.autogen</code>, go to <code>system &gt; vmid</code>, and tick one of your VMs. Choose a field on the right and you should see some data.</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-134757.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"><br>
<em>vmid 100 cpu query</em></p>
<ol start="7">
<li>Familiarize yourself with the different measurements and fields available, and with how the queries are constructed. In the next section we will use these queries to build out sensors in Home Assistant.</li>
</ol>
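<p>The queries the Explore tab builds are plain InfluxQL, and the sensors later in this guide boil down to statements like the following (vmid 100 is an example):</p>

```sql
-- Latest CPU reading for one VM, using the measurement names shown above:
SELECT last("cpu") FROM "proxmox"."autogen"."system" WHERE "vmid" = '100'
```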
<!--kg-card-end: markdown--><p>Now that you have data flowing into InfluxDB, you can start setting up your sensors in Home Assistant.</p><h2 id="home-assistant-setup">Home Assistant Setup</h2><p>As of this writing, the InfluxDB integration is still configured manually in YAML.</p><!--kg-card-begin: markdown--><ol>
<li>
<p>Optional step: Add another user in InfluxDB with read-only permissions on the <code>proxmox</code> database.</p>
</li>
<li>
<p>Add the InfluxDB integration to your configuration.yaml. This config simply enables the integration and excludes all entities from being recorded to InfluxDB.</p>
</li>
</ol>
<pre><code>influxdb:
  exclude:
    entity_globs: &quot;*&quot;
</code></pre>
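<p>If you do still want a handful of Home Assistant entities recorded, the integration also accepts an include filter instead. A sketch, with a hypothetical entity id:</p>

```yaml
# Sketch: record only selected entities rather than excluding everything.
# The entity id below is hypothetical -- substitute your own.
influxdb:
  include:
    entities:
      - sensor.living_room_temperature
```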
<ol start="3">
<li>Add the sensor platform to your configuration.yaml. Change the values below to match your setup.</li>
</ol>
<pre><code>sensor:
  - platform: influxdb
    host: a0d7b954-influxdb
    username: someusername
    password: somepassword
    scan_interval:
      seconds: 15
    queries:
      - name: InfluxDB Proxmox VM 100 CPU
        unique_id: &apos;42946dc7-c9ac-4f7d-abb0-198f9435b738&apos;
        database: proxmox
        measurement: proxmox.autogen.system
        where: &quot;vmid=&apos;100&apos;&quot;
        field: &apos;&quot;cpu&quot;&apos;
        group_function: last
        value_template: &quot;{{ (value|float * 100) | round(1) }}&quot;
        unit_of_measurement: &apos;%&apos;
</code></pre>
<blockquote>
<p>Notes:</p>
<ul>
<li><code>group_function</code> decides how the data is aggregated. <code>last</code> simply chooses the latest value in the selection, which is appropriate for a real-time sensor.</li>
<li><code>value_template</code> is handy for transforming the data. The template above multiplies the fractional value by 100 to get a percentage.</li>
<li><code>scan_interval</code> is set slightly longer than the 10-second interval at which Proxmox writes metrics to InfluxDB.</li>
</ul>
</blockquote>
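<p>To make the value_template concrete, here is the same arithmetic outside of Jinja (the sample reading is made up):</p>

```shell
# Proxmox reports CPU as a fraction of total capacity, e.g. 0.0734.
# The value_template multiplies by 100 and rounds to one decimal place:
value="0.0734"
awk -v v="$value" 'BEGIN { printf "%.1f\n", v * 100 }'
```

which renders as <code>7.3</code> in the sensor state.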
<ol start="4">
<li>Restart Home Assistant and locate the new entity.</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-141503.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"><br>
<em>CPU usage sensor</em></p>
<ol start="5">
<li>If you do not see the new sensor, check your system logs and search for &apos;influxdb&apos;.</li>
</ol>
<!--kg-card-end: markdown--><h2 id="advanced-sensors">Advanced Sensors</h2><p>Now you can start getting more creative with your sensors. If you are not yet familiar with the InfluxQL query language, I recommend using the query builder in the add-on to construct your query, then translate it over to your YAML config. I will walk through this process with an example below.</p><h3 id="high-cpu-usage-sensor">High CPU Usage Sensor</h3><p>Suppose you want a sensor to drive an automation that notifies you when your CPU is pegged. It may not be unusual for the CPU to occassionally spike, so you should consider taking the <code>mean</code> value over a period of time.</p><!--kg-card-begin: markdown--><ol>
<li>
<p>Go to the Explore tab again in the InfluxDB add-on and start a new query.</p>
</li>
<li>
<p>Choose the <code>proxmox.autogen</code> retention policy.</p>
</li>
<li>
<p>Under Measurements, expand <code>cpustat</code> and tick your Proxmox host.</p>
</li>
<li>
<p>Under Fields, tick <code>cpu</code> and apply the <code>mean</code> function.</p>
</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-145211.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"></p>
<ol start="5">
<li>Use the Group by dropdown to adjust the range of time over which the mean is computed. Notice that as you increase the duration of the mean, the resulting graph smooths out the sharp peaks and valleys. This is exactly what we want: momentary CPU spikes should not trigger our automation.</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-09-145704.png" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"></p>
<ol start="6">
<li>Once the query is set up to your liking, you can translate it over to your configuration.yaml. Your configuration may differ from the example below.</li>
</ol>
<pre><code>- name: InfluxDB Proxmox Host CPU 5m Average Usage
  unique_id: &apos;c9e2ca8b-48fa-4856-84ad-7a1f671735fa&apos;
  database: proxmox
  measurement: proxmox.autogen.cpustat
  field: &quot;cpu&quot;
  group_function: mean
  where: &quot;host=&apos;mini-neptune&apos; AND time &gt; now() - 5m AND time &lt; now()&quot;
  value_template: &quot;{{ (value|float * 100) | round(1) }}&quot;
  unit_of_measurement: &apos;%&apos;
</code></pre>
<blockquote>
<p>Note that the Home Assistant YAML syntax does not provide an easy way to set the group-by interval, so the query will aggregate over the entire selected range of values. We adjust for this by adding a 5-minute time window to the where clause.</p>
</blockquote>
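<p>For reference, the resulting sensor issues roughly this InfluxQL (host name from the example above):</p>

```sql
-- Mean CPU over a trailing five-minute window, returned as one value:
SELECT mean("cpu") FROM "proxmox"."autogen"."cpustat"
  WHERE "host" = 'mini-neptune' AND time > now() - 5m AND time < now()
```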
<ol start="7">
<li>Restart Home Assistant and check for the new sensor. Now you can trigger an automation when the sensor crosses a threshold while filtering out any momentary spikes!</li>
</ol>
<p><img src="https://kleypot.com/content/images/2024/05/signal-2024-05-09-153544_002.jpeg" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy"><br>
<em>High CPU notification</em></p>
<!--kg-card-end: markdown--><h2 id="conclusion">Conclusion</h2><p>This InfluxDB integration gives you the best insight possible into the status of your Proxmox system by hooking directly into your system&apos;s metrics. This setup gives the most accurate real-time insight into your PVE host and all of your VMs and containers so that you can build meaningful automations and dashboards.</p><p>You can even take this a step further by installing the Grafana addon to build even more advanced visualizations. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2024/05/Screenshot-2024-05-13-122244.png" class="kg-image" alt="Home Assistant - Better Proxmox Monitoring with InfluxDB" loading="lazy" width="530" height="825"><figcaption>Grafana visualization embeded in HA Dashboard</figcaption></figure><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[Paperless-ngx Push Notifications]]></title><description><![CDATA[Here is a guide on how I set up mobile push notifications to my android phone whenever a document is consumed in paperless-ngx.]]></description><link>https://kleypot.com/paperless-ngx-push-notifications/</link><guid isPermaLink="false">662155ba3ecc781c550581e2</guid><category><![CDATA[paperless-ngx]]></category><category><![CDATA[home-assistant]]></category><category><![CDATA[home-automation]]></category><category><![CDATA[node-red]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Thu, 18 Apr 2024 20:48:01 GMT</pubDate><media:content url="https://kleypot.com/content/images/2024/04/signal-2024-04-18-144033_002-.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://kleypot.com/content/images/2024/04/signal-2024-04-18-144033_002-.jpeg" alt="Paperless-ngx Push Notifications"><p></p><p>Here is a guide on how I set up mobile push notifications to my android phone whenever a document is consumed in <a href="https://docs.paperless-ngx.com/">paperless-ngx</a>. 
This saves me the hassle of logging in to make sure documents were processed successfully, especially when those documents are coming from less-than-reliable clients like all-in-one printers.</p><p>For this setup I am using MQTT for messaging, Home Assistant for the push notifications, and Node-RED to glue it all together. You could easily substitute other components if needed (HTTP instead of MQTT, Telegram instead of Home Assistant, ...).</p><p>I have paperless-ngx hosted in Docker on a Debian VM. This will not work for a bare-metal installation of paperless-ngx.</p><h3 id="how-it-works">How it works</h3><p>Paperless-ngx exposes hooks into the document consumption process. In particular, we will use the <a href="https://docs.paperless-ngx.com/advanced_usage/#post-consume-script">post-consumption</a> hook to run a bash script. The process works like this:</p><ol><li>A document is uploaded to paperless-ngx</li><li>The document is queued and processed by paperless-ngx</li><li>The post-consumption script executes and publishes a JSON summary of the document to the MQTT broker.</li><li>Node-RED consumes the message from the MQTT broker and uses the Home Assistant notify service to push out the notification.</li></ol><h3 id="set-up-mqtt-client">Set up MQTT Client</h3><p>First, we need to add the MQTT client so that paperless-ngx can publish messages to our broker.</p><p>Create a new file <code>Dockerfile</code> at the same location as <code>docker-compose.env</code>. This allows us to build our own container with Mosquitto installed on top of the latest paperless-ngx image.</p><figure class="kg-card kg-code-card"><pre><code>FROM ghcr.io/paperless-ngx/paperless-ngx:latest

RUN apt-get update &amp;&amp; apt-get install -y mosquitto mosquitto-clients</code></pre><figcaption>Dockerfile</figcaption></figure><p>Now, edit <code>docker-compose.yml</code> so that it builds from the Dockerfile instead of using the paperless-ngx image.</p><figure class="kg-card kg-code-card"><pre><code>version: &quot;3.4&quot;
services:

  webserver:
    build: .
    restart: unless-stopped
    depends_on:
      - db
      - broker
      - gotenberg
      - tika
    ports:
      - &quot;8000:8000&quot;
    volumes:
      - data:/usr/src/paperless/data
      - media:/usr/src/paperless/media
      - ./export:/usr/src/paperless/export
    env_file: docker-compose.env
    environment:
      PAPERLESS_REDIS: redis://broker:6379
      PAPERLESS_DBHOST: db
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998</code></pre><figcaption>Sample docker-compose.yml which builds from Dockerfile</figcaption></figure><h3 id="post-consumption-script">Post-consumption Script</h3><p>Now we can add our script to publish new document messages to the MQTT broker. Make a new <code>scripts</code> directory next to <code>docker-compose.yml</code> and add a new file <code>post-consumption.sh</code>. Replace the host, username, and password arguments according to your configuration.</p><figure class="kg-card kg-code-card"><pre><code>#!/usr/bin/env bash
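# Publish a JSON summary of the consumed document to the MQTT broker.
# Paperless-ngx hands the document metadata to this script through the
# DOCUMENT_* environment variables used below.
# To watch these messages while testing, subscribe from another shell,
# e.g.: mosquitto_sub -h <mqtt_host> -t /paperless/post-consumption -v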

mosquitto_pub -h &lt;mqtt_host&gt; --username &lt;mqtt_username&gt; --pw &lt;mqtt_password&gt; -t /paperless/post-consumption -m &apos;{
    &quot;id&quot;:&quot;&apos;&quot;${DOCUMENT_ID}&quot;&apos;&quot;,
    &quot;file_name&quot;:&quot;&apos;&quot;${DOCUMENT_FILE_NAME}&quot;&apos;&quot;,
    &quot;created&quot;:&quot;&apos;&quot;${DOCUMENT_CREATED}&quot;&apos;&quot;,
    &quot;modified&quot;:&quot;&apos;&quot;${DOCUMENT_MODIFIED}&quot;&apos;&quot;,
    &quot;added&quot;:&quot;&apos;&quot;${DOCUMENT_ADDED}&quot;&apos;&quot;,
    &quot;source_path&quot;:&quot;&apos;&quot;${DOCUMENT_SOURCE_PATH}&quot;&apos;&quot;,
    &quot;archive_path&quot;:&quot;&apos;&quot;${DOCUMENT_ARCHIVE_PATH}&quot;&apos;&quot;,
    &quot;thumbnail_path&quot;:&quot;&apos;&quot;${DOCUMENT_THUMBNAIL_PATH}&quot;&apos;&quot;,
    &quot;download_url&quot;:&quot;&apos;&quot;${DOCUMENT_DOWNLOAD_URL}&quot;&apos;&quot;,
    &quot;thumbnail_url&quot;:&quot;&apos;&quot;${DOCUMENT_THUMBNAIL_URL}&quot;&apos;&quot;,
    &quot;correspondent&quot;:&quot;&apos;&quot;${DOCUMENT_CORRESPONDENT}&quot;&apos;&quot;,
    &quot;tags&quot;:&quot;&apos;&quot;${DOCUMENT_TAGS}&quot;&apos;&quot;,
    &quot;original_filename&quot;:&quot;&apos;&quot;${DOCUMENT_ORIGINAL_FILENAME}&quot;&apos;&quot;,
    &quot;task_id&quot;:&quot;&apos;&quot;${TASK_ID}&quot;&apos;&quot;
}&apos;</code></pre><figcaption>./scripts/post-consumption.sh</figcaption></figure><p>After saving the script, make the file executable.</p><pre><code>$ chmod ug+x post-consumption.sh</code></pre><p>Now, we need to mount the scripts folder so that the post-consumption script is available to the container. We do this by adding a new volume to <code>docker-compose.yml</code>. Then we tell paperless about the script by setting the <code>PAPERLESS_POST_CONSUME_SCRIPT</code> environment variable.</p><figure class="kg-card kg-code-card"><pre><code>version: &quot;3.4&quot;
services:

  webserver:
    build: .
    restart: unless-stopped
    depends_on:
      - db
      - broker
      - gotenberg
      - tika
    ports:
      - &quot;8000:8000&quot;
    volumes:
      - data:/usr/src/paperless/data
      - media:/usr/src/paperless/media
      - ./export:/usr/src/paperless/export
      - ./scripts:/opt/scripts
    env_file: docker-compose.env
    environment:
      PAPERLESS_REDIS: redis://broker:6379
      PAPERLESS_DBHOST: db
      PAPERLESS_TIKA_ENABLED: 1
      PAPERLESS_TIKA_GOTENBERG_ENDPOINT: http://gotenberg:3000
      PAPERLESS_TIKA_ENDPOINT: http://tika:9998
      PAPERLESS_POST_CONSUME_SCRIPT: /opt/scripts/post-consumption.sh</code></pre><figcaption>Sample docker-compose.yml with post-consumption script</figcaption></figure><h3 id="node-red-subscriber">Node-RED Subscriber</h3><p>Next, we can jump over to Node-RED and see the messages flowing in on our new MQTT topic. Add a new <code>mqtt in</code> node and set the topic to <code>/paperless/post-consumption</code>, then run that node into a <code>debug</code> node.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2024/04/image.png" class="kg-image" alt="Paperless-ngx Push Notifications" loading="lazy" width="502" height="96"><figcaption>Simple test flow</figcaption></figure><figure class="kg-card kg-code-card"><pre><code>[{&quot;id&quot;:&quot;f1c99a4c36ff8e41&quot;,&quot;type&quot;:&quot;mqtt in&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;&quot;,&quot;topic&quot;:&quot;/paperless/post-consumption&quot;,&quot;qos&quot;:&quot;2&quot;,&quot;datatype&quot;:&quot;json&quot;,&quot;broker&quot;:&quot;671331cca0db9c1a&quot;,&quot;nl&quot;:false,&quot;rap&quot;:true,&quot;rh&quot;:0,&quot;inputs&quot;:0,&quot;x&quot;:530,&quot;y&quot;:220,&quot;wires&quot;:[[&quot;a9fd6de7a6d58f9a&quot;]]},{&quot;id&quot;:&quot;a9fd6de7a6d58f9a&quot;,&quot;type&quot;:&quot;debug&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;debug 
45&quot;,&quot;active&quot;:true,&quot;tosidebar&quot;:true,&quot;console&quot;:false,&quot;tostatus&quot;:false,&quot;complete&quot;:&quot;payload&quot;,&quot;targetType&quot;:&quot;msg&quot;,&quot;statusVal&quot;:&quot;&quot;,&quot;statusType&quot;:&quot;auto&quot;,&quot;x&quot;:780,&quot;y&quot;:220,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;671331cca0db9c1a&quot;,&quot;type&quot;:&quot;mqtt-broker&quot;,&quot;name&quot;:&quot;192.168.1.153&quot;,&quot;broker&quot;:&quot;192.168.1.153&quot;,&quot;port&quot;:&quot;1883&quot;,&quot;clientid&quot;:&quot;&quot;,&quot;autoConnect&quot;:true,&quot;usetls&quot;:false,&quot;protocolVersion&quot;:&quot;4&quot;,&quot;keepalive&quot;:&quot;60&quot;,&quot;cleansession&quot;:true,&quot;birthTopic&quot;:&quot;&quot;,&quot;birthQos&quot;:&quot;0&quot;,&quot;birthPayload&quot;:&quot;&quot;,&quot;birthMsg&quot;:{},&quot;closeTopic&quot;:&quot;&quot;,&quot;closeQos&quot;:&quot;0&quot;,&quot;closePayload&quot;:&quot;&quot;,&quot;closeMsg&quot;:{},&quot;willTopic&quot;:&quot;&quot;,&quot;willQos&quot;:&quot;0&quot;,&quot;willPayload&quot;:&quot;&quot;,&quot;willMsg&quot;:{},&quot;userProps&quot;:&quot;&quot;,&quot;sessionExpiry&quot;:&quot;&quot;}]</code></pre><figcaption>Simple test flow (JSON)</figcaption></figure><p>Now, try uploading a document to paperless and check the debug output. You should get a JSON payload populated with data from the new document.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2024/04/image-1.png" class="kg-image" alt="Paperless-ngx Push Notifications" loading="lazy" width="576" height="490"><figcaption>MQTT payload</figcaption></figure><p>This payload should be all you need to set up all kinds of automations in Home Assistant. 
Note that you can also use the <a href="https://docs.paperless-ngx.com/api/">REST API</a> to pull in even more details from paperless.</p><h3 id="mobile-app-push-notifications">Mobile App Push Notifications</h3><p>Here is how I automated push notifications to my Android device.</p><figure class="kg-card kg-image-card kg-width-full kg-card-hascaption"><img src="https://kleypot.com/content/images/2024/04/image-2.png" class="kg-image" alt="Paperless-ngx Push Notifications" loading="lazy" width="1643" height="102" srcset="https://kleypot.com/content/images/size/w600/2024/04/image-2.png 600w, https://kleypot.com/content/images/size/w1000/2024/04/image-2.png 1000w, https://kleypot.com/content/images/size/w1600/2024/04/image-2.png 1600w, https://kleypot.com/content/images/2024/04/image-2.png 1643w"><figcaption>Push Notifications flow</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-JSON">[{&quot;id&quot;:&quot;f1c99a4c36ff8e41&quot;,&quot;type&quot;:&quot;mqtt in&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;&quot;,&quot;topic&quot;:&quot;/paperless/post-consumption&quot;,&quot;qos&quot;:&quot;2&quot;,&quot;datatype&quot;:&quot;json&quot;,&quot;broker&quot;:&quot;671331cca0db9c1a&quot;,&quot;nl&quot;:false,&quot;rap&quot;:true,&quot;rh&quot;:0,&quot;inputs&quot;:0,&quot;x&quot;:180,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;c0d020cf51bfbced&quot;]]},{&quot;id&quot;:&quot;6c84dfb517eeebb8&quot;,&quot;type&quot;:&quot;function&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;Build Notification Payload&quot;,&quot;func&quot;:&quot;msg.payload = {\n    \&quot;data\&quot;: {\n        \&quot;title\&quot;: &apos;Paperless-ngx&apos;,\n        \&quot;message\&quot;: &apos;New document: &apos; + msg.file_name,\n        \&quot;data\&quot;: {\n            \&quot;image\&quot;: \&quot;/local/paperless/thumb.webp\&quot;,\n            \&quot;when\&quot;: msg.added,\n            \&quot;ttl\&quot;: 0,\n   
         \&quot;priority\&quot;: \&quot;high\&quot;,\n            \&quot;channel\&quot;: \&quot;Paperless\&quot;,\n            \&quot;clickAction\&quot;: msg.paperless_url_base + \&quot;/documents/\&quot; + msg.id\n        }\n    }\n}\nreturn msg;&quot;,&quot;outputs&quot;:1,&quot;timeout&quot;:&quot;&quot;,&quot;noerr&quot;:0,&quot;initialize&quot;:&quot;&quot;,&quot;finalize&quot;:&quot;&quot;,&quot;libs&quot;:[],&quot;x&quot;:1230,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;baa5324ed5b6970c&quot;]]},{&quot;id&quot;:&quot;6f75de0eadaf83b4&quot;,&quot;type&quot;:&quot;http request&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;GET thumbnail&quot;,&quot;method&quot;:&quot;GET&quot;,&quot;ret&quot;:&quot;bin&quot;,&quot;paytoqs&quot;:&quot;ignore&quot;,&quot;url&quot;:&quot;{{{paperless_url_base}}}/api/documents/{{{payload.id}}}/thumb/&quot;,&quot;tls&quot;:&quot;&quot;,&quot;persist&quot;:false,&quot;proxy&quot;:&quot;&quot;,&quot;insecureHTTPParser&quot;:false,&quot;authType&quot;:&quot;basic&quot;,&quot;senderr&quot;:false,&quot;headers&quot;:[{&quot;keyType&quot;:&quot;other&quot;,&quot;keyValue&quot;:&quot;Content-Type&quot;,&quot;valueType&quot;:&quot;other&quot;,&quot;valueValue&quot;:&quot;image/png&quot;}],&quot;x&quot;:740,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;9e97698091c19a1b&quot;]]},{&quot;id&quot;:&quot;9e97698091c19a1b&quot;,&quot;type&quot;:&quot;file&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;Write thumbnail to local 
storage&quot;,&quot;filename&quot;:&quot;/homeassistant/www/paperless/thumb.webp&quot;,&quot;filenameType&quot;:&quot;str&quot;,&quot;appendNewline&quot;:true,&quot;createDir&quot;:false,&quot;overwriteFile&quot;:&quot;true&quot;,&quot;encoding&quot;:&quot;none&quot;,&quot;x&quot;:970,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;6c84dfb517eeebb8&quot;]]},{&quot;id&quot;:&quot;c0d020cf51bfbced&quot;,&quot;type&quot;:&quot;change&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;when&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;payload.when&quot;,&quot;tot&quot;:&quot;msg&quot;},{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;id&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;payload.id&quot;,&quot;tot&quot;:&quot;msg&quot;},{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;file_name&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;payload.file_name&quot;,&quot;tot&quot;:&quot;msg&quot;}],&quot;action&quot;:&quot;&quot;,&quot;property&quot;:&quot;&quot;,&quot;from&quot;:&quot;&quot;,&quot;to&quot;:&quot;&quot;,&quot;reg&quot;:false,&quot;x&quot;:400,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;ddd1fa4831958921&quot;]]},{&quot;id&quot;:&quot;baa5324ed5b6970c&quot;,&quot;type&quot;:&quot;api-call-service&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;&quot;,&quot;server&quot;:&quot;d1ddf9d.c530808&quot;,&quot;version&quot;:5,&quot;debugenabled&quot;:false,&quot;domain&quot;:&quot;notify&quot;,&quot;service&quot;:&quot;mobile_app_galaxy_s10&quot;,&quot;areaId&quot;:[],&quot;deviceId&quot;:[],&quot;entityId&quot;:[],&quot;data&quot;:&quot;&quot;,&quot;dataType&quot;:&quot;jsonata&quot;,&quot;mergeContext&quot;:&quot;&quot;,&quot;mustacheAltTags&quot;:false,&quot;outputProperties&quot;:[],&quot;queue&quot;:&quot;none&quot;,&quot;x&quot;:1490,&quot;y&quot;:300,&quot;wires&quot;:[[]]},{&quot;id&quot;:&quot;ddd1fa4831958
921&quot;,&quot;type&quot;:&quot;credentials&quot;,&quot;z&quot;:&quot;0b8e56ca785432e9&quot;,&quot;name&quot;:&quot;&quot;,&quot;props&quot;:[{&quot;value&quot;:&quot;paperless_url_base&quot;,&quot;type&quot;:&quot;msg&quot;}],&quot;x&quot;:570,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;6f75de0eadaf83b4&quot;]]},{&quot;id&quot;:&quot;671331cca0db9c1a&quot;,&quot;type&quot;:&quot;mqtt-broker&quot;,&quot;name&quot;:&quot;192.168.1.153&quot;,&quot;broker&quot;:&quot;192.168.1.153&quot;,&quot;port&quot;:&quot;1883&quot;,&quot;clientid&quot;:&quot;&quot;,&quot;autoConnect&quot;:true,&quot;usetls&quot;:false,&quot;protocolVersion&quot;:&quot;4&quot;,&quot;keepalive&quot;:&quot;60&quot;,&quot;cleansession&quot;:true,&quot;birthTopic&quot;:&quot;&quot;,&quot;birthQos&quot;:&quot;0&quot;,&quot;birthPayload&quot;:&quot;&quot;,&quot;birthMsg&quot;:{},&quot;closeTopic&quot;:&quot;&quot;,&quot;closeQos&quot;:&quot;0&quot;,&quot;closePayload&quot;:&quot;&quot;,&quot;closeMsg&quot;:{},&quot;willTopic&quot;:&quot;&quot;,&quot;willQos&quot;:&quot;0&quot;,&quot;willPayload&quot;:&quot;&quot;,&quot;willMsg&quot;:{},&quot;userProps&quot;:&quot;&quot;,&quot;sessionExpiry&quot;:&quot;&quot;},{&quot;id&quot;:&quot;d1ddf9d.c530808&quot;,&quot;type&quot;:&quot;server&quot;,&quot;name&quot;:&quot;Home Assistant&quot;,&quot;addon&quot;:true}]</code></pre><figcaption>Push Notifications flow (JSON)</figcaption></figure><p>This works by subscribing to our new MQTT topic, downloading the thumbnail to Home Assistant local storage, then publishing the notification using the <code>notify</code> service.</p><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2024/04/image-3.png" class="kg-image" alt="Paperless-ngx Push Notifications" loading="lazy" width="970" height="2048" srcset="https://kleypot.com/content/images/size/w600/2024/04/image-3.png 600w, https://kleypot.com/content/images/2024/04/image-3.png 970w" sizes="(min-width: 720px) 720px"></figure><p>Tapping the 
notification opens the document in the paperless-ngx web interface where I can quickly review the upload and set up the tags.</p><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[Home Assistant: Add Camera Snapshots to Push Notifications]]></title><description><![CDATA[In this post I will share an easy way to add real-time camera snapshots to your Home Assistant push notifications. This is a great way to level up your push notifications, allowing you to actually see what is happening at the instant a notification was pushed.]]></description><link>https://kleypot.com/hass-camera-snapshots-push-notifications/</link><guid isPermaLink="false">6308e6f83ecc781c55057d58</guid><category><![CDATA[home-assistant]]></category><category><![CDATA[home-automation]]></category><category><![CDATA[node-red]]></category><category><![CDATA[blue-iris]]></category><category><![CDATA[fully-kiosk]]></category><category><![CDATA[home-security]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Fri, 26 Aug 2022 19:37:30 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1549109926-58f039549485?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHNlY3VyaXR5JTIwY2FtZXJhfGVufDB8fHx8MTY2MTUzMDgyMg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1549109926-58f039549485?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHNlY3VyaXR5JTIwY2FtZXJhfGVufDB8fHx8MTY2MTUzMDgyMg&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Home Assistant: Add Camera Snapshots to Push Notifications"><p>In this post I will share an easy way to add real-time camera snapshots to your Home Assistant push notifications. This is a great way to level up your push notifications, allowing you to actually see what is happening at the instant a notification was pushed. 
For example, you can send an image from your garage camera when the cover is opened, or from your wall mounted tablet when the security system is disarmed.</p><p>Home Assistant does allow you to <a href="https://companion.home-assistant.io/docs/notifications/notification-attachments#automatic-snapshots">send snapshots from a camera entity</a>, but the advantage of my approach is that it reaches directly into the camera to get a snapshot. This eliminates the need to use camera entities, which are often plagued with latency, compression, and <a href="https://kleypot.com/home-assistant-blue-iris-ui3-player-in-lovelace-ui/#bandwidth-issues">bandwidth issues</a>. All you need to make this work is a camera with an HTTP endpoint for pulling the snapshots. I will show some specific examples using my Blue Iris cameras and my Fully Kiosk-enabled tablets, and I will share a reusable flow that you can plug in to your Node-RED config.</p><h2 id="http-endpoint">HTTP Endpoint</h2><p>The first step is to figure out the HTTP endpoint to grab a snapshot from your camera. Most IP cameras and NVR systems will provide a way to pull a snapshot, so check with your manufacturer. Test the endpoint by opening it in a browser and ensure you get an image with the current timestamp.</p><p>The URL for a BlueIris camera looks something like this:</p><pre><code>http://&lt;blue_iris_ip&gt;:&lt;port&gt;/image/&lt;camera_name&gt;?s=50</code></pre><p>And the URL for a Fully Kiosk-enabled Android tablet looks like this:</p><pre><code>http://&lt;tablet_ip&gt;:2323/?cmd=getCamshot&amp;password=&lt;remote_admin_password&gt;</code></pre><blockquote>Note: you have to set up Remote Admin in Fully Kiosk and turn on a setting called &quot;Enable Camshot on Remote Admin&quot;</blockquote><h2 id="example-1-garage-notifications">Example #1: Garage Notifications</h2><p>The flow below triggers when my garage door starts to open, only when I am away from the house. 
Next, it calls my Blue Iris HTTP endpoint using the user and password stored in the credentials node. If the HTTP request succeeds, the image is saved to local storage and sent out as a push notification. If the request fails, an error is logged and the notification is sent out with no image. This way, I always get a notification even if the images are not working for some reason.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/08/image-1.png" class="kg-image" alt="Home Assistant: Add Camera Snapshots to Push Notifications" loading="lazy" width="1627" height="328" srcset="https://kleypot.com/content/images/size/w600/2022/08/image-1.png 600w, https://kleypot.com/content/images/size/w1000/2022/08/image-1.png 1000w, https://kleypot.com/content/images/size/w1600/2022/08/image-1.png 1600w, https://kleypot.com/content/images/2022/08/image-1.png 1627w" sizes="(min-width: 1200px) 1200px"><figcaption>Garage Notification Flow</figcaption></figure><p>Below is a screenshot of the result. The notifications are impressively quick, and there is almost no latency between the trigger and the image. 
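As an aside, the fetch step the flow performs is just an HTTP GET with a Basic Auth header (the same header the Set Request Headers node builds with JSONata's <code>$base64encode</code>). Here is a minimal shell sketch of that request; the IP, port, and camera name are placeholders, and the user/password values match the example credentials shown in the flow, not real secrets:

```shell
# Build the same "Authorization: Basic <base64(user:pass)>" header that the
# Set Request Headers node produces in the flow above.
BI_USER="someuser"
BI_PASS="secret"
AUTH_HEADER="Authorization: Basic $(printf '%s:%s' "$BI_USER" "$BI_PASS" | base64)"
echo "$AUTH_HEADER"

# Placeholder host/camera -- left commented so the sketch runs as-is:
# curl -s -H "$AUTH_HEADER" \
#   "http://192.168.1.10:81/image/garage?s=50" \
#   -o /config/www/images/snapshots/garage.jpg
```

Running the commented <code>curl</code> against your own Blue Iris host is a quick way to confirm the credentials before wiring them into the credentials node.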
I actually use this same approach for my <a href="https://kleypot.com/fully-offline-video-doorbell-for-home-assistant/">Video Doorbell</a> and the images always line up with the moment that you press the chime button.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/08/Screenshot_20220826-123714_Home-Assistant.jpg" class="kg-image" alt="Home Assistant: Add Camera Snapshots to Push Notifications" loading="lazy" width="1080" height="2280" srcset="https://kleypot.com/content/images/size/w600/2022/08/Screenshot_20220826-123714_Home-Assistant.jpg 600w, https://kleypot.com/content/images/size/w1000/2022/08/Screenshot_20220826-123714_Home-Assistant.jpg 1000w, https://kleypot.com/content/images/2022/08/Screenshot_20220826-123714_Home-Assistant.jpg 1080w" sizes="(min-width: 720px) 720px"><figcaption>Android Notification</figcaption></figure><p>And here is the code:</p><figure class="kg-card kg-code-card"><pre><code>[{&quot;id&quot;:&quot;da1e2737d73816ac&quot;,&quot;type&quot;:&quot;trigger-state&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;When Garage Door Is 
Opening&quot;,&quot;server&quot;:&quot;a86c4410.e2a568&quot;,&quot;version&quot;:2,&quot;exposeToHomeAssistant&quot;:false,&quot;haConfig&quot;:[{&quot;property&quot;:&quot;name&quot;,&quot;value&quot;:&quot;&quot;},{&quot;property&quot;:&quot;icon&quot;,&quot;value&quot;:&quot;&quot;}],&quot;entityid&quot;:&quot;cover.garage_door&quot;,&quot;entityidfiltertype&quot;:&quot;exact&quot;,&quot;debugenabled&quot;:false,&quot;constraints&quot;:[{&quot;targetType&quot;:&quot;this_entity&quot;,&quot;targetValue&quot;:&quot;&quot;,&quot;propertyType&quot;:&quot;current_state&quot;,&quot;propertyValue&quot;:&quot;new_state.state&quot;,&quot;comparatorType&quot;:&quot;is&quot;,&quot;comparatorValueDatatype&quot;:&quot;str&quot;,&quot;comparatorValue&quot;:&quot;opening&quot;}],&quot;inputs&quot;:0,&quot;outputs&quot;:2,&quot;customoutputs&quot;:[],&quot;outputinitially&quot;:false,&quot;state_type&quot;:&quot;str&quot;,&quot;enableInput&quot;:false,&quot;x&quot;:230,&quot;y&quot;:200,&quot;wires&quot;:[[&quot;b6a00b357813dc27&quot;],[]]},{&quot;id&quot;:&quot;b6a00b357813dc27&quot;,&quot;type&quot;:&quot;api-current-state&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;And Andrew Is Not 
Home&quot;,&quot;server&quot;:&quot;a86c4410.e2a568&quot;,&quot;version&quot;:3,&quot;outputs&quot;:2,&quot;halt_if&quot;:&quot;home&quot;,&quot;halt_if_type&quot;:&quot;str&quot;,&quot;halt_if_compare&quot;:&quot;is_not&quot;,&quot;entity_id&quot;:&quot;person.andrew&quot;,&quot;state_type&quot;:&quot;str&quot;,&quot;blockInputOverrides&quot;:false,&quot;outputProperties&quot;:[{&quot;property&quot;:&quot;payload&quot;,&quot;propertyType&quot;:&quot;msg&quot;,&quot;value&quot;:&quot;&quot;,&quot;valueType&quot;:&quot;entityState&quot;},{&quot;property&quot;:&quot;data&quot;,&quot;propertyType&quot;:&quot;msg&quot;,&quot;value&quot;:&quot;&quot;,&quot;valueType&quot;:&quot;entity&quot;}],&quot;for&quot;:&quot;0&quot;,&quot;forType&quot;:&quot;num&quot;,&quot;forUnits&quot;:&quot;minutes&quot;,&quot;override_topic&quot;:false,&quot;state_location&quot;:&quot;payload&quot;,&quot;override_payload&quot;:&quot;msg&quot;,&quot;entity_location&quot;:&quot;data&quot;,&quot;override_data&quot;:&quot;msg&quot;,&quot;x&quot;:510,&quot;y&quot;:200,&quot;wires&quot;:[[&quot;7523d56d77849f46&quot;],[]]},{&quot;id&quot;:&quot;912b86806fded094&quot;,&quot;type&quot;:&quot;inject&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Send 
Now&quot;,&quot;props&quot;:[{&quot;p&quot;:&quot;payload&quot;},{&quot;p&quot;:&quot;topic&quot;,&quot;vt&quot;:&quot;str&quot;}],&quot;repeat&quot;:&quot;&quot;,&quot;crontab&quot;:&quot;&quot;,&quot;once&quot;:false,&quot;onceDelay&quot;:0.1,&quot;topic&quot;:&quot;&quot;,&quot;payload&quot;:&quot;&quot;,&quot;payloadType&quot;:&quot;date&quot;,&quot;x&quot;:180,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;7523d56d77849f46&quot;]]},{&quot;id&quot;:&quot;7074fde8ae85eb77&quot;,&quot;type&quot;:&quot;comment&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Trigger&quot;,&quot;info&quot;:&quot;&quot;,&quot;x&quot;:150,&quot;y&quot;:160,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;91d547e9b9e54664&quot;,&quot;type&quot;:&quot;comment&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Condition&quot;,&quot;info&quot;:&quot;&quot;,&quot;x&quot;:460,&quot;y&quot;:160,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;7523d56d77849f46&quot;,&quot;type&quot;:&quot;credentials&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;&quot;,&quot;props&quot;:[{&quot;value&quot;:&quot;endpoint&quot;,&quot;type&quot;:&quot;msg&quot;},{&quot;value&quot;:&quot;user&quot;,&quot;type&quot;:&quot;msg&quot;},{&quot;value&quot;:&quot;password&quot;,&quot;type&quot;:&quot;msg&quot;},{&quot;value&quot;:&quot;local_storage_path&quot;,&quot;type&quot;:&quot;msg&quot;},{&quot;value&quot;:&quot;notify_service&quot;,&quot;type&quot;:&quot;msg&quot;}],&quot;x&quot;:370,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;3a65534795823a90&quot;]]},{&quot;id&quot;:&quot;40a64f748838890e&quot;,&quot;type&quot;:&quot;http request&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;GET 
Image&quot;,&quot;method&quot;:&quot;GET&quot;,&quot;ret&quot;:&quot;bin&quot;,&quot;paytoqs&quot;:&quot;ignore&quot;,&quot;url&quot;:&quot;&quot;,&quot;tls&quot;:&quot;&quot;,&quot;persist&quot;:false,&quot;proxy&quot;:&quot;&quot;,&quot;insecureHTTPParser&quot;:false,&quot;authType&quot;:&quot;&quot;,&quot;senderr&quot;:false,&quot;headers&quot;:[],&quot;x&quot;:750,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;b4da7befe3738159&quot;]]},{&quot;id&quot;:&quot;f29e35e66db1823f&quot;,&quot;type&quot;:&quot;file&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Write File&quot;,&quot;filename&quot;:&quot;\&quot;/config/www/\&quot; &amp; $$.local_storage_path&quot;,&quot;filenameType&quot;:&quot;jsonata&quot;,&quot;appendNewline&quot;:false,&quot;createDir&quot;:false,&quot;overwriteFile&quot;:&quot;true&quot;,&quot;encoding&quot;:&quot;none&quot;,&quot;x&quot;:1120,&quot;y&quot;:280,&quot;wires&quot;:[[&quot;83e1d989aa1b4855&quot;]]},{&quot;id&quot;:&quot;3a65534795823a90&quot;,&quot;type&quot;:&quot;change&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Set Request Headers&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;url&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;endpoint&quot;,&quot;tot&quot;:&quot;msg&quot;},{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;headers&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;{\t   \&quot;Authorization\&quot;: &apos;Basic &apos; &amp; $base64encode(\t      $$.user &amp; &apos;:&apos; &amp; $$.password\t   
)\t}&quot;,&quot;tot&quot;:&quot;jsonata&quot;}],&quot;action&quot;:&quot;&quot;,&quot;property&quot;:&quot;&quot;,&quot;from&quot;:&quot;&quot;,&quot;to&quot;:&quot;&quot;,&quot;reg&quot;:false,&quot;x&quot;:560,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;40a64f748838890e&quot;]]},{&quot;id&quot;:&quot;b4da7befe3738159&quot;,&quot;type&quot;:&quot;switch&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Check Response&quot;,&quot;property&quot;:&quot;statusCode&quot;,&quot;propertyType&quot;:&quot;msg&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;btwn&quot;,&quot;v&quot;:&quot;200&quot;,&quot;vt&quot;:&quot;num&quot;,&quot;v2&quot;:&quot;299&quot;,&quot;v2t&quot;:&quot;num&quot;},{&quot;t&quot;:&quot;else&quot;}],&quot;checkall&quot;:&quot;true&quot;,&quot;repair&quot;:false,&quot;outputs&quot;:2,&quot;x&quot;:930,&quot;y&quot;:300,&quot;wires&quot;:[[&quot;f29e35e66db1823f&quot;],[&quot;6b04ee837f845b56&quot;]]},{&quot;id&quot;:&quot;83e1d989aa1b4855&quot;,&quot;type&quot;:&quot;change&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;image_path&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;\&quot;local/\&quot; &amp; $$.local_storage_path&quot;,&quot;tot&quot;:&quot;jsonata&quot;}],&quot;action&quot;:&quot;&quot;,&quot;property&quot;:&quot;&quot;,&quot;from&quot;:&quot;&quot;,&quot;to&quot;:&quot;&quot;,&quot;reg&quot;:false,&quot;x&quot;:1300,&quot;y&quot;:280,&quot;wires&quot;:[[&quot;4e5c2ea1dadbe317&quot;]]},{&quot;id&quot;:&quot;77db453cf42002c8&quot;,&quot;type&quot;:&quot;function&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Throw Exception&quot;,&quot;func&quot;:&quot;throw msg.statusCode + \&quot; status returned from http request node.\&quot;\nreturn 
msg;&quot;,&quot;outputs&quot;:1,&quot;noerr&quot;:0,&quot;initialize&quot;:&quot;&quot;,&quot;finalize&quot;:&quot;&quot;,&quot;libs&quot;:[],&quot;x&quot;:1230,&quot;y&quot;:360,&quot;wires&quot;:[[]]},{&quot;id&quot;:&quot;f3fcf7e0d93aa899&quot;,&quot;type&quot;:&quot;comment&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Credentials (see example here)&quot;,&quot;info&quot;:&quot;endpoint = http://1.2.3.4/snapshot\nuser = someuser (if needed)\npassword = secret (if needed)\nlocal_storage_path = images/snapshots/driveway.jpg\nnotify_service = mobile_app_galaxy_phone&quot;,&quot;x&quot;:430,&quot;y&quot;:340,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;dd65180c2215a6bf&quot;,&quot;type&quot;:&quot;debug&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;debug 3&quot;,&quot;active&quot;:true,&quot;tosidebar&quot;:true,&quot;console&quot;:false,&quot;tostatus&quot;:false,&quot;complete&quot;:&quot;true&quot;,&quot;targetType&quot;:&quot;full&quot;,&quot;statusVal&quot;:&quot;&quot;,&quot;statusType&quot;:&quot;auto&quot;,&quot;x&quot;:1200,&quot;y&quot;:400,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;4e5c2ea1dadbe317&quot;,&quot;type&quot;:&quot;api-call-service&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;name&quot;:&quot;Notify&quot;,&quot;server&quot;:&quot;a86c4410.e2a568&quot;,&quot;version&quot;:5,&quot;debugenabled&quot;:false,&quot;domain&quot;:&quot;notify&quot;,&quot;service&quot;:&quot;{{ flow.notify_service }}&quot;,&quot;areaId&quot;:[],&quot;deviceId&quot;:[],&quot;entityId&quot;:[],&quot;data&quot;:&quot;{\t   \&quot;title\&quot;:\&quot;Garage Door Opened\&quot;,\t   \&quot;message\&quot;:\&quot;The Garage Door was opened while you were away!\&quot;,\t   \&quot;data\&quot;:{\t       \&quot;clickAction\&quot;:\&quot;/\&quot;,\t       \&quot;tag\&quot;:\&quot;garage_opened_while_away\&quot;,\t       \&quot;image\&quot;:$$.image_path,\t       \&quot;ttl\&quot;:0,\t       
\&quot;priority\&quot;:\&quot;high\&quot;,\t       \&quot;channel\&quot;:\&quot;Doorbell\&quot;,\t       \&quot;actions\&quot;:[\t           {\t               \&quot;action\&quot;:\&quot;snooze_alerts_1_hour\&quot;,\t               \&quot;title\&quot;:\&quot;Snooze 1hr\&quot;\t           },\t           {\t               \&quot;action\&quot;:\&quot;close_garage_door\&quot;,\t               \&quot;title\&quot;:\&quot;Close Now\&quot;\t           }\t       ]\t   }\t}&quot;,&quot;dataType&quot;:&quot;jsonata&quot;,&quot;mergeContext&quot;:&quot;&quot;,&quot;mustacheAltTags&quot;:false,&quot;outputProperties&quot;:[],&quot;queue&quot;:&quot;none&quot;,&quot;x&quot;:1470,&quot;y&quot;:320,&quot;wires&quot;:[[]]},{&quot;id&quot;:&quot;6b04ee837f845b56&quot;,&quot;type&quot;:&quot;junction&quot;,&quot;z&quot;:&quot;b0ef49e9e773bb6d&quot;,&quot;x&quot;:1080,&quot;y&quot;:320,&quot;wires&quot;:[[&quot;77db453cf42002c8&quot;,&quot;dd65180c2215a6bf&quot;,&quot;4e5c2ea1dadbe317&quot;]]},{&quot;id&quot;:&quot;a86c4410.e2a568&quot;,&quot;type&quot;:&quot;server&quot;,&quot;name&quot;:&quot;Home Assistant&quot;,&quot;version&quot;:4,&quot;addon&quot;:true,&quot;rejectUnauthorizedCerts&quot;:true,&quot;ha_boolean&quot;:&quot;y|yes|true|on|home|open&quot;,&quot;connectionDelay&quot;:true,&quot;cacheJson&quot;:true,&quot;heartbeat&quot;:false,&quot;heartbeatInterval&quot;:30,&quot;areaSelector&quot;:&quot;friendlyName&quot;,&quot;deviceSelector&quot;:&quot;friendlyName&quot;,&quot;entitySelector&quot;:&quot;friendlyName&quot;,&quot;statusSeparator&quot;:&quot;at: &quot;,&quot;statusYear&quot;:&quot;hidden&quot;,&quot;statusMonth&quot;:&quot;short&quot;,&quot;statusDay&quot;:&quot;numeric&quot;,&quot;statusHourCycle&quot;:&quot;h23&quot;,&quot;statusTimeFormat&quot;:&quot;h:m&quot;}]</code></pre><figcaption>Garage Notification Code</figcaption></figure><ol><li>Copy the flow above into your editor</li><li>Change the trigger and conditions to suit your needs</li><li>Fill out the 
credentials node with your endpoint, username, password, and so on</li><li>Set up your notification in the Notify node</li></ol><blockquote>Note: this code depends on the <a href="https://flows.nodered.org/node/node-red-contrib-credentials">Credentials</a> plugin. Make sure to install this plugin first.</blockquote><h2 id="example-2-alarm-panel-notifications">Example #2: Alarm Panel Notifications</h2><p>Here is another example where I enhanced the push notifications for my home alarm system. The flow is very similar to the previous one; all I had to do was change the credentials settings so that it pulls images from my kiosk tablet.</p><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2022/08/Screenshot_20220826-132120_Home-Assistant.jpg" class="kg-image" alt="Home Assistant: Add Camera Snapshots to Push Notifications" loading="lazy" width="1080" height="2280" srcset="https://kleypot.com/content/images/size/w600/2022/08/Screenshot_20220826-132120_Home-Assistant.jpg 600w, https://kleypot.com/content/images/size/w1000/2022/08/Screenshot_20220826-132120_Home-Assistant.jpg 1000w, https://kleypot.com/content/images/2022/08/Screenshot_20220826-132120_Home-Assistant.jpg 1080w" sizes="(min-width: 720px) 720px"></figure><h2 id="conclusion">Conclusion</h2><p>With this template you can now enhance all of your push notifications to pull images from relevant cameras in your home. You can even extend this concept to push images out to other services, like a file share or a Telegram channel. Real-time images add valuable context to your notifications and make your Home Assistant even more useful. 
Thanks for reading!</p><!--kg-card-begin: html--><script type="text/javascript" src="https://cdnjs.buymeacoffee.com/1.0.0/button.prod.min.js" data-name="bmc-button" data-slug="akmolina28" data-color="#5F7FFF" data-emoji="&#x1F37A;" data-font="Cookie" data-text="Buy me a beer" data-outline-color="#000000" data-font-color="#ffffff" data-coffee-color="#FFDD00"></script><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Laravel Development on Windows in 2022]]></title><description><![CDATA[In this post, I will show how I set up VS Code to streamline Laravel development on Windows.]]></description><link>https://kleypot.com/laravel-development-windows-2022/</link><guid isPermaLink="false">63052aeb3ecc781c55057cb1</guid><category><![CDATA[laravel]]></category><category><![CDATA[visual-studio-code]]></category><category><![CDATA[php]]></category><category><![CDATA[vue.js]]></category><category><![CDATA[software-development]]></category><category><![CDATA[docker]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Tue, 26 Jul 2022 14:26:16 GMT</pubDate><media:content url="https://kleypot.com/content/images/2022/07/Untitled-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://kleypot.com/content/images/2022/07/Untitled-1.png" alt="Laravel Development on Windows in 2022"><p>Microsoft has come a long way in recent years with tech like Windows Subsystem for Linux (WSL) and Visual Studio Code. The official Laravel docs now even favor WSL over alternatives like Vagrant. But while the Laravel docs give you enough to get up and running, they do not give you any guidance on how to set up VS Code for the best possible experience. Without some very important extensions and packages, you will find yourself constantly flipping between your editor, doc pages, error stacks, and terminal windows. 
Trying to keep everything in your head &#x2013; Laravel syntax, artisan commands, phpunit tests, project structure, and so on &#x2013; can be a significant cognitive burden which slows you down and pulls you out of your dev flow.</p><p>In this post, I will show how I set up VS Code to streamline Laravel development. The goal is to reduce as much as possible the need to look things up, manually run commands, or parse through arcane error messages. </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/akmolina28/laravel-vscode-example/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - akmolina28/laravel-vscode-example: Laravel starter project pre-configured for VSCode (WSL2)</div><div class="kg-bookmark-description">Laravel starter project pre-configured for VSCode (WSL2) - GitHub - akmolina28/laravel-vscode-example: Laravel starter project pre-configured for VSCode (WSL2)</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Laravel Development on Windows in 2022"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">akmolina28</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/1c95ad1ad2690e4836a31dc4b7dd8a2b5a304f7e11f14e3af1ad2b58894d3a43/akmolina28/laravel-vscode-example" alt="Laravel Development on Windows in 2022"></div></a></figure><p>Some of the highlights:</p><ul><li>Full intellisense and autocomplete for Laravel framework, including Facades, Eloquent, and custom model classes.</li><li>&quot;Quick fixes&quot; to automatically fix common syntax errors.</li><li>Automatic code formatting according to predefined standards.</li><li>Snippets to generate commonly used code blocks.</li><li>Code navigation, quickly jump from caller to declaration.</li><li>ESLint and Hot Module Reloading (HMR).</li><li>Integrated Bash shell.</li><li>Inline Git 
history/blame.</li><li>Shortcuts and palette commands for PHPUnit, Artisan, and more.</li></ul><h2 id="installing-wsl2">Installing WSL2</h2><p>The first thing you should do is install Windows Subsystem for Linux (WSL). Even if this is the only advice you take from this post, just install WSL and start playing with it. WSL has completely changed the way I develop in Windows, as it finally provides a really fast and robust way to run Linux distributions.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.microsoft.com/en-us/windows/wsl/about"><div class="kg-bookmark-content"><div class="kg-bookmark-title">What is Windows Subsystem for Linux</div><div class="kg-bookmark-description">Learn about the Windows Subsystem for Linux, including the different versions and ways you can use them.</div><div class="kg-bookmark-metadata"><span class="kg-bookmark-author">Microsoft Docs</span><span class="kg-bookmark-publisher">craigloewen-msft</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.microsoft.com/en-us/media/logos/logo-ms-social.png" alt="Laravel Development on Windows in 2022"></div></a></figure><p>I am using Ubuntu 20.04, but any distro that supports Docker would work. If you are new to WSL, here are two things to help get you started after you install your distro of choice:</p><ol><li>You can start up a shell from the Start Menu by typing the name of your distro. From a Windows point-of-view, WSL looks like any other App. 
You may need to run as Administrator for all features to work properly.</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-1.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="799" height="643" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-1.png 600w, https://kleypot.com/content/images/2022/07/image-1.png 799w" sizes="(min-width: 720px) 720px"><figcaption>Launching WSL2 from the Start Menu</figcaption></figure><p>2. The WSL file system is mounted as a virtual drive which you can access with the file explorer at <code>\\wsl$</code>.</p><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2022/07/image-2.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="581" height="347"></figure><h2 id="visual-studio-code-remote-development">Visual Studio Code Remote Development</h2><p>Next, install VS Code if you haven&apos;t already, and make sure you have the <strong>Remote Development</strong> extension. Unlike Visual Studio which uses Solution files and Project files to organize projects, Visual Studio Code uses workspace folders. You simply open a folder from your file system, and that becomes your workspace in VS Code. 
</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://code.visualstudio.com/docs/remote/wsl"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Developing in the Windows Subsystem for Linux with Visual Studio Code</div><div class="kg-bookmark-description">Using Visual Studio Code Remote Development with the Windows Subsystem for Linux (WSL)</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://code.visualstudio.com/favicon.ico" alt="Laravel Development on Windows in 2022"><span class="kg-bookmark-author">Microsoft</span><span class="kg-bookmark-publisher">Microsoft</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://code.visualstudio.com/opengraphimg/opengraph-docs.png" alt="Laravel Development on Windows in 2022"></div></a></figure><p>Remote Development is a very powerful, first-party extension which allows you to open workspaces on remote machines over SSH, or within WSL or Docker containers. VS Code will automatically run a server within your WSL instance, giving it full access to the Linux terminal and file system. </p><h2 id="docker-desktop-and-laravel-sail">Docker Desktop and Laravel Sail</h2><p>The next dependency is Docker Desktop for WSL2. Docker allows you to run Laravel Sail, a portable development environment which uses containers to run all of Laravel&apos;s dependencies like php, mysql, and redis. 
This saves you from having to manage all of those dependencies yourself, and it makes your development environment portable.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.docker.com/desktop/install/windows-install/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Install Docker Desktop on Windows</div><div class="kg-bookmark-description">How to install Docker Desktop for Windows</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.docker.com/favicons/docs@2x.ico" alt="Laravel Development on Windows in 2022"><span class="kg-bookmark-author">Docker Documentation</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.docker.com/favicons/docs@2x.ico" alt="Laravel Development on Windows in 2022"></div></a></figure><p>Once Docker is installed and running, follow the Laravel documentation to set up Sail.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://laravel.com/docs/9.x#getting-started-on-windows"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Laravel - The PHP Framework For Web Artisans</div><div class="kg-bookmark-description">Laravel is a PHP web application framework with expressive, elegant syntax. We&#x2019;ve already laid the foundation &#x2014; freeing you to create without sweating the small things.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://laravel.com/img/favicon/apple-touch-icon.png" alt="Laravel Development on Windows in 2022"><span class="kg-bookmark-author">The PHP Framework For Web Artisans</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://laravel.com/img/og-image.jpg" alt="Laravel Development on Windows in 2022"></div></a></figure><pre><code>$ curl -s https://laravel.build/laravel-vscode-example | bash

$ cd laravel-vscode-example

$ ./vendor/bin/sail up -d

$ code .</code></pre><p>The final command above will start VS Code in your WSL workspace. You should also be able to view the website at <code>http://localhost/</code>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-4.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1180" height="726" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-4.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-4.png 1000w, https://kleypot.com/content/images/2022/07/image-4.png 1180w" sizes="(min-width: 720px) 720px"><figcaption>Laravel application default page</figcaption></figure><p>This is the bare minimum to get started with Laravel development on Windows, and this is where you can start bringing in more extensions to make working in VS Code much more efficient.</p><h2 id="scaffolding-with-laravel-breeze">Scaffolding with Laravel Breeze</h2><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://laravel.com/docs/9.x/starter-kits#laravel-breeze"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Laravel - The PHP Framework For Web Artisans</div><div class="kg-bookmark-description">Laravel is a PHP web application framework with expressive, elegant syntax. We&#x2019;ve already laid the foundation &#x2014; freeing you to create without sweating the small things.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://laravel.com/img/favicon/apple-touch-icon.png" alt="Laravel Development on Windows in 2022"><span class="kg-bookmark-author">The PHP Framework For Web Artisans</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://laravel.com/img/og-image.jpg" alt="Laravel Development on Windows in 2022"></div></a></figure><p>This is an optional step which I am including in order to have a more fully-fledged project to work with. 
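One small note before continuing: the Breeze commands later in this section invoke a bare <code>sail</code> executable rather than <code>./vendor/bin/sail</code>. This assumes you have set up the shell alias suggested in the Laravel Sail documentation (add it to your <code>~/.bashrc</code> inside WSL); a sketch of one documented form of the alias:

```shell
# Shell alias from the Laravel Sail docs: prefer a local ./sail script if the
# project ships one, otherwise fall back to vendor/bin/sail.
alias sail='sh $([ -f sail ] && echo sail || echo vendor/bin/sail)'

# With the alias in place, commands shorten from ./vendor/bin/sail to sail:
# sail up -d
# sail artisan migrate
```

If you skip the alias, simply keep typing <code>./vendor/bin/sail</code> in place of <code>sail</code>.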
Laravel Breeze will scaffold up a basic application with authentication and a user dashboard, plus it will install a front-end framework (I am using Vue.js in this example). Breeze will also configure Hot Module Reloading (HMR) which is critical for rapid front-end development.</p><p>Note that you can now use the WSL terminal in VS Code! Open the terminal with Ctrl+`.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-6.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1516" height="885" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-6.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-6.png 1000w, https://kleypot.com/content/images/2022/07/image-6.png 1516w" sizes="(min-width: 1200px) 1200px"><figcaption>Installing Laravel Breeze using the WSL terminal in VS Code</figcaption></figure><pre><code>$ sail composer require laravel/breeze --dev

$ sail artisan breeze:install vue

$ sail artisan migrate

$ sail npm install

$ sail npm run dev</code></pre><p>The final command above will start the Vite development server which serves up all of the client side assets. You should see some output in the console with links to where the site is running (Ctrl+click to open).</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-9.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1040" height="389" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-9.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-9.png 1000w, https://kleypot.com/content/images/2022/07/image-9.png 1040w"><figcaption>Laravel Vite</figcaption></figure><p>As of the time I am writing this, there are two issues that aren&apos;t addressed in the official documentation for Sail users on Windows.</p><ol><li>The default <code>APP_URL</code> will not be accessible because Windows cannot resolve it. You can either edit the hosts file to point <code>APP_URL</code> to <code>127.0.0.1</code>, or you can change the <code>APP_URL</code> setting in your <code>.env</code> file to <code>http://localhost</code>. I prefer the latter approach.</li><li>When using Sail, Vite will try to serve assets from <code>0.0.0.0</code>, which will not resolve outside of the Docker containers. To fix this, manually configure the host for HMR in <code>vite.config.js</code>.</li></ol><pre><code class="language-JavaScript">export default defineConfig({
    server: {
        hmr: {
            host: &apos;localhost&apos;,
        },
    },
    // ...
});</code></pre><p>Vite should automatically restart after saving the files above, giving you a new link which should open now. Make sure you can also access the routes that were added by Breeze like <code>/register</code> and <code>/login</code>.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-11.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="871" height="732" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-11.png 600w, https://kleypot.com/content/images/2022/07/image-11.png 871w" sizes="(min-width: 720px) 720px"><figcaption>Laravel Breeze Register page</figcaption></figure><p>While the Register page is open, go to the file <code>resources/js/Pages/Auth/Register.vue</code> and make a small change. For example, change &quot;Confirm Password&quot; to &quot;Re-type Password&quot;. As long as Vite is still running, you should see your changes instantly in the browser.</p><h2 id="test-and-build-tasks">Test and Build Tasks</h2><p>Now that Sail is up and running, we can configure the default Tasks for running Vite. While you can easily run these tasks manually in the terminal, I prefer the convenience of using hotkeys rather than switching to the terminal and running a sail command. </p><figure class="kg-card kg-code-card"><pre><code>{
  &quot;version&quot;: &quot;2.0.0&quot;,
  &quot;tasks&quot;: [
    {
      &quot;label&quot;: &quot;Run Laravel&quot;,
      &quot;type&quot;: &quot;shell&quot;,
      &quot;command&quot;: &quot;./vendor/bin/sail npm run dev&quot;,
      &quot;group&quot;: {
        &quot;kind&quot;: &quot;test&quot;,
        &quot;isDefault&quot;: true
      },
      &quot;presentation&quot;: {
        &quot;reveal&quot;: &quot;always&quot;,
        &quot;panel&quot;: &quot;new&quot;
      }
    },
    {
      &quot;label&quot;: &quot;Build for Production&quot;,
      &quot;type&quot;: &quot;shell&quot;,
      &quot;command&quot;: &quot;./vendor/bin/sail npm run build&quot;,
      &quot;group&quot;: {
        &quot;kind&quot;: &quot;build&quot;,
        &quot;isDefault&quot;: true
      },
      &quot;presentation&quot;: {
        &quot;reveal&quot;: &quot;always&quot;,
        &quot;panel&quot;: &quot;new&quot;
      }
    }
  ]
}</code></pre><figcaption>.vscode/tasks.json</figcaption></figure><p>After adding the <code>tasks.json</code> file to the project, you can open the command palette with <code>ctrl+shift+P</code> and...</p><ul><li>Type &quot;Run Test Task&quot; to start the Vite development server, or</li><li>Type &quot;Run Build Task&quot; to build for production</li></ul><p>You can also set up keyboard shortcuts to access these commands more quickly.</p><figure class="kg-card kg-code-card"><pre><code>[
  {
    &quot;key&quot;: &quot;ctrl+shift+r&quot;,
    &quot;command&quot;: &quot;workbench.action.tasks.test&quot;
  },
  {
    &quot;key&quot;: &quot;ctrl+shift+b&quot;,
    &quot;command&quot;: &quot;workbench.action.tasks.build&quot;
  }
]</code></pre><figcaption>keybindings.json</figcaption></figure><h2 id="extensions-for-php-and-laravel">Extensions for PHP and Laravel</h2><p>Visual Studio code comes with very little PHP support out of the box. You get basic syntax highlighting and some auto-complete support. Open any PHP file and write some code, and you&apos;ll quickly see how limited these features are.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-12.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1203" height="743" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-12.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-12.png 1000w, https://kleypot.com/content/images/2022/07/image-12.png 1203w" sizes="(min-width: 720px) 720px"><figcaption>No autocomplete for basic Laravel library functions</figcaption></figure><h3 id="php-intelephense">PHP Intelephense</h3><p>The first extension to grab is <a href="https://marketplace.visualstudio.com/items?itemName=bmewburn.vscode-intelephense-client">PHP Intelephense</a>, a feature-rich PHP language server which juices up VS Code&apos;s syntax highlighting and code navigation.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-15.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1299" height="1001" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-15.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-15.png 1000w, https://kleypot.com/content/images/2022/07/image-15.png 1299w" sizes="(min-width: 1200px) 1200px"><figcaption>Quick fix for missing use statement</figcaption></figure><p>Right away you should see that errors are highlighted in the editor and in the explorer. 
There are also many &quot;Quick fixes&quot; available for common issues like missing imports. Intelephense can also format your code according to the latest PSR standards. I recommend turning on Format on Save.</p><h3 id="laravel-ide-helper">laravel-ide-helper</h3><p>Now we have a good backbone for PHP, but we lack support for Laravel development. This problem is not unique to VS Code, and there is actually a <a href="https://github.com/barryvdh/laravel-ide-helper">composer package</a> to fix it.</p><pre><code>$ sail composer require --dev barryvdh/laravel-ide-helper

$ sail artisan ide-helper:generate

$ sail artisan ide-helper:models</code></pre><p>These artisan commands will generate PHPDocs for everything in Laravel. Now, Intelephense should give you full autocomplete for almost everything you need to do in php.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-16.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1245" height="904" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-16.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-16.png 1000w, https://kleypot.com/content/images/2022/07/image-16.png 1245w" sizes="(min-width: 720px) 720px"><figcaption>Full intellisense for Laravel</figcaption></figure><h3 id="intellisense-for-user-generated-code">Intellisense for User-generated code</h3><p>If you want intellisense for your own code, you will need to correctly decorate your functions and classes with PHPDocs. This is not very well documented in the PHP Intelephense docs, but there is a snippet available to do most of the work for you. 
If you type <code>/**</code> and press enter above a declaration, Intelephense will set up the PHPDoc with blanks for you to fill in and tab through (use the tab key).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-20.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1039" height="451" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-20.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-20.png 1000w, https://kleypot.com/content/images/2022/07/image-20.png 1039w" sizes="(min-width: 720px) 720px"><figcaption>Example PHPDoc</figcaption></figure><p>Once your PHPDoc is filled out, you should see the info reflected by intellisense.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-21.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1971" height="597" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-21.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-21.png 1000w, https://kleypot.com/content/images/size/w1600/2022/07/image-21.png 1600w, https://kleypot.com/content/images/2022/07/image-21.png 1971w" sizes="(min-width: 1200px) 1200px"><figcaption>Intellisense for user-generated code</figcaption></figure><h3 id="laravel-artisan">Laravel Artisan</h3><p>While you can use the terminal to run artisan commands (e.g. <code>sail artisan make:model MyModel</code>), the <a href="https://marketplace.visualstudio.com/items?itemName=ryannaddy.laravel-artisan">Laravel Artisan</a> extension makes life easier by adding commands to the command palette. Each command&apos;s options are explicitly shown; for example, when you create a model, the extension will ask if you also want to create a factory, a seeder, a controller, and so on. 
You can (and should) also assign keyboard shortcuts to frequently used commands, like <code>migrate</code> or <code>cache:clear</code>. </p><p>To make this extension work with Sail, add two settings to your workspace:</p><figure class="kg-card kg-code-card"><pre><code>{
  &quot;artisan.docker.command&quot;: &quot;./vendor/bin/sail&quot;,
  &quot;artisan.docker.enabled&quot;: true,
}</code></pre><figcaption>.vscode/settings.json</figcaption></figure><h3 id="laravel-extension-pack">Laravel Extension Pack</h3><p><a href="https://marketplace.visualstudio.com/items?itemName=onecentlin.laravel-extension-pack">Laravel Extension Pack</a> is a package of several extensions which are useful for Laravel development. Along with Laravel Artisan, it includes several extensions for snippets and blade support. I recommend looking at the list of included extensions and manually installing whichever seem useful for your project. For example, I prefer Vue/React for my view layer, so I do not bother with Blade extensions.</p><h2 id="extensions-for-vue-and-javascript">Extensions for Vue and JavaScript</h2><p>Now that PHP is set up, we need support for our JavaScript framework. If you open a Vue file, right away you will notice that VS Code has no out-of-the-box support for Vue.js. First we will grab an add-on for language support, followed by additional addons for snippets, code formatting, and linting.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-17.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1551" height="886" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-17.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-17.png 1000w, https://kleypot.com/content/images/2022/07/image-17.png 1551w" sizes="(min-width: 720px) 720px"><figcaption>Vue.js file with no language support (formatted as Plain Text)</figcaption></figure><h3 id="volar">Volar</h3><p><a href="https://marketplace.visualstudio.com/items?itemName=Vue.volar">Volar </a>is the standard extension for vue 3 (for Vue 2, check out Vetur). The extension reads the <code>jsconfig.json</code> file that ships with Laravel 9, so no additional configuration should be needed. 
Once installed, you should have syntax highlighting and basic intellisense.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-18.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1328" height="883" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-18.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-18.png 1000w, https://kleypot.com/content/images/2022/07/image-18.png 1328w" sizes="(min-width: 720px) 720px"><figcaption>Vue.js file with Volar language support</figcaption></figure><h3 id="prettier-and-eslint">Prettier and ESLint</h3><p>Now that we have language support, we can bring in extensions for code formatting and style enforcement. <a href="https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode">Prettier </a>is a code formatter with support for Vue.js, and <a href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint">ESLint </a>is the standard tool for linting JavaScript code and enforcing code standards. Together, these plugins can detect syntax and style errors, and offer quick fixes for common issues. In my opinion, these are must-have plugins for Vue.js development. </p><p>Prettier and ESLint require a few additional dependencies:</p><pre><code>$ sail npm install prettier@^2.5.1 --save-dev

$ sail npm install eslint@^8.7.0 --save-dev

$ sail npm install vue-eslint-parser@^8.0.0 --save-dev

$ sail npm install eslint-plugin-vue --save-dev

$ sail npm install eslint-config-prettier --save-dev

$ sail npm install vite-plugin-eslint --save-dev

$ echo {} &gt; .prettierrc.json

$ touch .eslintrc.json

$ touch .eslintignore</code></pre><p>Then, edit <code>.eslintrc.json</code> as follows:</p><pre><code>{
  &quot;root&quot;: true,
  &quot;extends&quot;: [
    &quot;eslint:recommended&quot;,
    &quot;plugin:vue/vue3-recommended&quot;,
    &quot;prettier&quot;
  ],
  &quot;rules&quot;: {
    &quot;vue/multi-word-component-names&quot;: &quot;off&quot;
  }
}
</code></pre><p>Next, edit .eslintignore to ignore vendor scripts and autogenerated files:</p><pre><code>node_modules
public
vendor
**/dist
**/components.d.ts
!/docs/.vitepress
resources/js/ziggy.js</code></pre><p>Finally, add ESLint to the Vite config so that lint errors are detected and displayed on the development server:</p><figure class="kg-card kg-code-card"><pre><code>import eslint from &quot;vite-plugin-eslint&quot;;

export default defineConfig({
  plugins: [
    // add eslint() alongside the laravel() plugin from the default config
    eslint(),
  ],
});
</code></pre><figcaption>Snippet from vite.config.js</figcaption></figure><p>Now, when you open a Vue or JavaScript file, VS Code should highlight syntax errors and warnings in your code. When you save the file, Prettier should automatically format it to comply with the lint rules. Any issues that cannot be fixed by the formatter can either be fixed by the &quot;Quick fixes&quot;, or otherwise must be fixed by hand. ESLint will also run when the Vite build runs, and you will see any errors and warnings rendered in the browser as well as the terminal.</p><h3 id="vue-vscode-snippets">Vue VSCode Snippets</h3><p>For Vue development, <a href="https://marketplace.visualstudio.com/items?itemName=sdras.vue-vscode-snippets">Vue VSCode Snippets</a> is a must-have extension which offers tons of help when you can&apos;t remember syntax, and boilerplate templates to fill out new components.</p><h2 id="phpunit-extensions">PHPUnit Extensions</h2><p>If you are doing Laravel right, you are writing tests for all of your new modules. Sail includes a solid implementation of PHPUnit, but you are still required to run your tests manually in the terminal. VS Code fortunately has several community extensions for running PHPUnit tests in a more seamless way. I prefer the extension <a href="https://marketplace.visualstudio.com/items?itemName=emallin.phpunit">PHPUnit</a> for ease of configuration. </p><p>After installing, you will have to configure it to use Sail. </p><figure class="kg-card kg-code-card"><pre><code>{
  &quot;phpunit.command&quot;: &quot;./vendor/bin/sail&quot;,
  &quot;phpunit.paths&quot;: {
    &quot;${workspaceFolder}&quot;: &quot;/var/www/html&quot;
  },
  &quot;phpunit.phpunit&quot;: &quot;test&quot;
}</code></pre><figcaption>.vscode/settings.json</figcaption></figure><p>Now you can use the command palette to run your test suites. You can also run individual tests based on the position of your cursor. You should consider setting up keyboard shortcuts so you can easily run the test you are currently working on. These shortcuts will save you from having to manually run tests in the terminal, which can be very tedious and disruptive to your workflow.</p><figure class="kg-card kg-code-card"><pre><code>[
  {
    &quot;key&quot;: &quot;ctrl+alt+t&quot;,
    &quot;command&quot;: &quot;phpunit.Test&quot;,
    &quot;when&quot;: &quot;editorFocus&quot;
  },
  {
    &quot;key&quot;: &quot;ctrl+shift+alt+t&quot;,
    &quot;command&quot;: &quot;phpunit.TestSuite&quot;
  }
]</code></pre><figcaption>keybindings.json</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image-19.png" class="kg-image" alt="Laravel Development on Windows in 2022" loading="lazy" width="1716" height="1260" srcset="https://kleypot.com/content/images/size/w600/2022/07/image-19.png 600w, https://kleypot.com/content/images/size/w1000/2022/07/image-19.png 1000w, https://kleypot.com/content/images/size/w1600/2022/07/image-19.png 1600w, https://kleypot.com/content/images/2022/07/image-19.png 1716w" sizes="(min-width: 720px) 720px"><figcaption>Unit test executed from editor focus hotkey</figcaption></figure><h2 id="php-debugging">PHP Debugging</h2><p>The latest version of Sail now includes Xdebug, making it easier than ever to set up native PHP debugging in VS Code. The <a href="https://marketplace.visualstudio.com/items?itemName=xdebug.php-debug">PHP Debug</a> extension allows you to set breakpoints and evaluate your variables and call stacks during runtime. The Debug Console also allows you to run PHP statements using the debugger&apos;s context.</p><ol><li>Add a new value to <code>.env</code></li></ol><figure class="kg-card kg-code-card"><pre><code>SAIL_XDEBUG_MODE=develop,debug</code></pre><figcaption>.env</figcaption></figure><p>2. Restart Sail so the new environment variable takes effect, then add a &quot;Listen for Xdebug&quot; configuration to <code>.vscode/launch.json</code> &#x2013; the PHP Debug extension can generate one for you. Xdebug 3 connects on port 9003 by default, and you may need to map <code>/var/www/html</code> to <code>${workspaceFolder}</code> under <code>pathMappings</code> for breakpoints to bind.</p><h2 id="other-useful-extensions">Other Useful Extensions</h2><p>If you are new to VS Code, make sure to check out other popular extensions on the marketplace. 
Here are some other recommendations which I use in all of my projects, not just Laravel:</p><ul><li><a href="https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-docker">Docker</a> - manage Sail containers through the VS Code UI</li><li><a href="https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer-2">Bracket Pair Colorizer</a> - make it easier to match opening and closing brackets</li><li><a href="https://marketplace.visualstudio.com/items?itemName=naumovs.color-highlight">Color Highlight</a> - highlight hex codes and rgb colors</li><li><a href="https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens">GitLens</a> - inline blame and advanced Git features</li><li><a href="https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-close-tag">Auto Close Tag</a> - automatically create HTML close tags, e.g. <code>&lt;/div&gt;</code></li><li><a href="https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-rename-tag">Auto Rename Tag</a> - automatically rename HTML close tag when open tag changes</li><li><a href="https://marketplace.visualstudio.com/items?itemName=Janne252.fontawesome-autocomplete">Font Awesome Auto-complete</a> - search and preview icons</li><li><a href="https://marketplace.visualstudio.com/items?itemName=lacroixdavid1.vscode-format-context-menu">Format in context menus</a> - add right click format option to file explorer</li></ul><p>Thank you for reading! 
Feel free to post any questions or issues to <a href="https://github.com/akmolina28/laravel-vscode-example/issues">Github</a>.</p>]]></content:encoded></item><item><title><![CDATA[Vue.js Single Page Application with ASP.NET MVC 5]]></title><description><![CDATA[In this post I will share how I set up an ASP.NET MVC 5 project as a SPA using Vue.js.]]></description><link>https://kleypot.com/vue-js-single-page-application-asp-net-mvc-5/</link><guid isPermaLink="false">63052aeb3ecc781c55057cad</guid><category><![CDATA[software-development]]></category><category><![CDATA[vue.js]]></category><category><![CDATA[c#]]></category><category><![CDATA[asp.net]]></category><category><![CDATA[mvc]]></category><category><![CDATA[mvc-5]]></category><category><![CDATA[visual-studio]]></category><category><![CDATA[visual-studio-code]]></category><category><![CDATA[webpack]]></category><category><![CDATA[node.js]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Fri, 08 Jul 2022 15:09:24 GMT</pubDate><media:content url="https://kleypot.com/content/images/2022/07/1200px-Vue.js_Logo_2.svg-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://kleypot.com/content/images/2022/07/1200px-Vue.js_Logo_2.svg-1.png" alt="Vue.js Single Page Application with ASP.NET MVC 5"><p>In this post I will share how I set up an ASP.NET MVC 5 project as a SPA using <a href="https://vuejs.org/">Vue.js</a>. I will walk through each step of constructing this template so you can see what each piece does and how you may need to modify it for your tastes. 
I will also include some tips on how to set up your environment for rapid development.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://github.com/akmolina28/mvc5-vuejs-template"><div class="kg-bookmark-content"><div class="kg-bookmark-title">GitHub - akmolina28/mvc5-vuejs-template</div><div class="kg-bookmark-description">Contribute to akmolina28/mvc5-vuejs-template development by creating an account on GitHub.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://github.githubassets.com/favicons/favicon.svg" alt="Vue.js Single Page Application with ASP.NET MVC 5"><span class="kg-bookmark-author">GitHub</span><span class="kg-bookmark-publisher">akmolina28</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://opengraph.githubassets.com/742c3da28d4731e25bc27e1d4e86f1ac0e634f05860a9d5214f3b39ff605429d/akmolina28/mvc5-vuejs-template" alt="Vue.js Single Page Application with ASP.NET MVC 5"></div></a></figure><p>The completed template is available now on Github. 
If you&apos;re already comfortable with npm and webpack, feel free to jump straight to the source code.</p><p><strong>Key Features:</strong></p><ul><li>Responsive design using <a href="https://bulma.io/">Bulma</a> (or the framework of your choice via npm).</li><li>True SPA with client-side routing.</li><li>Dependency injection using Ninject.</li><li>Hot module reloading for rapid development (browser is automatically refreshed when you change a file).</li><li>Browser-sync for testing in multiple browsers or viewports at once.</li><li>Bundling, minification, cache-busting, and source-maps for all static files.</li><li>MSBuild events to run the webpack build when building the MVC project or publishing the project manually or via CI/CD</li><li>ESLint integration for Vue and JS files.</li></ul><h2 id="model-vue-controller">Model-Vue-Controller</h2><p>The basic idea is to have the MVC application function as a headless API for your basic CRUD operations. Then, instead of using Razor and jQuery for the View layer, we will use Vue. The routing is also handled on the client-side using Vue Router to create a true single-page application.</p><h4 id="strip-down">Strip Down</h4><p>The first thing I did after creating a new MVC 5 application in VS2019 was to start stripping out all of the default client-side tooling. </p><ol><li>Uninstall nuget packages: bootstrap, Microsoft.jQuery.Unobtrusive.Validation, jQuery.Validation, and jQuery.</li><li>Remove folders ~/Content, ~/fonts, and ~/Scripts</li><li>Remove BundleConfig.cs (bundling will be handled by webpack)</li><li>Remove entire Views folder (razor views will be replaced by Vue components)</li></ol><p>This clears the way to begin pulling dependencies in via npm and setting up our SPA.</p><h4 id="spa-routing">SPA Routing</h4><p>Next, we need to set up our default route. Instead of the traditional routes for controller actions, we will create one catch-all route to return our single page. 
Start by removing the HomeController, and adding a new controller called SpaController.</p><figure class="kg-card kg-code-card"><pre><code class="language-C#">public class SpaController : Controller
{
    public ActionResult Index()
    {
        return File(&quot;~/dist/index.html&quot;, &quot;text/html&quot;);
    }
}</code></pre><figcaption>~/Controllers/SpaController.cs</figcaption></figure><p>The SpaController has a single action which returns <code>index.html</code>, which will be our single page. </p><p>Finally we just need to update the routes so that all routes go to our new SPA action. Remove the default route and map a new route like this:</p><figure class="kg-card kg-code-card"><pre><code class="language-C#">public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute(&quot;{resource}.axd/{*pathInfo}&quot;);

        routes.MapRoute(
            name: &quot;SPA&quot;,
            url: &quot;{*catchall}&quot;,
            defaults: new { controller = &quot;Spa&quot;, action = &quot;Index&quot; }
        );
    }
}</code></pre><figcaption>~/App_Start/RouteConfig.cs</figcaption></figure><h2 id="hello-vue">Hello Vue</h2><p>Now that the project has been stripped down, we can start fresh with npm. Our first goal is to get a simple hello world working with Vue. </p><h4 id="source-code">Source Code</h4><p>All of our client-side source code will go into a new folder called <code>src</code>. Let&apos;s start by adding three files there:</p><p>1. Create <code>~/src/index.html</code>. This is the &quot;single page&quot; which is served up for our SPA. Vue will hook into the div in the body.</p><figure class="kg-card kg-code-card"><pre><code class="language-HTML">&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
    &lt;meta charset=&quot;utf-8&quot; /&gt;
    &lt;title&gt;MVC 5 Vue.js Template&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;div id=&quot;app&quot;&gt;
    &lt;hello-world&gt;&lt;/hello-world&gt;
  &lt;/div&gt;
&lt;/body&gt;
&lt;/html&gt;</code></pre><figcaption>index.html</figcaption></figure><p>2. Create <code>~/src/components/HelloWorld.vue</code>. This is a simple <a href="https://vuejs.org/v2/guide/single-file-components.html">Vue component</a> with basic reactivity.</p><figure class="kg-card kg-code-card"><pre><code class="language-VUE">&lt;template&gt;
  &lt;div&gt;
    &lt;h1&gt;{{ message }}&lt;/h1&gt;
    &lt;input v-model=&quot;message&quot; type=&quot;text&quot; /&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script&gt;
export default {
  data() {
    return {
      message: &apos;Hello from Vue!&apos;
    };
  },
};
&lt;/script&gt;</code></pre><figcaption>HelloWorld.vue</figcaption></figure><p>3. Create <code>~/src/js/app.js</code>. This is the entry-point for the application which initializes the HelloWorld component into our div in <code>index.html</code>.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">import Vue from &apos;vue&apos;;

import HelloWorld from &quot;../components/HelloWorld&quot;;

new Vue({
  el: &apos;#app&apos;,
  components: {
    HelloWorld
  }
});
</code></pre><figcaption>app.js</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-1.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="463" height="616"><figcaption>Source code</figcaption></figure><h4 id="build-configuration">Build Configuration</h4><p>Next, we have to set up webpack to build the assets above into something that our web browser can understand. First we need to initialize our npm project and pull in our dependencies.</p><pre><code class="language-BASH">$ npm init
$ npm i -s vue vue-template-compiler
$ npm i -D webpack webpack-cli
$ npm i -D vue-loader css-loader postcss
$ npm i -D html-webpack-plugin</code></pre><ul><li>vue and vue-template-compiler are provided by Vue.js for processing Vue files</li><li>webpack and webpack-cli are used to set up our build process</li><li>vue-loader, css-loader, and postcss are to help webpack make sense of our source code</li><li>html-webpack-plugin is to inject the script tags into index.html. This is important because the bundle filenames change on every build due to cache busting.</li></ul><p>The final step is to add the file <code>~/webpack.config.js</code>. If you are not familiar with webpack, this is the most overwhelming part of this setup. Here is a boilerplate config to start with, but note that this might need to be tweaked for your specific environment and/or package versions.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">const HtmlWebpackPlugin = require(&apos;html-webpack-plugin&apos;);
const { VueLoaderPlugin } = require(&apos;vue-loader&apos;);
const path = require(&apos;path&apos;);

module.exports = {
  mode: &apos;development&apos;,
  devtool: &apos;eval&apos;,
  entry: [
    &apos;./src/js/app.js&apos;,
  ],
  output: {
    clean: true,
    path: path.resolve(__dirname, &apos;dist&apos;),
    publicPath: &apos;/dist&apos;,
    filename: &apos;[name].bundle.[contenthash].js&apos;,
  },
  resolve: {
    // point bundler to the vue template compiler
    alias: {
      &apos;vue$&apos;: &apos;vue/dist/vue.esm.js&apos;,
    },
    // allow imports to omit file extensions, 
    // e.g. &quot;import foo from &apos;foobar&apos;&quot; instead of &quot;import foo from &apos;foobar.js&apos;&quot;
    extensions: [&apos;.js&apos;, &apos;.vue&apos;],
  },
  module: {
    rules: [
      // use vue-loader plugin for .vue files
      {
        test: /\.vue$/,
        use: &apos;vue-loader&apos;
      },
    ],
  },
  plugins: [
    new VueLoaderPlugin(),
    new HtmlWebpackPlugin({
      template: &apos;src/index.html&apos;,
      inject: true,
      // favicon: &apos;src/images/favicon.ico&apos;,
      publicPath: &apos;/dist&apos;
    }),
  ],
};
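
// note: 'development' mode with the 'eval' devtool is for local work only;
// for a publishable build you would typically switch these (assumption --
// wire this up to suit your own build pipeline), e.g.:
//   mode: process.env.NODE_ENV === 'production' ? 'production' : 'development',
//   devtool: process.env.NODE_ENV === 'production' ? 'source-map' : 'eval',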
</code></pre><figcaption>webpack.config.js</figcaption></figure><p>Now we can execute our build by running webpack:</p><pre><code class="language-BASH">$ npx webpack --config=webpack.config.js</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-5.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1078" height="451" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-5.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-5.png 1000w, https://kleypot.com/content/images/2021/09/image-5.png 1078w" sizes="(min-width: 720px) 720px"><figcaption>Successful webpack build</figcaption></figure><p>The build emits two files into the output folder, the script bundle and the modified index.html file. If you open <code>~/dist/index.html</code> you will see that webpack has injected the script tags.</p><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2021/09/image-4.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1020" height="348" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-4.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-4.png 1000w, https://kleypot.com/content/images/2021/09/image-4.png 1020w" sizes="(min-width: 720px) 720px"></figure><p>Finally, if we run the debugger in Visual Studio, we should see the working hello world demo.</p><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2021/09/F91689E5-CEAE-4AFB-BE2F-57EB27F99AB7.GIF" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="328" height="240"></figure><h2 id="styles-and-layout">Styles and Layout</h2><p>Now we have Vue and webpack working, but our app looks pretty ugly. Let&apos;s add some style by installing a CSS framework and setting up a layout. 
For this demo I will use <a href="https://bulma.io/">Bulma</a>, but at this point you could use something else like Bootstrap or Tailwind.</p><p><em>Refer to the latest Bulma documentation for complete setup. This guide may be outdated.</em></p><h4 id="bulma-webpack-setup">Bulma + Webpack Setup</h4><pre><code class="language-BASH">$ npm i -D bulma
$ npm i -D extract-text-webpack-plugin@next mini-css-extract-plugin node-sass sass-loader style-loader
$ npm i -s @fortawesome/fontawesome-svg-core @fortawesome/free-solid-svg-icons @fortawesome/vue-fontawesome@latest</code></pre><ul><li>bulma &#x2013; responsive CSS framework</li><li>extract-text-webpack-plugin, mini-css-extract-plugin, node-sass, sass-loader, style-loader &#x2013; webpack plugins and loaders for bundling the styles</li><li>fortawesome packages &#x2013; free icon pack</li></ul><p>Now we can hook up the styles by adding our application&apos;s main sass file, <code>~/src/sass/app.scss</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-SASS">@charset &quot;utf-8&quot;;
@import &quot;~bulma/bulma&quot;;
</code></pre><figcaption>app.scss</figcaption></figure><p>And we can import app.scss and the icon pack into our bundle.</p><p><em>Remember, app.js is the only defined entry point in our webpack configuration. We either need to import the styles into app.js, or add the styles as a separate entry point.</em></p><figure class="kg-card kg-code-card"><pre><code class="language-JS">import Vue from &apos;vue&apos;;

// load all solid icons
// modify here to load individual icons as needed to reduce bundle size
import { fas } from &apos;@fortawesome/free-solid-svg-icons&apos;;
import { library } from &apos;@fortawesome/fontawesome-svg-core&apos;;
import { FontAwesomeIcon } from &apos;@fortawesome/vue-fontawesome&apos;;
library.add(fas);

import HelloWorld from &quot;../components/HelloWorld&quot;;
import Layout from &quot;../components/Layout&quot;;

// pull in main stylesheet
require(&apos;../sass/app.scss&apos;);

new Vue({
  el: &apos;#app&apos;,
  components: {
    HelloWorld,
    Layout
  }
});</code></pre><figcaption>app.js updated to import app.scss</figcaption></figure><p>The final setup is to update our webpack config with some new rules for our styles. I am using the optional mini-css-extract-plugin to extract the styles into a separate file with its own content-hash.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">const HtmlWebpackPlugin = require(&apos;html-webpack-plugin&apos;);
const { VueLoaderPlugin } = require(&apos;vue-loader&apos;);
const path = require(&apos;path&apos;);
const MiniCssExtractPlugin = require(&apos;mini-css-extract-plugin&apos;);

module.exports = {
  mode: &apos;development&apos;,
  devtool: &apos;eval&apos;,
  entry: [
    &apos;./src/js/app.js&apos;,
  ],
  output: {
    clean: true,
    path: path.resolve(__dirname, &apos;dist&apos;),
    publicPath: &apos;/dist&apos;,
    filename: &apos;[name].bundle.[contenthash].js&apos;,
  },
  resolve: {
    // point bundler to the vue template compiler
    alias: {
      &apos;vue$&apos;: &apos;vue/dist/vue.esm.js&apos;,
    },
    // allow imports to omit file extensions, 
    // e.g. &quot;import foo from &apos;foobar&apos;&quot; instead of &quot;import foo from &apos;foobar.js&apos;&quot;
    extensions: [&apos;.js&apos;, &apos;.vue&apos;],
  },
  module: {
    rules: [
      // use vue-loader plugin for .vue files
      {
        test: /\.vue$/,
        use: &apos;vue-loader&apos;
      },
      // rule for loading .scss files
      {
        test: /\.scss$/,
        use: [
          MiniCssExtractPlugin.loader,
          {
            loader: &apos;css-loader&apos;,
          },
          {
            loader: &apos;sass-loader&apos;,
            options: {
              sourceMap: true,
            },
          },
        ],
      }
    ],
  },
  plugins: [
    new VueLoaderPlugin(),
    new MiniCssExtractPlugin({
      filename: &apos;css/[name].bundle.[contenthash].css&apos;,
    }),
    new HtmlWebpackPlugin({
      template: &apos;src/index.html&apos;,
      inject: true,
      //favicon: &apos;src/images/favicon.ico&apos;,
      publicPath: &apos;/dist&apos;
    }),
  ],
};
</code></pre><figcaption>webpack.config.js updated for loading sass</figcaption></figure><p>After these changes, running the build should emit a third file, <code>css/main.bundle.[hash].css</code>:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-6.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1078" height="618" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-6.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-6.png 1000w, https://kleypot.com/content/images/2021/09/image-6.png 1078w" sizes="(min-width: 720px) 720px"><figcaption>Successful webpack build</figcaption></figure><h4 id="responsive-layout">Responsive Layout</h4><p>Now that Bulma is set up, we can begin using the framework by adding a layout to wrap our content. Here is a boilerplate layout file, very similar to the example provided in the Bulma documentation. Add this code to <code>~/src/components/Layout.vue</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-Vue">&lt;template&gt;
  &lt;div class=&quot;container&quot;&gt;
    &lt;nav class=&quot;navbar&quot; role=&quot;navigation&quot; aria-label=&quot;main navigation&quot;&gt;
      &lt;div class=&quot;navbar-brand&quot;&gt;
        &lt;a class=&quot;navbar-item&quot; href=&quot;/&quot;&gt;
          &lt;h4 class=&quot;heading is-size-4&quot;&gt;Mvc5 + Vue.js&lt;/h4&gt;
        &lt;/a&gt;

        &lt;a role=&quot;button&quot;
           :class=&quot;`navbar-burger ${menuActive ? &apos;is-active&apos; : &apos;&apos;}`&quot;
           aria-label=&quot;menu&quot;
           aria-expanded=&quot;false&quot;
           data-target=&quot;navbarBasicExample&quot;
           @click=&quot;menuActive = !menuActive&quot;&gt;
          &lt;span aria-hidden=&quot;true&quot;&gt;&lt;/span&gt;
          &lt;span aria-hidden=&quot;true&quot;&gt;&lt;/span&gt;
          &lt;span aria-hidden=&quot;true&quot;&gt;&lt;/span&gt;
        &lt;/a&gt;
      &lt;/div&gt;

      &lt;div id=&quot;navbarBasicExample&quot;
           :class=&quot;`navbar-menu ${menuActive ? &apos;is-active&apos; : &apos;&apos;}`&quot;&gt;
        &lt;div class=&quot;navbar-start&quot;&gt;
          &lt;a class=&quot;navbar-item&quot;&gt;
            Home
          &lt;/a&gt;

          &lt;a class=&quot;navbar-item&quot;&gt;
            &lt;span class=&quot;icon has-text-primary&quot;&gt;
              &lt;icon icon=&quot;book&quot;&gt;&lt;/icon&gt;
            &lt;/span&gt;
            &lt;span&gt;Documentation&lt;/span&gt;
          &lt;/a&gt;

          &lt;a class=&quot;navbar-item&quot;&gt;
            &lt;span class=&quot;icon has-text-info&quot;&gt;
              &lt;icon icon=&quot;info-circle&quot;&gt;&lt;/icon&gt;
            &lt;/span&gt;
            &lt;span&gt;About&lt;/span&gt;
          &lt;/a&gt;
        &lt;/div&gt;

        &lt;div class=&quot;navbar-end&quot;&gt;
          &lt;div class=&quot;navbar-item&quot;&gt;
            &lt;div class=&quot;buttons&quot;&gt;
              &lt;a class=&quot;button is-primary&quot;&gt;
                &lt;strong&gt;Sign up&lt;/strong&gt;
              &lt;/a&gt;
              &lt;a class=&quot;button is-light&quot;&gt;
                Log in
              &lt;/a&gt;
            &lt;/div&gt;
          &lt;/div&gt;
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/nav&gt;
    &lt;main&gt;
      &lt;slot&gt;&lt;/slot&gt;
    &lt;/main&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script&gt;
  export default {
    data() {
      return {
        menuActive: false,
      };
    },
  };
&lt;/script&gt;

&lt;style lang=&quot;scss&quot; scoped&gt;
@import &quot;~bulma/sass/utilities/mixins&quot;;

@media screen and (min-width: $desktop) {
  .navbar {
    padding: 1rem 0;
  }
}

@media screen and (min-width: $widescreen) {
  .navbar {
    font-size: 1.125rem;
    padding: 2rem 0;
  }
}

.navbar-item &gt; .icon {
  margin-left: -0.25rem;
  margin-right: 0.25rem;
}
&lt;/style&gt;
</code></pre><figcaption>~/src/components/Layout.vue</figcaption></figure><p>And then we just need to update app.js and index.js to use the Layout:</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">import Vue from &apos;vue&apos;;

import HelloWorld from &quot;../components/HelloWorld&quot;;
import Layout from &quot;../components/Layout&quot;;

require(&apos;../sass/app.scss&apos;);

new Vue({
  el: &apos;#app&apos;,
  components: {
    HelloWorld,
    Layout
  }
});</code></pre><figcaption>Add Layout component to app.js</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-HTML">&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
  &lt;meta charset=&quot;utf-8&quot; /&gt;
  &lt;title&gt;MVC 5 Vue.js Template&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;div id=&quot;app&quot;&gt;
    &lt;layout&gt;
      &lt;hello-world&gt;&lt;/hello-world&gt;
    &lt;/layout&gt;
  &lt;/div&gt;
&lt;/body&gt;
&lt;/html&gt;</code></pre><figcaption>Add &lt;layout&gt; component to index.html</figcaption></figure><p>Now if we rebuild the bundle and run the VS debugger, we should see our hello world component styled and rendered in our new layout component.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-7.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1209" height="328" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-7.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-7.png 1000w, https://kleypot.com/content/images/2021/09/image-7.png 1209w" sizes="(min-width: 1200px) 1200px"><figcaption>Bulma Layout</figcaption></figure><h2 id="routing-and-api-setup">Routing and API Setup</h2><p>Now we have almost everything we need to start building out our SPA. At this point, <code>index.html</code> is hard coded to serve up our layout and our hello-world component. We need to change this so that it instead serves up dynamic content depending on which route the user has requested. </p><p>But before I set up the routing, I will set up some models and controller actions so that our routes actually have something useful to serve up.</p><h4 id="api-backend">API Backend</h4><p>Let&apos;s create a simple data model and a data context for storing and reading data. Then we can add our API actions and routing.</p><p>1. Add the data model under <code>~/Models/MovieModel.cs</code></p><figure class="kg-card kg-code-card"><pre><code class="language-C#">namespace mvc5_vuejs_template.Models
{
    public class MovieModel
    {
        public int Id { get; set; }

        public string Title { get; set; }

        public int Year { get; set; }

        public string Director { get; set; }

        public string Studio { get; set; }
    }
}</code></pre><figcaption>MovieModel.cs</figcaption></figure><p>2. Add the data context under <code>~/Services/MovieService.cs</code></p><figure class="kg-card kg-code-card"><pre><code class="language-C#">using System.Collections.Generic;
using System.Linq;
using mvc5_vuejs_template.Models;

namespace mvc5_vuejs_template.Services
{
    public class MovieService
    {
        private static List&lt;MovieModel&gt; _movieContext = new List&lt;MovieModel&gt;()
        {
            new MovieModel()
            {
                Id = 1,
                Title = &quot;Jurassic Park&quot;,
                Director = &quot;Steven Spielberg&quot;,
                Year = 1993,
                Studio = &quot;Universal Pictures&quot;
            },
            new MovieModel()
            {
                Id = 2,
                Title = &quot;Alien&quot;,
                Director = &quot;Ridley Scott&quot;,
                Year = 1979,
                Studio = &quot;20th Century Fox&quot;
            },
            new MovieModel()
            {
                Id = 3,
                Title = &quot;Titanic&quot;,
                Director = &quot;James Cameron&quot;,
                Year = 1997,
                Studio = &quot;Paramount Pictures&quot;
            }
        };

        public IEnumerable&lt;MovieModel&gt; GetMovies()
        {
            return _movieContext;
        }

        public int InsertMovie(MovieModel model)
        {
            int id = _movieContext.Last().Id + 1;

            model.Id = id;
            _movieContext.Add(model);
            return id;
        }
    }
}</code></pre><figcaption>MovieService.cs</figcaption></figure><p>3. Add the actions to <code>~/Controllers/MovieController.cs</code></p><figure class="kg-card kg-code-card"><pre><code class="language-C#">using System.Linq;
using System.Web.Mvc;
using mvc5_vuejs_template.Models;
using mvc5_vuejs_template.Services;

namespace mvc5_vuejs_template.Controllers
{
    public class MovieController : Controller
    {
        private MovieService _movieService;

        public MovieController()
        {
            _movieService = new MovieService();
        }

        public JsonResult Index()
        {
            var movies = _movieService.GetMovies();
            return Json(movies.ToArray(), JsonRequestBehavior.AllowGet);
        }

        [HttpPost]
        public JsonResult Create(MovieModel movie)
        {
            var insertedMovie = _movieService.InsertMovie(movie);
            return Json(insertedMovie);
        }
    }
}</code></pre><figcaption>MovieController.cs</figcaption></figure><p>4. Add the API routes under <code>~/App_Start/RouteConfig.cs</code></p><figure class="kg-card kg-code-card"><pre><code class="language-C#">using System.Web.Mvc;
using System.Web.Routing;

namespace mvc5_vuejs_template
{
    public class RouteConfig
    {
        public static void RegisterRoutes(RouteCollection routes)
        {
            routes.IgnoreRoute(&quot;{resource}.axd/{*pathInfo}&quot;);

            routes.MapRoute(
                name: &quot;API&quot;,
                url: &quot;api/{controller}/{action}/{id}&quot;,
                defaults: new { id = UrlParameter.Optional }
            );

            routes.MapRoute(
                name: &quot;SPA&quot;,
                url: &quot;{*catchall}&quot;,
                defaults: new { controller = &quot;Spa&quot;, action = &quot;Index&quot; }
            );
        }
    }
}</code></pre><figcaption>RouteConfig.cs</figcaption></figure><p>Now our CRUD actions are accessible using API routes. The main difference between an API action and a traditional MVC action is that our API actions only return JSON data, whereas MVC actions typically return Razor views.</p><p>We can test the API by running the debugger and navigating to <code>/api/movie/index</code> to hit the Index action. We should get raw JSON back.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-8.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="561" height="721"><figcaption>/api/movie/index route</figcaption></figure><h4 id="movie-views-and-vue-router">Movie Views and Vue Router</h4><p>Next, add some Vue components to display and add new Movies. Then we can set up client-side routing using Vue Router so we can navigate between the &quot;pages&quot; of our SPA.</p><p>1. Pull in the axios library for making requests to the API.</p><pre><code class="language-BASH">$ npm i -S axios</code></pre><p>2. Add the list-view component under <code>~/src/components/Movie/Index.vue</code>. This component will get the list of movies and render it in a table.</p><figure class="kg-card kg-code-card"><pre><code class="language-Vue">&lt;template&gt;
  &lt;section&gt;
    &lt;div class=&quot;mb-5&quot;&gt;
      &lt;h1 class=&quot;title&quot;&gt;Movies&lt;/h1&gt;
    &lt;/div&gt;
    &lt;div&gt;
      &lt;a class=&quot;button is-link mb-2&quot; @click=&quot;$router.push(&apos;movies/create&apos;)&quot;&gt;
        &lt;icon icon=&quot;plus&quot; class=&quot;mr-2&quot;&gt;&lt;/icon&gt;
        &lt;span&gt;Add Movie&lt;/span&gt;
      &lt;/a&gt;
    &lt;/div&gt;
    &lt;icon v-if=&quot;loading&quot; icon=&quot;spinner&quot; spin&gt;&lt;/icon&gt;
    &lt;table v-else class=&quot;table is-striped&quot;&gt;
      &lt;thead&gt;
        &lt;tr&gt;
          &lt;th&gt;Id&lt;/th&gt;
          &lt;th&gt;Title&lt;/th&gt;
          &lt;th&gt;Year&lt;/th&gt;
          &lt;th&gt;Director&lt;/th&gt;
          &lt;th&gt;Studio&lt;/th&gt;
        &lt;/tr&gt;
      &lt;/thead&gt;
      &lt;tbody&gt;
        &lt;tr v-for=&quot;movie in movies&quot; :key=&quot;movie.Id&quot;&gt;
          &lt;td&gt;{{ movie.Id }}&lt;/td&gt;
          &lt;td&gt;{{ movie.Title }}&lt;/td&gt;
          &lt;td&gt;{{ movie.Year }}&lt;/td&gt;
          &lt;td&gt;{{ movie.Director }}&lt;/td&gt;
          &lt;td&gt;{{ movie.Studio }}&lt;/td&gt;
        &lt;/tr&gt;
      &lt;/tbody&gt;
    &lt;/table&gt;
  &lt;/section&gt;
&lt;/template&gt;

&lt;script&gt;
  import axios from &apos;axios&apos;;

  export default {
    name: &apos;MovieIndex&apos;,
    data() {
      return {
        loading: false,
        movies: [],
      };
    },
    mounted() {
      this.getMovies();
    },
    methods: {
      getMovies() {
        this.loading = true;
        axios.get(&apos;/api/movie/index&apos;)
          .then((response) =&gt; {
            this.movies = response.data;
          })
          .catch((error) =&gt; {
            // eslint-disable-next-line no-console
            console.log(&apos;Error caught when getting movies from the api:&apos;);
            // eslint-disable-next-line no-console
            console.log(error);
          })
          // this final then() runs after success or failure, like finally()
          .then(() =&gt; {
            this.loading = false;
          });
      },
    },
  };
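
  // Note (assumed response shape, based on the sample data in MovieService):
  // the MVC JsonResult serializes C# properties in PascalCase, e.g.
  //   [{ &quot;Id&quot;: 2, &quot;Title&quot;: &quot;Alien&quot;, &quot;Year&quot;: 1979, ... }]
  // which is why the template binds movie.Id, movie.Title, and so on.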
&lt;/script&gt;</code></pre><figcaption>Movie/Index.vue</figcaption></figure><p>3. Add the create view under <code>~/src/components/Movie/Create.vue</code>. This component has a form to create new movies.</p><figure class="kg-card kg-code-card"><pre><code class="language-Vue">&lt;template&gt;
  &lt;div&gt;
    &lt;div class=&quot;mb-5&quot;&gt;
      &lt;h1 class=&quot;title&quot;&gt;Create Movie&lt;/h1&gt;
    &lt;/div&gt;
    &lt;div class=&quot;columns&quot;&gt;
      &lt;div class=&quot;column is-half&quot;&gt;
        &lt;form @submit.prevent=&quot;submitForm&quot;&gt;
          &lt;div class=&quot;field&quot;&gt;
            &lt;label class=&quot;label&quot;&gt;Title&lt;/label&gt;
            &lt;div class=&quot;control&quot;&gt;
              &lt;input v-model=&quot;title&quot; class=&quot;input&quot; type=&quot;text&quot; /&gt;
            &lt;/div&gt;
          &lt;/div&gt;
          &lt;div class=&quot;field&quot;&gt;
            &lt;label class=&quot;label&quot;&gt;Year&lt;/label&gt;
            &lt;div class=&quot;control&quot;&gt;
              &lt;input v-model=&quot;year&quot; class=&quot;input&quot; type=&quot;text&quot; /&gt;
            &lt;/div&gt;
          &lt;/div&gt;
          &lt;div class=&quot;field&quot;&gt;
            &lt;label class=&quot;label&quot;&gt;Director&lt;/label&gt;
            &lt;div class=&quot;control&quot;&gt;
              &lt;input v-model=&quot;director&quot; class=&quot;input&quot; type=&quot;text&quot; /&gt;
            &lt;/div&gt;
          &lt;/div&gt;
          &lt;div class=&quot;field&quot;&gt;
            &lt;label class=&quot;label&quot;&gt;Studio&lt;/label&gt;
            &lt;div class=&quot;control&quot;&gt;
              &lt;input v-model=&quot;studio&quot; class=&quot;input&quot; type=&quot;text&quot; /&gt;
            &lt;/div&gt;
          &lt;/div&gt;
          &lt;div class=&quot;field is-grouped&quot;&gt;
            &lt;div class=&quot;control&quot;&gt;
              &lt;button class=&quot;button is-link&quot;&gt;Submit&lt;/button&gt;
            &lt;/div&gt;
            &lt;div class=&quot;control&quot;&gt;
              &lt;a class=&quot;button is-link is-light&quot;
                 @click=&quot;$router.push(&apos;/movies&apos;)&quot;&gt;
                Cancel
              &lt;/a&gt;
            &lt;/div&gt;
          &lt;/div&gt;
        &lt;/form&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script&gt;
  import axios from &apos;axios&apos;;

  export default {
    data() {
      return {
        title: &apos;&apos;,
        year: &apos;&apos;,
        director: &apos;&apos;,
        studio: &apos;&apos;,
      };
    },
    methods: {
      submitForm() {
        axios
          .post(&apos;/api/movie/create&apos;, {
            Title: this.title,
            Year: this.year,
            Director: this.director,
            Studio: this.studio,
          })
          .then(() =&gt; {
            this.$router.push(&apos;/movies&apos;);
          })
          .catch((error) =&gt; {
            // eslint-disable-next-line no-console
            console.log(&apos;Error caught when posting the new movie to the api:&apos;);
            // eslint-disable-next-line no-console
            console.log(error);
          });
      },
    },
  };
&lt;/script&gt;</code></pre><figcaption>Movie/Create.vue</figcaption></figure><p>Now we have our views, but we need a way to navigate them in our SPA. </p><p>4. Pull in Vue Router from npm.</p><pre><code>$ npm i -S vue-router</code></pre><p>5. Add the routing to <code>~/src/js/router.js</code>. This is where we will set up all of our client-side routing.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">import VueRouter from &apos;vue-router&apos;;
import HelloWorld from &apos;../components/HelloWorld&apos;;
import MovieIndex from &apos;../components/Movie/Index&apos;;
import MovieCreate from &apos;../components/Movie/Create&apos;;

const routes = [
  {
    path: &apos;/&apos;,
    name: &apos;default&apos;,
    component: HelloWorld,
  },
  {
    path: &apos;/movies&apos;,
    name: &apos;movie_index&apos;,
    component: MovieIndex,
  },
  {
    path: &apos;/movies/create&apos;,
    name: &apos;movie_create&apos;,
    component: MovieCreate,
  },
];

export default new VueRouter({
  mode: &apos;history&apos;,
  routes,
  linkActiveClass: &apos;is-active&apos;, // apply bulma class when a router link is active
});
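
// With the named routes defined above, components can also navigate
// programmatically by name instead of hard-coding paths
// (a sketch; this post otherwise uses path-based navigation):
//   this.$router.push({ name: &apos;movie_create&apos; });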
</code></pre><figcaption>router.js</figcaption></figure><p>6. Import Vue Router and the custom routes to the entry point <code>~/src/js/app.js</code>. We can also remove the code that imports the HelloWorld component, because that is handled by the router now.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">import Vue from &apos;vue&apos;;

import VueRouter from &apos;vue-router&apos;;
import router from &apos;./router&apos;;
Vue.use(VueRouter);

// load ALL solid icons
// modify here to load individual icons as needed to reduce bundle size
import { fas } from &apos;@fortawesome/free-solid-svg-icons&apos;;
import { library } from &apos;@fortawesome/fontawesome-svg-core&apos;;
import { FontAwesomeIcon } from &apos;@fortawesome/vue-fontawesome&apos;;
library.add(fas);
Vue.component(&apos;icon&apos;, FontAwesomeIcon);

import Layout from &quot;../components/Layout&quot;;

require(&apos;../sass/app.scss&apos;);

new Vue({
  el: &apos;#app&apos;,
  router,
  components: {
    Layout
  }
});</code></pre><figcaption>app.js</figcaption></figure><p>7. Finally, we need to update <code>~/src/index.html</code> to render the component returned by our router. We just have to replace the hard-coded hello-world component with the built-in router-view component:</p><figure class="kg-card kg-code-card"><pre><code class="language-HTML">&lt;!DOCTYPE html&gt;
&lt;html&gt;
&lt;head&gt;
  &lt;meta charset=&quot;utf-8&quot; /&gt;
  &lt;title&gt;MVC 5 Vue.js Template&lt;/title&gt;
&lt;/head&gt;
&lt;body&gt;
  &lt;div id=&quot;app&quot;&gt;
    &lt;layout&gt;
      &lt;router-view&gt;&lt;/router-view&gt;
    &lt;/layout&gt;
  &lt;/div&gt;
&lt;/body&gt;
&lt;/html&gt;</code></pre><figcaption>index.html</figcaption></figure><p>Re-run the webpack build and start the visual studio debugger, then navigate to the <code>/movies</code> route. You should see the list of movies and you should be able to add more to the list.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-9.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="662" height="533" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-9.png 600w, https://kleypot.com/content/images/2021/09/image-9.png 662w"><figcaption>/movies route</figcaption></figure><h4 id="router-links-and-transitions">Router Links and Transitions</h4><p>Vue Router also provides a convenient way to generate links to your routes using the router-link component. We will use this component now to wire up our navigation links. We can also wrap our content in a <code>&lt;transition&gt;</code> to create a smoother feel when navigating between different routes.</p><figure class="kg-card kg-code-card"><pre><code class="language-Vue">&lt;template&gt;
  &lt;div class=&quot;container&quot;&gt;
    &lt;nav class=&quot;navbar&quot; role=&quot;navigation&quot; aria-label=&quot;main navigation&quot;&gt;
      &lt;div class=&quot;navbar-brand&quot;&gt;
        &lt;a class=&quot;navbar-item&quot; href=&quot;/&quot;&gt;
          &lt;h4 class=&quot;heading is-size-4&quot;&gt;Mvc5 + Vue.js&lt;/h4&gt;
        &lt;/a&gt;

        &lt;a role=&quot;button&quot;
           :class=&quot;`navbar-burger ${menuActive ? &apos;is-active&apos; : &apos;&apos;}`&quot;
           aria-label=&quot;menu&quot;
           aria-expanded=&quot;false&quot;
           data-target=&quot;navbarBasicExample&quot;
           @click=&quot;menuActive = !menuActive&quot;&gt;
          &lt;span aria-hidden=&quot;true&quot;&gt;&lt;/span&gt;
          &lt;span aria-hidden=&quot;true&quot;&gt;&lt;/span&gt;
          &lt;span aria-hidden=&quot;true&quot;&gt;&lt;/span&gt;
        &lt;/a&gt;
      &lt;/div&gt;

      &lt;div id=&quot;navbarBasicExample&quot;
           :class=&quot;`navbar-menu ${menuActive ? &apos;is-active&apos; : &apos;&apos;}`&quot;&gt;
        &lt;div class=&quot;navbar-start&quot;&gt;
          &lt;router-link to=&quot;/&quot; class=&quot;navbar-item&quot;&gt;
            Home
          &lt;/router-link&gt;

          &lt;router-link to=&quot;/movies&quot; class=&quot;navbar-item&quot;&gt;
            &lt;span class=&quot;icon has-text-primary&quot;&gt;
              &lt;icon icon=&quot;film&quot;&gt;&lt;/icon&gt;
            &lt;/span&gt;
            &lt;span&gt;Movies&lt;/span&gt;
          &lt;/router-link&gt;

          &lt;a class=&quot;navbar-item&quot;&gt;
            &lt;span class=&quot;icon has-text-info&quot;&gt;
              &lt;icon icon=&quot;info-circle&quot;&gt;&lt;/icon&gt;
            &lt;/span&gt;
            &lt;span&gt;About&lt;/span&gt;
          &lt;/a&gt;
        &lt;/div&gt;

        &lt;div class=&quot;navbar-end&quot;&gt;
          &lt;div class=&quot;navbar-item&quot;&gt;
            &lt;div class=&quot;buttons&quot;&gt;
              &lt;a class=&quot;button is-primary&quot;&gt;
                &lt;strong&gt;Sign up&lt;/strong&gt;
              &lt;/a&gt;
              &lt;a class=&quot;button is-light&quot;&gt;
                Log in
              &lt;/a&gt;
            &lt;/div&gt;
          &lt;/div&gt;
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/nav&gt;
    &lt;main&gt;
      &lt;transition name=&quot;fade&quot; mode=&quot;out-in&quot;&gt;
        &lt;slot&gt;&lt;/slot&gt;
      &lt;/transition&gt;
    &lt;/main&gt;
  &lt;/div&gt;
&lt;/template&gt;

&lt;script&gt;
  export default {
    data() {
      return {
        menuActive: false,
      };
    },
  };
&lt;/script&gt;

&lt;style lang=&quot;scss&quot; scoped&gt;
@import &quot;~bulma/sass/utilities/mixins&quot;;

@media screen and (min-width: $desktop) {
  .navbar {
    padding: 1rem 0;
  }
}

@media screen and (min-width: $widescreen) {
  .navbar {
    font-size: 1.125rem;
    padding: 2rem 0;
  }
}

.navbar-item &gt; .icon {
  margin-left: -0.25rem;
  margin-right: 0.25rem;
}

.fade-enter-active, .fade-leave-active {
  transition: opacity .25s
}

.fade-enter, .fade-leave-to {
  opacity: 0
}
&lt;/style&gt;
</code></pre><figcaption>Layout.vue</figcaption></figure><p>Run the build again and try clicking the Home and Movies links in the navigation. The views should fade out/in smoothly as you navigate back and forth. You can tweak the fade effect by updating the included sass rules.</p><h2 id="integrated-build-for-dev-and-prod">Integrated Build for Dev and Prod</h2><p>Now we have a fully working SPA. The MVC application exposes a CRUD API, and all of the View code and routing is handled in Vue. The last thing we have to do is integrate the webpack build into our MSBuild process. This will ensure that our webpack bundles are built whenever...</p><ul><li>A new developer checks out and builds the solution</li><li>You manually publish the web project to a folder</li><li>You deploy using your CI/CD process</li></ul><h4 id="dev-and-prod-builds">Dev and Prod Builds</h4><p>First, let&apos;s split our webpack build into two separate configurations. This will allow us to do conditional things like minification and source mapping depending on the environment.</p><p>1. Add a new webpack config for prod, <code>~/webpack.prod.js</code>. This config will contain overrides for our production build.</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">const baseConfig = require(&apos;./webpack.config.js&apos;);
const { merge } = require(&apos;webpack-merge&apos;);

module.exports = merge(baseConfig, {
	mode: &apos;production&apos;,
	devtool: false,
});
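
// webpack-merge deep-merges this object into the base config:
// scalar options set here (mode, devtool) override the development
// values, while the rules and plugins from webpack.config.js are kept.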
</code></pre><figcaption>webpack.prod.js</figcaption></figure><p>2. Add helper scripts to package.json so we have cleaner build commands.</p><figure class="kg-card kg-code-card"><pre><code class="language-JSON">{
  // package.json...
  // ...
  
  &quot;scripts&quot;: {
    &quot;dev&quot;: &quot;webpack --config=webpack.config.js&quot;,
    &quot;prod&quot;: &quot;webpack --config=webpack.prod.js&quot;
  }
}</code></pre><figcaption>Excerpt from package.json</figcaption></figure><p>3. Now, test out the different builds using the new custom commands:</p><pre><code>$ npm run dev
$ npm run prod</code></pre><p>Look at the output in the dist folder to see the differences between the two builds. The prod build should exclude source-maps, and the generated bundles will be much smaller in size. See the <a href="https://webpack.js.org/configuration/mode/#mode-production">webpack documentation</a> for more on how webpack optimizes production bundles.</p><h4 id="msbuild-integration">MSBuild Integration</h4><p>Now we can set up our project to automatically run the webpack build corresponding to the selected Build Configuration.</p><p>1. In Visual Studio, right-click the Project and unload it.</p><p>2. Double-click the project to edit the .csproj file.</p><p>3. Add the following code at the end of the file, just before the closing <code>&lt;/Project&gt;</code> tag:</p><figure class="kg-card kg-code-card"><pre><code class="language-csproj">&lt;PropertyGroup&gt;
  &lt;CompileDependsOn&gt;
    $(CompileDependsOn);
    WebpackBuild;
  &lt;/CompileDependsOn&gt;
  &lt;CopyAllFilesToSingleFolderForPackageDependsOn&gt;
    $(CopyAllFilesToSingleFolderForPackageDependsOn);
    CollectWebpackOutput;
  &lt;/CopyAllFilesToSingleFolderForPackageDependsOn&gt;
  &lt;CopyAllFilesToSingleFolderForMsdeployDependsOn&gt;
    $(CopyAllFilesToSingleFolderForMsdeployDependsOn);
    CollectWebpackOutput;
  &lt;/CopyAllFilesToSingleFolderForMsdeployDependsOn&gt;
&lt;/PropertyGroup&gt;
&lt;Target Name=&quot;WebpackBuild&quot;&gt;
  &lt;Message Condition=&quot;&apos;$(Configuration)&apos; != &apos;UnitTest&apos;&quot; Text=&quot;Running npm install&quot; Importance=&quot;high&quot; /&gt;
  &lt;Exec Condition=&quot;&apos;$(Configuration)&apos; != &apos;UnitTest&apos;&quot; Command=&quot;npm install&quot; WorkingDirectory=&quot;$(ProjectDir)&quot; /&gt;
  &lt;Message Condition=&quot;&apos;$(Configuration)&apos; == &apos;Debug&apos;&quot; Text=&quot;Running webpack build (development)&quot; Importance=&quot;high&quot; /&gt;
  &lt;Exec Condition=&quot;&apos;$(Configuration)&apos; == &apos;Debug&apos;&quot; Command=&quot;npm run dev&quot; WorkingDirectory=&quot;$(ProjectDir)&quot; /&gt;
  &lt;Message Condition=&quot;&apos;$(Configuration)&apos; == &apos;Release&apos;&quot; Text=&quot;Running webpack build (production)&quot; Importance=&quot;high&quot; /&gt;
  &lt;Exec Condition=&quot;&apos;$(Configuration)&apos; == &apos;Release&apos;&quot; Command=&quot;npm run prod&quot; WorkingDirectory=&quot;$(ProjectDir)&quot; /&gt;
&lt;/Target&gt;
&lt;Target Name=&quot;CollectWebpackOutput&quot; BeforeTargets=&quot;CopyAllFilesToSingleFolderForPackage;CopyAllFilesToSingleFolderForMsdeploy&quot;&gt;
  &lt;Message Text=&quot;Adding webpack-generated files&quot; Importance=&quot;high&quot; /&gt;
  &lt;ItemGroup&gt;
    &lt;CustomFilesToInclude Include=&quot;.\dist\**\*.*&quot; /&gt;
    &lt;FilesForPackagingFromProject Include=&quot;%(CustomFilesToInclude.Identity)&quot;&gt;
      &lt;DestinationRelativePath&gt;.\dist\%(RecursiveDir)%(Filename)%(Extension)&lt;/DestinationRelativePath&gt;
    &lt;/FilesForPackagingFromProject&gt;
  &lt;/ItemGroup&gt;
&lt;/Target&gt;</code></pre><figcaption>Excerpt from .csproj file</figcaption></figure><p>Overview of what this does:</p><ul><li>Creates a new build target called WebpackBuild. Building the project depends on the success of WebpackBuild.</li><li>Defines the target WebpackBuild which runs <code>npm install</code> and <code>npm run dev</code> or <code>npm run prod</code> depending on your build configuration.</li><li>Includes all files in the <code>~/dist</code> output folder in the published output.</li></ul><p>When you Build the project in Visual Studio now, you should see the webpack build in the Build Output:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-11.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1197" height="970" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-11.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-11.png 1000w, https://kleypot.com/content/images/2021/09/image-11.png 1197w" sizes="(min-width: 720px) 720px"><figcaption>MSBuild output</figcaption></figure><p>Note that if the webpack build fails, then the entire build will fail and Visual Studio will show the errors in the build output. This is really important for other developers who may not realize that the project depends on this extra build process. It will force them to install Node.js and run webpack in order to build and debug the project.</p><p>The code sample above also has a custom condition for a build configuration called UnitTest. This gives you a way to skip the entire npm/build process to save time when you are running tests.</p><h2 id="setting-up-your-environment-for-rapid-development">Setting up your environment for rapid development</h2><p>The SPA is completely set up now and ready for development. 
In the previous sections, I demonstrated how you would build out this application by setting up new models, controller actions, views (Vue components), and client-side routing. But before starting development, you should also consider setting up some tools and plugins to make this process even easier.</p><p>In the rest of this post I will summarize how I set up my own dev environment to quickly and efficiently turn out clean, working Vue code.</p><h4 id="visual-studio-code">Visual Studio Code</h4><p>There is bad news and good news if you are looking to write Vue code for .NET web applications. The bad news is, as of the time I&apos;m writing this post, Visual Studio has not really embraced Vue.js. That may change under .NET Core, but you&apos;re mostly on your own when it comes to Vue.js tooling, especially with MVC 5 projects.</p><p>The good news is that Visual Studio Code has amazing support for Vue.js. I actually recommend using VS Code in parallel with Visual Studio. You can edit and debug your ASP.NET code in VS, and you can use VS Code for all of the front-end work. If you haven&apos;t tried VS Code yet, this guide will be a great introduction for you &#x2013; go ahead and install the latest version and continue on.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://code.visualstudio.com/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Visual Studio Code - Code Editing. Redefined</div><div class="kg-bookmark-description">Visual Studio Code is a code editor redefined and optimized for building and debugging modern web and cloud applications. 
Visual Studio Code is free and available on your favorite platform - Linux, macOS, and Windows.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://code.visualstudio.com/favicon.ico" alt="Vue.js Single Page Application with ASP.NET MVC 5"><span class="kg-bookmark-author">Microsoft</span><span class="kg-bookmark-publisher">Microsoft</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://code.visualstudio.com/opengraphimg/opengraph-home.png" alt="Vue.js Single Page Application with ASP.NET MVC 5"></div></a></figure><h4 id="git-bash-integration">Git Bash Integration</h4><p>My first recommendation after you install VS Code is to set up Git Bash as the default terminal. As shown throughout this post, you will be using the terminal a lot, and in my opinion Git Bash is the best option on vanilla Windows unless you are a PowerShell expert.</p><ol><li>Install <a href="https://gitforwindows.org/">Git for Windows</a> which includes Git Bash. You most likely already have this, but here is the link in case you do not.</li><li>Open VS Code, and hit Ctrl+Shift+P then run the command <code>Terminal: Select Default Profile</code>.</li><li>Choose Git Bash from the list of options</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-12.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1254" height="511" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-12.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-12.png 1000w, https://kleypot.com/content/images/2021/09/image-12.png 1254w" sizes="(min-width: 720px) 720px"><figcaption>Git Bash as default terminal</figcaption></figure><p>Now you can open the terminal (Ctrl+`) and you should get a new Git Bash terminal. 
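</p><p>As a quick sanity check &#x2013; nothing project-specific here, just confirming that the tools used throughout this guide are on your PATH &#x2013; you can ask each one for its version:</p><pre><code class="language-BASH">$ git --version
$ node --version
$ npm --version</code></pre><p>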
Try out some basic commands like <code>ls -la</code>, or try running your build with <code>npm run dev</code>.</p><h4 id="vetur-extension">Vetur Extension</h4><p>The first extension you will want to grab is <a href="https://marketplace.visualstudio.com/items?itemName=octref.vetur">Vetur</a>, the official extension for Vue.js tooling. Vetur will allow you to do things like lint and auto-format your Vue code, and it adds Intellisense and snippets to save you from having to constantly look at documentation.</p><ol><li>Download and enable the Extension in VS Code</li><li>Add a jsconfig file to the project:</li></ol><figure class="kg-card kg-code-card"><pre><code class="language-JSON">{
    &quot;include&quot;: [
      &quot;./src/**/*&quot;
    ]
}</code></pre><figcaption>jsconfig.json</figcaption></figure><p>3. Restart VS Code and open a Vue file. You should have syntax highlighting and intellisense right away. See examples below.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-13.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1460" height="552" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-13.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-13.png 1000w, https://kleypot.com/content/images/2021/09/image-13.png 1460w" sizes="(min-width: 720px) 720px"><figcaption>Syntax highlighting before/after</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/image-14.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1181" height="612" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-14.png 600w, https://kleypot.com/content/images/size/w1000/2021/09/image-14.png 1000w, https://kleypot.com/content/images/2021/09/image-14.png 1181w" sizes="(min-width: 720px) 720px"><figcaption>Vue API Intellisense</figcaption></figure><h4 id="vue-vscode-snippets">Vue VSCode Snippets</h4><p>Next, you should grab Vue VSCode Snippets, which will save you a lot of time by scaffolding common structures with just a few keystrokes. 
Check out the Extension details for some snippets you can try.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/SnippetDemo.gif" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1280" height="720" srcset="https://kleypot.com/content/images/size/w600/2021/09/SnippetDemo.gif 600w, https://kleypot.com/content/images/size/w1000/2021/09/SnippetDemo.gif 1000w, https://kleypot.com/content/images/2021/09/SnippetDemo.gif 1280w" sizes="(min-width: 720px) 720px"><figcaption>Vue VSCode Snippets</figcaption></figure><h4 id="other-helpful-extensions">Other Helpful Extensions</h4><p>Some other Extensions that I can&apos;t live without include:</p><ul><li><a href="https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-close-tag">Auto Close Tag</a> &#x2013; automatically add HTML close tags just like Visual Studio</li><li><a href="https://marketplace.visualstudio.com/items?itemName=formulahendry.auto-rename-tag">Auto Rename Tag</a> &#x2013; automatically rename HTML close tags just like Visual Studio</li><li><a href="https://marketplace.visualstudio.com/items?itemName=CoenraadS.bracket-pair-colorizer">Bracket Pair Colorizer</a> &#x2013; colorize matching open/close brackets</li><li><a href="https://marketplace.visualstudio.com/items?itemName=eamodio.gitlens">GitLens</a> &#x2013; visualize code authorship inline in the editor</li><li><a href="https://marketplace.visualstudio.com/items?itemName=christian-kohler.npm-intellisense">npm Intellisense</a> &#x2013; autocomplete import statements</li><li><a href="https://chrome.google.com/webstore/detail/vuejs-devtools/nhdogjmejiglipccpnnnanhbledajbpd?hl=en">Vue.js devtools</a> - first-party chrome extension for debugging Vue components</li></ul><h4 id="eslint-for-vue">ESLint for Vue</h4><p>If you work on a team or plan on publishing your code, you should really consider setting up a linter for enforcing code styles 
and checking syntax. </p><p>Here, I&apos;ll show you how you can integrate ESLint into your build process so that the build fails if the code does not pass lint. I will also demonstrate how ESLint integrates into VS Code by highlighting errors, and how you can auto-correct common syntax issues. Using ESLint in this way will dramatically improve your code quality without getting in your way too much. Over time it will actually make you a better Vue coder.</p><p>1. Install ESLint dependencies</p><pre><code class="language-BASH">$ npm i -D eslint eslint-webpack-plugin</code></pre><p>2. Generate your ESLint config</p><pre><code class="language-BASH">$ ./node_modules/.bin/eslint --init</code></pre><p>For the prompts that follow, choose:</p><ul><li>To check syntax, find problems, and enforce code style</li><li>JavaScript modules</li><li>Vue.js</li><li>Typescript: no</li><li>Browser</li><li>Use a popular style guide</li><li>Airbnb (or whichever you prefer)</li><li>JSON</li></ul><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2021/09/image-17.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="875" height="301" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-17.png 600w, https://kleypot.com/content/images/2021/09/image-17.png 875w" sizes="(min-width: 720px) 720px"></figure><p>The final prompt will ask you to install extra dependencies. This may or may not work due to permissions issues on Windows file systems. If the install errors out, just install the dependencies manually.</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">$ npm i -D eslint-plugin-vue@latest eslint-config-airbnb-base@latest eslint-plugin-import@^2.22.1</code></pre><figcaption>Manually install dependencies</figcaption></figure><p>3. 
Run the linter</p><pre><code class="language-BASH">$ npx eslint ./src/**/*.*</code></pre><p>You should get a bunch of errors referencing lines of code in the src folder. Do not worry about fixing these yet, we will do that later with the help of VS Code and Vetur.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/10/image.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="768" height="383" srcset="https://kleypot.com/content/images/size/w600/2021/10/image.png 600w, https://kleypot.com/content/images/2021/10/image.png 768w" sizes="(min-width: 720px) 720px"><figcaption>Build output with lint errors</figcaption></figure><p>4. Add a helper script to package.json so that you can run the linter with <code>npm run lint</code></p><figure class="kg-card kg-code-card"><pre><code class="language-JSON">{
  // package.json
  // ...
  &quot;scripts&quot;: {
    &quot;lint&quot;: &quot;eslint ./src/**/*.*&quot;,
    // ...
  }
}</code></pre><figcaption>Excerpt from package.json</figcaption></figure><p>5. Update .eslintrc.json to suit your tastes. I recommend starting with the following:</p><figure class="kg-card kg-code-card"><pre><code class="language-JSON">{
  &quot;env&quot;: {
    &quot;browser&quot;: true,
    &quot;es2021&quot;: true
  },
  &quot;extends&quot;: [
    &quot;plugin:vue/base&quot;,
    &quot;airbnb-base&quot;
  ],
  &quot;parserOptions&quot;: {
    &quot;ecmaVersion&quot;: 12,
    &quot;sourceType&quot;: &quot;module&quot;
  },
  &quot;plugins&quot;: [
    &quot;vue&quot;
  ],
  &quot;rules&quot;: {
    &quot;linebreak-style&quot;: [
      &quot;warn&quot;,
      &quot;windows&quot;
    ]
  }
}</code></pre><figcaption>.eslintrc.json</figcaption></figure><p>I have overridden one rule to allow Windows linebreak styles. Otherwise you will have to manually change to LF every time you edit a file in VS or VS Code. You can add/modify different rules here as needed.</p><p>6. Install the <a href="https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint">ESLint</a> Extension in VSCode.</p><p>7. In VS Code, open your Vue and JS files to start fixing issues. When you open a file, you should get a list of issues in the Problems window, with in-line highlighting provided by Vetur. Vetur also allows you to quickly resolve these issues.</p><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2021/09/image-19.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="777" height="1020" srcset="https://kleypot.com/content/images/size/w600/2021/09/image-19.png 600w, https://kleypot.com/content/images/2021/09/image-19.png 777w" sizes="(min-width: 720px) 720px"></figure><p>Here are just some of the ways you can resolve your lint errors:</p><ul><li>Hover over the red squiggles, choose Quick Fix for a menu of options.</li><li>Move your cursor to the issue and hit Alt+Shift+. to auto-fix the issue.</li><li>Disable the rule globally in .eslintrc.json. </li><li>Use in-line <code>eslint-disable</code> comments to disable a rule for the whole file, or for a specific block or line of code.</li><li>My personal favorite: Hit Ctrl+Shift+P and run the command <code>ESLint: Fix all auto-fixable Problems</code></li></ul><p>Using the code accumulated so far in this guide, I was actually able to autofix almost every problem. If you are following along, go ahead and fix all of your issues and run <code>npm run lint</code> until you get no errors back.</p><p>8. Update your webpack config to run and depend on ESLint.</p><figure class="kg-card kg-code-card"><pre><code>const ESLintWebpackPlugin = require(&apos;eslint-webpack-plugin&apos;);

module.exports = {
  // ...
  plugins: [
    // ...
    new ESLintWebpackPlugin({
      failOnError: false,
    }),
  ],
};</code></pre><figcaption>Excerpt from webpack.config.js</figcaption></figure><p>Now when you run your build, ESLint will automatically check all of your source code and it will emit errors into the build output. You can enable <code>failOnError</code> to block the build if you have lint issues &#x2013; it is nice to do this for prod builds but can be annoying for dev. For prod builds, <code>failOnError</code> is really important because it will prevent un-linted builds from going out, regardless of who is building the code. ESLint will be run as part of the MSBuild process, so even your CI/CD will error out if the linter fails.</p><h4 id="file-watching">File Watching</h4><p>There is one more thing we can do to speed up development. A common annoyance with bundlers like webpack is that you have to run the build every time you make changes because your browser cannot directly execute the source code. Large bundles can easily take 10-20 seconds to run, and only then can you refresh the page and re-test. This delay is a significant cognitive burden and can really interrupt your flow as you are making changes.</p><p>The quickest fix for this problem is to use the <code>watch</code> flag when running webpack. </p><pre><code>{
  // package.json
  // ...
  &quot;scripts&quot;: {
    // ...
    &quot;watch&quot;: &quot;webpack --config=webpack.config.js --watch&quot;
  }
}</code></pre><pre><code>$ npm run watch</code></pre><p>When you run using <code>watch</code>, the build runs once and then the process stays open instead of returning. If you touch a file while the process is open, webpack will rapidly re-bundle your code. It is practically instantaneous, so really all you need to do is save the file and refresh your browser.</p><h4 id="webpack-dev-server-and-hot-module-reloading">Webpack Dev Server and Hot Module Reloading</h4><p>But what if you didn&apos;t even need to refresh your browser? We can take the watcher a step further by using the Webpack DevServer to host our website.</p><p><a href="https://webpack.js.org/configuration/dev-server/">Webpack DevServer</a> is a Node-based web server which can be used to automatically open your browser and reload your modules when you make changes to the code. The dev server runs the build and the watcher, and serves up the bundled output from <em>memory</em>. A common mistake is thinking that the dev server is using the files in your <code>/dist</code> folder, but it actually never touches those files. In fact, you will even see a message in the output like, <em>Content not from webpack is served from &apos;C:\Users\username\source\repos\mvc5-vuejs-template\public&apos; directory.</em></p><p>1. Install Webpack DevServer</p><pre><code>$ npm i -D webpack-dev-server</code></pre><p>2. Add devServer options to the webpack config</p><figure class="kg-card kg-code-card"><pre><code class="language-JS">module.exports = {
  // ...
  devServer: {
    historyApiFallback: {
      index: &apos;/dist/index.html&apos;,
    },
    proxy: [
      {
        context: &apos;/api/**&apos;,
        target: &apos;http://localhost:64373&apos;, // use port from IISExpress
      },
    ],
    open: true
  },
};</code></pre><figcaption>Excerpt from webpack.config.js</figcaption></figure><p>The configuration above is where the special sauce is. We are telling the dev server to route all requests to /dist/index.html, which is the same exact routing we set up in the MVC project. Next, we are proxying our API routes to use the IIS Express URL. This way, we can get data from our MVC API by running the debugger in the background. </p><p><em>Note that you will have to update the IISExpress port to match the one used by your project.</em></p><p>3. Add a new script to package.json to run the dev server.</p><pre><code>{
  // package.json
  // ...
  &quot;scripts&quot;: {
    &quot;hot&quot;: &quot;webpack-dev-server --config=webpack.config.js&quot;
  }
}</code></pre><p>4. Start the Visual Studio debugger to make sure IISExpress is running</p><p>5. Start the dev server by running <code>npm run hot</code></p><p>If everything is set up correctly, your browser should open automatically and you should be able to interact with your SPA on port 8080. All of the API requests should be proxied to IISExpress, which you can confirm by setting breakpoints in the controller. </p><p>Most importantly, when you edit and save a Vue file, the page should reload with your changes almost instantaneously. And as long as you are changing code inside of a <code>&lt;template&gt;</code> tag, the <a href="https://vue-loader.vuejs.org/guide/hot-reload.html">vue-loader</a> is utilized to <strong>preserve the current Vue state</strong> while swapping in the new component!</p><p>This is so, so valuable, especially if you are working on a part of the app which requires a deep level of interaction. For instance, if you are working on a report that loads a ton of data, now you no longer have to refresh and reload that data on every code change. This will greatly increase your efficiency and help you work so much faster.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/09/C992DB77-945B-4A73-8687-78522CDCEC79.GIF" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="1068" height="1024" srcset="https://kleypot.com/content/images/size/w600/2021/09/C992DB77-945B-4A73-8687-78522CDCEC79.GIF 600w, https://kleypot.com/content/images/size/w1000/2021/09/C992DB77-945B-4A73-8687-78522CDCEC79.GIF 1000w, https://kleypot.com/content/images/2021/09/C992DB77-945B-4A73-8687-78522CDCEC79.GIF 1068w"><figcaption>Hot reloading Vue components</figcaption></figure><h4 id="browser-sync">Browser-sync</h4><p>One more optional step is to add Browser-sync. 
If you plan to test in multiple browsers or multiple viewport sizes, you can use browser-sync to test everything at once. All you have to do is set up Browser-sync to proxy from Webpack DevServer, then you can open multiple browser windows and they will stay in sync.</p><p>1. Pull in Browser-sync dependencies</p><pre><code class="language-BASH">$ npm i -D browser-sync browser-sync-webpack-plugin</code></pre><p>2. Add a new webpack config file called <code>webpack.browsersync.js</code></p><figure class="kg-card kg-code-card"><pre><code class="language-JS">const { merge } = require(&apos;webpack-merge&apos;);
const baseConfig = require(&apos;./webpack.config&apos;);
const BrowserSyncPlugin = require(&apos;browser-sync-webpack-plugin&apos;);

module.exports = merge(baseConfig, {
  plugins: [
    new BrowserSyncPlugin({
     host: &apos;localhost&apos;,
     port: &apos;3000&apos;,
     // proxy the webpack dev server
     proxy: &apos;http://localhost:8080&apos;,
    },
    {
     reload: false
    }),
  ],
});
</code></pre><figcaption>~/webpack.browsersync.js</figcaption></figure><p>3. Add a new script to <code>package.json</code> to run Browser-sync</p><figure class="kg-card kg-code-card"><pre><code class="language-JSON">{
  &quot;scripts&quot;: {
    &quot;sync&quot;: &quot;webpack-dev-server --config=webpack.browsersync.js&quot;
  }
}</code></pre><figcaption>Snippet from package.json</figcaption></figure><p>4. Run the new script</p><pre><code class="language-BASH">$ npm run sync</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2022/07/image.png" class="kg-image" alt="Vue.js Single Page Application with ASP.NET MVC 5" loading="lazy" width="960" height="405" srcset="https://kleypot.com/content/images/size/w600/2022/07/image.png 600w, https://kleypot.com/content/images/2022/07/image.png 960w" sizes="(min-width: 720px) 720px"><figcaption>Browser-sync output</figcaption></figure><p>Now you can open the web service running at port 3000. Try opening multiple browsers and notice that whatever actions you make in one window should be mirrored into the others. This is great for testing different browsers, or for testing different viewport sizes at the same time.</p><h2 id="conclusion">Conclusion</h2><p>Now our MVC SPA project is completely set up for rapid development in Vue.js. By integrating the webpack build into MSBuild, we have ensured that any developer working on this project will not miss the critical step of building the client-side assets. Auto linting enforces code standards and helps avoid build errors, and hot module reloading dramatically speeds up the development loop.</p><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[Git Merge Deep Dive]]></title><description><![CDATA[I started my research by trying to reverse engineer what Visual Studio does when you merge and resolve conflicts. That led me to explore some other tools, like Meld, and to experiment with some of the optional merge strategies within git. 
Here is what I found.]]></description><link>https://kleypot.com/git-merge-deep-dive/</link><guid isPermaLink="false">63052aeb3ecc781c55057cac</guid><category><![CDATA[software-development]]></category><category><![CDATA[git]]></category><category><![CDATA[dev-tools]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Wed, 22 Sep 2021 19:29:43 GMT</pubDate><content:encoded><![CDATA[<p>In my <a href="https://kleypot.com/meld-merge-on-windows-and-visual-studio/">last post</a> I outlined how my team standardized our process for diffing and merging code by configuring Meld for use in Visual Studio and other IDEs. During this process, I did a lot of research on git-merge and came away with several key insights that I want to document in this post.</p><p>I have a lot of experience with git, and because I follow a pretty strict gitflow policy, I rarely experience issues with merging. When other developers &#x2013; especially the juniors &#x2013; approached me with questions about why their merge wasn&apos;t making sense, I never really had good answers. I gave them my na&#xEF;ve assumptions about how merging works, but I couldn&apos;t explain specifically why Visual Studio picked one side of the diff or the other.</p><p>I started my research by trying to reverse engineer what Visual Studio does when you merge and resolve conflicts. That led me to explore some other tools, like Meld, and to experiment with some of the optional merge strategies within git. Here is what I found.</p><h2 id="setup">Setup</h2><p>For the following examples I will be using <a href="https://github.com/akmolina28/AspNetDocs">my own fork of the official ASP.NET MVC demo project</a>. I started by cloning and checking out a new branch called <code>develop</code>.</p><pre><code class="language-BASH">$ git clone https://github.com/akmolina28/AspNetDocs.git

$ git checkout -b develop</code></pre><p>First, I committed a simple change in the AccountController refactoring the name of a variable. </p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image-1.png" class="kg-image" alt loading="lazy" width="960" height="907" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-1.png 600w, https://kleypot.com/content/images/2021/08/image-1.png 960w"><figcaption><a href="https://github.com/akmolina28/AspNetDocs/commit/b8e0bfb74d177a81f0fc85f54b62b30692a5f503">https://github.com/akmolina28/AspNetDocs/commit/b8e0bfb74d177a81f0fc85f54b62b30692a5f503</a></figcaption></figure><p>Next, I checked out the <code>main</code> branch and made some changes in the same file, which will conflict with my changes in the <code>develop</code> branch.</p><pre><code>$ git checkout main</code></pre><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image-2.png" class="kg-image" alt loading="lazy" width="956" height="772" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-2.png 600w, https://kleypot.com/content/images/2021/08/image-2.png 956w"><figcaption>https://github.com/akmolina28/AspNetDocs/commit/0254b15ee14fa425e23823ab59ebe8006e867f5f</figcaption></figure><p>Now, when I initiate a merge from <code>develop</code> into <code>main</code>, I would expect conflicts because some of the same code was changed in both branches. Below is the result.</p><pre><code class="language-BASH">$ git merge develop
Auto-merging aspnet/mvc/overview/getting-started/introduction/sample/MvcMovie/MvcMovie/Controllers/AccountController.cs
CONFLICT (content): Merge conflict in aspnet/mvc/overview/getting-started/introduction/sample/MvcMovie/MvcMovie/Controllers/AccountController.cs
Automatic merge failed; fix conflicts and then commit the result.
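
# Aside: at this point, git status lists the conflicted file as both modified (UU):
$ git status --short
UU aspnet/mvc/overview/getting-started/introduction/sample/MvcMovie/MvcMovie/Controllers/AccountController.cs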
</code></pre><h2 id="resolving-conflicts-in-visual-studio">Resolving Conflicts in Visual Studio</h2><p>Visual Studio ships with Microsoft&apos;s built-in tool for diffing and merging, vsdiffmerge. When Visual Studio is installed and configured for git, it will automatically configure vsdiffmerge as the global mergetool for resolving merge conflicts. Here is what happens when you try to resolve the file that I set up in the previous section.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image-3.png" class="kg-image" alt loading="lazy" width="1859" height="1281" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-3.png 600w, https://kleypot.com/content/images/size/w1000/2021/08/image-3.png 1000w, https://kleypot.com/content/images/size/w1600/2021/08/image-3.png 1600w, https://kleypot.com/content/images/2021/08/image-3.png 1859w" sizes="(min-width: 1200px) 1200px"><figcaption>Visual Studio conflict resolution (vsdiffmerge)</figcaption></figure><p>If you have experience with git in Visual Studio, this result should feel very familiar. Visual Studio does a good job of figuring out which changes to take from either side. Here is what the conflict looks like, as expected:</p><figure class="kg-card kg-image-card kg-width-wide"><img src="https://kleypot.com/content/images/2021/08/image-5.png" class="kg-image" alt loading="lazy" width="1873" height="1278" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-5.png 600w, https://kleypot.com/content/images/size/w1000/2021/08/image-5.png 1000w, https://kleypot.com/content/images/size/w1600/2021/08/image-5.png 1600w, https://kleypot.com/content/images/2021/08/image-5.png 1873w" sizes="(min-width: 1200px) 1200px"></figure><p>Let&apos;s take stock of what we have at this point. 
The left side has our incoming version from <code>develop</code>, the right side has our local version in <code>main</code>, and at the bottom is the result of our merge, where we make our final changes. 5 changes were auto-merged, which Visual Studio shows with the pre-checked, green selections. The two conflicting changes are highlighted in red.</p><p>At this point, I immediately have several questions:</p><ol><li>How does Visual Studio know which side to take when it AutoMerges? What exactly does &quot;AutoMerge&quot; mean?</li><li>Does git perform the AutoMerge? Or is Visual Studio doing it?</li><li>How is the Result file in the bottom pane generated? Is it created by git or by Visual Studio?</li></ol><p>In the next sections I will deconstruct the merge process to try to answer these questions.</p><h2 id="back-to-basics">Back to Basics</h2><p>For now, let&apos;s step away from Visual Studio and understand how to resolve this merge at the most basic level. I will abort the merge-in-progress and start over, just in case Visual Studio changed something.</p><pre><code class="language-BASH">$ git merge --abort
$ git merge develop</code></pre><p>We have generated the same conflict once again. Let&apos;s see what git did with AccountController.cs by looking at the file in a text editor:</p><figure class="kg-card kg-code-card"><pre><code class="language-C#">// ...

namespace MvcMovie.Controllers
{
    [Authorize]
    public class AccountController : Controller
    {
        private ApplicationSignInManager _signInManager;
        private ApplicationUserManager _applicationUserManager;

        public AccountController()
        {
        }

        public AccountController(ApplicationUserManager userManager, ApplicationSignInManager signInManager )
        {
            UserManager = userManager;
            SignInManager = signInManager;
        }

        public ApplicationSignInManager SignInManager
        {
            get
            {
                if (_signInManager == null)
                {
                    return HttpContext.GetOwinContext().Get&lt;ApplicationSignInManager&gt;();
                }
                else
                {
                    return _signInManager;
                }
            }
            private set 
            { 
                _signInManager = value; 
            }
        }

        public ApplicationUserManager UserManager
        {
            get
            {
&lt;&lt;&lt;&lt;&lt;&lt;&lt; HEAD
                if (_userManager == null)
                {
                    HttpContext.GetOwinContext().GetUserManager&lt;ApplicationUserManager&gt;();
                }
                else
                {
                    return _userManager;
                }
=======
                return _applicationUserManager ?? HttpContext.GetOwinContext().GetUserManager&lt;ApplicationUserManager&gt;();
&gt;&gt;&gt;&gt;&gt;&gt;&gt; develop
            }
            private set
            {
                _applicationUserManager = value;
            }
        }
        
// ...</code></pre><figcaption>Excerpt from AccountController.cs (MERGED)</figcaption></figure><p>It appears that git auto-merged the non-conflicting differences in the same way that Visual Studio did. So, git is definitely doing its own auto-merging here. All of the changes from both branches are reflected in the merged result, except for the conflict where we see the merge markers.</p><p>Between <code>&lt;&lt;&lt;&lt;&lt;&lt;&lt;</code> and <code>=======</code> we have our code from <code>main</code>. And between <code>=======</code> and <code>&gt;&gt;&gt;&gt;&gt;&gt;&gt;</code> we have our changes from <code>develop</code>. My job here is simple &#x2013; edit the code by hand and resolve both changes. I am familiar with the code in both branches, so this conflict is trivial for me.</p><p>If however I was <em>not</em> familiar with both of these changes, I might be confused and unsure how to resolve this conflict. This is very common in large teams, or long-running projects where several weeks pass before code gets merged. According to the git docs, you can add more context here by using the diff3 conflict style. Let&apos;s see what that does.</p><figure class="kg-card kg-code-card"><pre><code>$ git merge --abort
$ git config --global merge.conflictStyle diff3
$ git merge develop</code></pre><figcaption>Redo the merge using the diff3 conflict style</figcaption></figure><p>Here is what the conflict looks like now:</p><figure class="kg-card kg-code-card"><pre><code class="language-c#">        public ApplicationUserManager UserManager
        {
            get
            {
&lt;&lt;&lt;&lt;&lt;&lt;&lt; HEAD
                if (_userManager == null)
                {
                    return HttpContext.GetOwinContext().GetUserManager&lt;ApplicationUserManager&gt;();
                }
                else
                {
                    return _userManager;
                }
||||||| af5e62ba8
                return _userManager ?? HttpContext.GetOwinContext().GetUserManager&lt;ApplicationUserManager&gt;();
=======
                return _applicationUserManager ?? HttpContext.GetOwinContext().GetUserManager&lt;ApplicationUserManager&gt;();
&gt;&gt;&gt;&gt;&gt;&gt;&gt; develop
            }
            private set
            {
                _applicationUserManager = value;
            }
        }</code></pre><figcaption>Excerpt from AccountController.cs (MERGED)</figcaption></figure><p>Now there is a <em>third</em> version of the code between the <code>main</code> and <code>develop</code> versions. This version is called the &quot;common ancestor&quot;, also referred to as the BASE in the context of a merge. The common ancestor is the most recent version before <code>main</code> and <code>develop</code> diverged. You can think of git branches as a tree structure, and git is tracing each branch back, recursively, until a common version is found.</p><p>In fact, you can actually visualize the tree structure in the terminal using <code>git log</code>:</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">$ git log --graph --oneline main develop
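# the common ancestor hash can also be printed directly:
$ git merge-base main develop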
</code></pre><figcaption>Git log graph</figcaption></figure><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image-7.png" class="kg-image" alt loading="lazy" width="962" height="171" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-7.png 600w, https://kleypot.com/content/images/2021/08/image-7.png 962w"><figcaption>Git Bash output</figcaption></figure><p>In the output above, the stars represent commits and the lines represent their ancestry. We can see my commit to <code>main</code>, and the commit to <code>develop</code> which branches out from <code>main</code>. We can trace the ancestry of both of my commits to the common ancestor with the hash <code>af5e62ba8</code>. Indeed, that&apos;s the same commit hash we see in the middle part of the conflict markers!</p><p>Having the base code from the common ancestor makes it much easier to understand the actual changes in both branches because it gives us a common starting point for comparison. And if we go back and look at the &quot;Result&quot; file from Visual Studio, it appears to be using the common ancestor as the placeholder for the conflict. Now we are starting to answer some of our questions about how Visual Studio does what it does...</p><p>Let&apos;s keep digging.</p><h2 id="git-mergetool">Git Mergetool</h2><p>My example conflict above is fairly easy to resolve by hand, especially once we have the base code as a reference. But not all merges will be that simple. Sometimes conflicts can be tens or hundreds of lines long. When the conflicts start getting too long, you need to put them side-by-side so you can see all the changes at once. This is where we get into merge tools.</p><p>The default mergetool that ships with git is vimdiff. 
Here is what vimdiff looks like in Git Bash on Windows:</p><pre><code>$ git mergetool -t vimdiff</code></pre><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image-6.png" class="kg-image" alt loading="lazy" width="1561" height="1021" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-6.png 600w, https://kleypot.com/content/images/size/w1000/2021/08/image-6.png 1000w, https://kleypot.com/content/images/2021/08/image-6.png 1561w" sizes="(min-width: 1200px) 1200px"><figcaption>vimdiff merge tool</figcaption></figure><p>Here we see four panes with different versions of the AccountController.cs:</p><ol><li>LOCAL - &quot;our&quot; code, as it currently exists in the <code>main</code> branch</li><li>BASE - the common ancestor between <code>main</code> and <code>develop</code></li><li>REMOTE - &quot;their&quot; code, from the incoming <code>develop</code> branch</li><li>MERGED - the merged result, from git, containing the conflict markers</li></ol><p>Git has automatically pulled each of these versions of AccountController.cs out of the index and passed them into the vimdiff program, which displays those versions side-by-side. I&apos;m not as comfortable with vim as I wish I was, so I&apos;m going to switch to a more user-friendly tool called Meld. <em>see post below for more on setting up Meld on Windows</em></p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://kleypot.com/meld-merge-on-windows-and-visual-studio/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Meld Merge on Windows and Visual Studio - Setup and Configuration</div><div class="kg-bookmark-description">I set out to standardize our tooling for diffing and merging code on our git repos so that everyone has the same experience when visualizing code differences, regardless of IDE or operating system. 
Ultimately I settled on Meld.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://kleypot.com/content/images/2019/07/logo.png" alt><span class="kg-bookmark-author">kleypot</span><span class="kg-bookmark-publisher">Andrew Molina</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://kleypot.com/content/images/size/w100/2020/10/20190607_180958_.jpg" alt></div></a></figure><p>Here is a very basic configuration for meld:</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">$ git config --global mergetool.meld.cmd &apos;&quot;C:\Program Files (x86)\Meld\Meld.exe&quot; &quot;$LOCAL&quot; &quot;$BASE&quot; &quot;$REMOTE&quot;&apos;
$ git mergetool -t meld</code></pre><figcaption>Configuring and Running Meld on Windows</figcaption></figure><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image-9.png" class="kg-image" alt loading="lazy" width="1921" height="1400" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-9.png 600w, https://kleypot.com/content/images/size/w1000/2021/08/image-9.png 1000w, https://kleypot.com/content/images/size/w1600/2021/08/image-9.png 1600w, https://kleypot.com/content/images/2021/08/image-9.png 1921w" sizes="(min-width: 1200px) 1200px"><figcaption>Meld Merge - No Auto-merging</figcaption></figure><p>Now we can see all three versions &#x2013; LOCAL, BASE, and REMOTE &#x2013; side-by-side with the differences highlighted. But, notice that no auto-merge has been done here. The result file in the middle is completely identical to the BASE. </p><p>We know that Git performed an auto-merge in the MERGED version, but that version is not represented here. In this example, we have to manually merge <em>all</em> of the differences, even the ones with no conflicts. Not very helpful!</p><p>But notice what happens if you click Changes &gt; Merge All in the Meld menu:</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image-10.png" class="kg-image" alt loading="lazy" width="1921" height="1400" srcset="https://kleypot.com/content/images/size/w600/2021/08/image-10.png 600w, https://kleypot.com/content/images/size/w1000/2021/08/image-10.png 1000w, https://kleypot.com/content/images/size/w1600/2021/08/image-10.png 1600w, https://kleypot.com/content/images/2021/08/image-10.png 1921w" sizes="(min-width: 1200px) 1200px"><figcaption>Meld Merge - After Auto-merging</figcaption></figure><p>Aha! Meld has auto-merged to give us the same result that Visual Studio gives us. 
Now we can compare the result before and after auto-merging to try to understand what Meld is actually doing. Here is a deconstruction of the process:</p><ol><li>Find the common ancestor (the BASE) of the LOCAL and REMOTE versions</li><li>For each part of the code where LOCAL made a change, if REMOTE did NOT make a change, then take the changes from LOCAL.</li><li>For each part of the code where REMOTE made a change, if LOCAL did NOT make a change, then take the changes from REMOTE.</li><li>If LOCAL and REMOTE both changed the same part of the code, take neither side and generate a conflict.</li></ol><p>Take line 19 for example:</p><ul><li>LOCAL: <code>private ApplicationUserManager _userManager;</code></li><li>BASE: <code>private ApplicationUserManager _userManager;</code></li><li>REMOTE: <code>private ApplicationUserManager _applicationUserManager;</code></li></ul><p>The LOCAL version from <code>main</code> matches the BASE version from the common ancestor. This means that there were no commits in LOCAL which changed that line of code.</p><p>The REMOTE version from <code>develop</code> does NOT match the BASE version. This means that someone made a change in develop, so the auto-merge is going to accept that change. </p><p>This is how every implementation of auto-merge works, at a basic level. Visual Studio, meld, and git itself follow this same algorithm. What makes each tool different is how they determine what makes a chunk of code different. For example, some tools can be configured to ignore differences in white space or encoding type. But, for the most part, every merge tool is going to agree on how to automerge and where to present conflicts.</p><h2 id="conclusion">Conclusion</h2><p>Let&apos;s revisit my questions from the beginning and try to answer them.</p><p><em>1. How does Visual Studio know which side to take when it AutoMerges? 
What exactly does &quot;AutoMerge&quot; mean?</em></p><p>A: Visual Studio is using the same basic auto-merge algorithm that every merge tool uses, including git. When you start a merge on a conflicted file, Visual Studio uses git to pull out the BASE version of the file to compare it with the two heads being merged together. Every difference between the BASE and <em>either one</em> of the heads will get auto-merged.</p><p><em>2. Does git perform the AutoMerge? Or is Visual Studio doing it?</em></p><p>A: Git performs its own auto-merge and generates the MERGED version of the file, which becomes part of the working tree (when you open the conflicted file in a text editor, you are opening the MERGED version). Visual Studio also performs its own auto-merge when you start to resolve a conflict. VS does <em>not</em> use the MERGED result from git. VS creates its own result by comparing the branch versions with the common ancestor. This is how every mergetool works.</p><p><em>3. How is the Result file in the bottom pane generated? Is it created by git or by Visual Studio?</em></p><p>A: The Result file is the BASE version, the common ancestor, plus all of the differences introduced by the AutoMerge.</p><p></p><p>Understanding git merge is really important if you want to lead a large project. Especially when junior devs have questions about how this process works, you need to know what you are talking about. Now that I am more comfortable with the default merge algorithm and the terminology, I find myself reaching for more advanced merge options because I actually understand how they differ from each other. 
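</p><p>To make the four-step process above concrete, here is a minimal Python sketch of the chunk rule at the heart of every three-way auto-merge. It is an illustration only &#x2013; real tools first align the three files chunk-by-chunk with a diff algorithm &#x2013; and the function and variable names are my own:</p><figure class="kg-card kg-code-card"><pre><code class="language-python">def merge_chunk(base, local, remote):
    &quot;&quot;&quot;Resolve one aligned chunk of lines, three-way (simplified).&quot;&quot;&quot;
    if local == remote:
        return local   # both sides agree (or neither changed anything)
    if local == base:
        return remote  # only REMOTE changed this chunk: take theirs
    if remote == base:
        return local   # only LOCAL changed this chunk: take ours
    # both sides changed the same chunk: emit a diff3-style conflict
    return ([&quot;&lt;&lt;&lt;&lt;&lt;&lt;&lt; LOCAL&quot;] + local
          + [&quot;||||||| BASE&quot;] + base
          + [&quot;=======&quot;] + remote
          + [&quot;&gt;&gt;&gt;&gt;&gt;&gt;&gt; REMOTE&quot;])

# the &quot;line 19&quot; example from earlier: only REMOTE renamed the field
base   = [&quot;private ApplicationUserManager _userManager;&quot;]
local  = [&quot;private ApplicationUserManager _userManager;&quot;]
remote = [&quot;private ApplicationUserManager _applicationUserManager;&quot;]
print(merge_chunk(base, local, remote)[0])
# -&gt; private ApplicationUserManager _applicationUserManager;</code></pre><figcaption>Three-way auto-merge of a single chunk (illustrative Python)</figcaption></figure><p>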
I also feel more comfortable managing my team&apos;s gitflow now because I know what will and will not happen when two branches are merged together.</p><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[Meld Merge on Windows and Visual Studio]]></title><description><![CDATA[I set out to standardize our tooling for diffing and merging code on our git repos so that everyone has the same experience when visualizing code differences, regardless of IDE or operating system. Ultimately I settled on Meld.]]></description><link>https://kleypot.com/meld-merge-on-windows-and-visual-studio/</link><guid isPermaLink="false">63052aeb3ecc781c55057cab</guid><category><![CDATA[software-development]]></category><category><![CDATA[dev-tools]]></category><category><![CDATA[git]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Thu, 26 Aug 2021 18:41:55 GMT</pubDate><content:encoded><![CDATA[<p>Click <a href="#installation-on-windows">here</a> to skip to the setup instructions.</p><p>Recently, my team has been running into issues with our git merges in Visual Studio. Developers have reported alarming problems like code disappearing or duplicating. Seemingly identical files show huge blocks of differences in Visual Studio&apos;s merge tool. Developer A has a problem with a particular merge conflict, but Developer B is not able to reproduce the same.</p><p>This is a problem which is all-too-familiar for developers using Windows. Some of these issues stem from the fact that a lot of our source code runs on Unix-like systems which can introduce differences in line endings or text encoding. Developers also have their own versions of Visual Studio with their own configurations, or may not use Visual Studio at all. Our novice developers especially had a hard time because the senior developers could only scratch their heads and say, &quot;well, it works for me!&quot;</p><p>Eventually this became enough of a problem that we had to address it. 
I set out to standardize our tooling for diffing and merging code on our git repos so that everyone has the same experience when visualizing code differences, regardless of IDE or operating system. Ultimately I settled on Meld.</p><h2 id="meld-merge">Meld Merge</h2><p>I trialed several popular third-party diff tools and settled on <a href="https://meldmerge.org/">Meld </a>for our team. Meld had the right balance of simplicity and configurability to fit our needs. Meld is open source with a very active community, so it seemed like a safe bet.</p><p>Meld offers many advantages over vsdiffmerge, the tool that ships with Visual Studio:</p><ul><li>Open source, cross platform</li><li>Can work with any IDE or from the terminal</li><li>Configurable text filters</li><li>Intuitive hotkeys</li><li>Ability to resolve conflicts without auto-merging</li></ul><h2 id="global-configuration">Global Configuration</h2><p>For diffing and merging, Visual Studio will honor the global git configuration in <code>C:\Users\YourName\.gitconfig</code>. If you have never looked at this file before, it is worth opening it up and understanding the config. Visual Studio users will see something like this:</p><figure class="kg-card kg-code-card"><pre><code>[merge]
	tool = vsdiffmerge
[diff]
	tool = vsdiffmerge
[difftool]
	prompt = true
[difftool &quot;vsdiffmerge&quot;]
	cmd = \&quot;C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Enterprise\\Common7\\IDE\\CommonExtensions\\Microsoft\\TeamFoundation\\Team Explorer\\vsdiffmerge.exe\&quot; \&quot;$LOCAL\&quot; \&quot;$REMOTE\&quot; //t
	keepBackup = false
[mergetool &quot;vsdiffmerge&quot;]
	cmd = \&quot;C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Enterprise\\Common7\\IDE\\CommonExtensions\\Microsoft\\TeamFoundation\\Team Explorer\\vsdiffmerge.exe\&quot; \&quot;$REMOTE\&quot; \&quot;$LOCAL\&quot; \&quot;$BASE\&quot; \&quot;$MERGED\&quot; //m
	keepBackup = false
	trustExitCode = true</code></pre><figcaption>.gitconfig on Windows</figcaption></figure><p>vsdiffmerge, the built-in tool that ships with Visual Studio, is configured <em>globally</em> as the default tool for diffing and merging code. Whenever you compare two files in Visual Studio, or you open the conflict resolution window, Visual Studio is triggering the commands shown above to launch vsdiffmerge.</p><p>If we change vsdiffmerge to something else, like Meld, Visual Studio will honor that choice as well. And because this is a global configuration, every IDE should follow suit. This is exactly what happens when you run <code>git difftool</code> or <code>git mergetool</code>.</p><p>See the instructions below to update your global config to use Meld.</p><h2 id="installation-on-windows">Installation on Windows</h2><p>Here is a basic setup that we are trying out with all of our developers. This is a good starting point if you are using git with Visual Studio or any IDE really.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://meldmerge.org/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Meld</div><div class="kg-bookmark-description"></div><div class="kg-bookmark-metadata"><span class="kg-bookmark-publisher">Kai Willadsen</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://meldmerge.org/images/meld-mary.png" alt></div></a></figure><ol><li>Install Meld by downloading the Windows msi at <a href="https://meldmerge.org/">https://meldmerge.org</a></li><li>Open Git Bash, the terminal which ships with Git for Windows. If you do not have Git Bash, install it from <a href="https://gitforwindows.org/">https://gitforwindows.org/</a></li><li>In the Git Bash terminal, run the following command. Copy it from here and right-click paste into the terminal, press enter to run it.</li></ol><figure class="kg-card kg-code-card"><pre><code class="language-BASH">git config --global merge.tool meld &amp;&amp;
git config --global mergetool.meld.cmd &apos;&quot;C:\Program Files (x86)\Meld\Meld.exe&quot; --label &quot;LOCAL&gt;BASE&lt;REMOTE&quot; --auto-merge &quot;$LOCAL&quot; &quot;$BASE&quot; &quot;$REMOTE&quot; --output &quot;$MERGED&quot;&apos; &amp;&amp;
git config --global mergetool.meld.keepBackup false &amp;&amp;
git config --global diff.tool meld &amp;&amp;
git config --global difftool.meld.cmd &apos;&quot;C:\Program Files (x86)\Meld\Meld.exe&quot; --label &quot;LOCAL|REMOTE&quot; &quot;$REMOTE&quot; &quot;$LOCAL&quot;&apos; &amp;&amp;
git config --global difftool.meld.keepBackup false</code></pre><figcaption>Git Global Configuration</figcaption></figure><p>4. Test by comparing files or resolving conflicts in Visual Studio. Meld should be launched in a new window for every operation. </p><h2 id="advanced-configuration">Advanced Configuration</h2><p>Running the command above will add some new settings to your <code>.gitconfig</code> file.</p><figure class="kg-card kg-code-card"><pre><code>[mergetool &quot;meld&quot;]
	cmd = \&quot;C:\\Program Files (x86)\\Meld\\Meld.exe\&quot; --label \&quot;LOCAL&gt;BASE&lt;REMOTE\&quot; --auto-merge \&quot;$LOCAL\&quot; \&quot;$BASE\&quot; \&quot;$REMOTE\&quot; --output \&quot;$MERGED\&quot;
	keepBackup = false
[difftool &quot;meld&quot;]
	cmd = \&quot;C:\\Program Files (x86)\\Meld\\Meld.exe\&quot; --label \&quot;LOCAL|REMOTE\&quot; \&quot;$REMOTE\&quot; \&quot;$LOCAL\&quot;
	keepBackup = false</code></pre><figcaption>Excerpt from .gitconfig</figcaption></figure><p>These are the commands that are executed for your diffs and merges. Here are some additional tweaks you can make.</p><ol><li>In the mergetool <code>cmd</code> setting change <code>--auto-merge</code> to <code>--diff</code>. This will disable auto-merging when you resolve a conflict. This way you can start with a clean base file and merge everything by hand. You can still manually auto-merge in Meld by choosing Changes &gt; Merge All.</li><li>In the mergetool <code>cmd</code> setting, swap &quot;LOCAL&quot; and &quot;REMOTE&quot; if you prefer showing the local changes on the right side instead of the left side (this is how vsdiffmerge displays).</li><li>Set the mergetool setting <code>keepBackup = true</code>. This way git will save backup copies of the LOCAL, REMOTE, BASE, and MERGED files. Once you are done merging you can manually delete the backups.</li><li>Add a new setting under mergetool, <code>trustExitCode = true</code>. This way when you save your changes and exit Meld gracefully, Visual Studio will automatically accept and stage your merged result.</li></ol><p>I like to keep a few commands commented out so I can easily toggle between them without screwing something up:</p><figure class="kg-card kg-code-card"><pre><code>[mergetool &quot;meld&quot;]
	cmd = \&quot;C:\\Program Files (x86)\\Meld\\Meld.exe\&quot; --label \&quot;LOCAL&gt;BASE&lt;REMOTE\&quot; --auto-merge \&quot;$LOCAL\&quot; \&quot;$BASE\&quot; \&quot;$REMOTE\&quot; --output \&quot;$MERGED\&quot;
	# cmd = \&quot;C:\\Program Files (x86)\\Meld\\Meld.exe\&quot; --label \&quot;LOCAL&gt;BASE&lt;REMOTE\&quot; --diff \&quot;$LOCAL\&quot; \&quot;$BASE\&quot; \&quot;$REMOTE\&quot; --output \&quot;$MERGED\&quot;
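	# tweak 4 from the list above: uncomment to auto-accept the result on clean exit
	# trustExitCode = true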
	keepBackup = false</code></pre><figcaption>Excerpt from .gitconfig</figcaption></figure><h2 id="going-back-to-vsdiffmerge">Going Back to vsdiffmerge</h2><p>If you screw something up or you just want to go back to the built-in tool in Visual Studio, the best way to roll everything back is to let Visual Studio reset the .gitconfig for you. You can do this by going into the Tools &gt; Options window, then navigating to Source Control &gt; Git Global Settings. From there, click the respective Use Visual Studio link for the operation that you want to change. (this is how to do it in VS2019, but the steps may differ for other versions of VS).</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/08/image.png" class="kg-image" alt loading="lazy" width="923" height="701" srcset="https://kleypot.com/content/images/size/w600/2021/08/image.png 600w, https://kleypot.com/content/images/2021/08/image.png 923w" sizes="(min-width: 720px) 720px"><figcaption>Visual Studio Diff and Merge tools</figcaption></figure><p>Once you click Use Visual Studio, your .gitconfig will be updated automatically. This is a bit safer than editing the file by hand which could introduce mistakes.</p><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[Fully Offline Video Doorbell for Home Assistant]]></title><description><![CDATA[Today I am sharing my setup for a fully offline video doorbell in Home Assistant. 
The camera, motion sensor, and doorbell are all integrated over local IP.]]></description><link>https://kleypot.com/fully-offline-video-doorbell-for-home-assistant/</link><guid isPermaLink="false">63052aeb3ecc781c55057ca9</guid><category><![CDATA[blue-iris]]></category><category><![CDATA[android]]></category><category><![CDATA[home-assistant]]></category><category><![CDATA[home-security]]></category><category><![CDATA[last-watch-ai]]></category><category><![CDATA[home-automation]]></category><category><![CDATA[node-red]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Sun, 02 May 2021 17:00:00 GMT</pubDate><media:content url="https://kleypot.com/content/images/2021/05/1200px-Home_Assistant_Logo.svg-1.png" medium="image"/><content:encoded><![CDATA[<img src="https://kleypot.com/content/images/2021/05/1200px-Home_Assistant_Logo.svg-1.png" alt="Fully Offline Video Doorbell for Home Assistant"><p>Today I am sharing my setup for a fully offline video doorbell in Home Assistant. The live video feed, motion sensor, and doorbell button are all integrated into Home Assistant over local IP. 
This allows me to view the feed and create automations, securely, without exposing my camera to the outside world and without relying on a cloud-connected service.</p><h2 id="the-doorbell">The Doorbell</h2><p>I am using an Amcrest AD110 video doorbell, which is based around a Dahua camera.</p><figure class="kg-card kg-bookmark-card kg-card-hascaption"><a class="kg-bookmark-container" href="https://www.amazon.com/dp/B07ZJS3L5Y"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Amazon.com : Amcrest 1080P Video Doorbell Camera Pro, Outdoor Smart Home 2.4GHz WiFi Doorbell Camera (Wired Power), MicroSD Card, PIR Motion Detect, RTSP, IP55 Weatherproof, 2-Way Audio, 140&#xBA; Wide-Angle AD110 : Electronics</div><div class="kg-bookmark-description">Amazon.com : Amcrest 1080P Video Doorbell Camera Pro, Outdoor Smart Home 2.4GHz WiFi Doorbell Camera (Wired Power), MicroSD Card, PIR Motion Detect, RTSP, IP55 Weatherproof, 2-Way Audio, 140&#xBA; Wide-Angle AD110 : Electronics</div><div class="kg-bookmark-metadata"><span class="kg-bookmark-publisher">Amcrest Direct</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://fls-na.amazon.com/1/batch/1/OP/ATVPDKIKX0DER:140-3100500-8029555:R95PZ77CC030M6HS75JC$uedata=s:%2Frd%2Fuedata%3Fstaticb%26id%3DR95PZ77CC030M6HS75JC%26pty%3DTabletUDP%26spty%3DGlance%26pti%3DB01M8PPTPR:1000" alt="Fully Offline Video Doorbell for Home Assistant"></div></a><figcaption>This is not a sponsored link</figcaption></figure><p>The Amcrest has two critical features for the offline build. First, it has an RTSP feed for recording locally with home NVR software. 
Second, it has a local API to get motion sensor and doorbell events.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/20210501_212837.jpg" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="2000" height="2667" srcset="https://kleypot.com/content/images/size/w600/2021/05/20210501_212837.jpg 600w, https://kleypot.com/content/images/size/w1000/2021/05/20210501_212837.jpg 1000w, https://kleypot.com/content/images/size/w1600/2021/05/20210501_212837.jpg 1600w, https://kleypot.com/content/images/size/w2400/2021/05/20210501_212837.jpg 2400w" sizes="(min-width: 720px) 720px"><figcaption>Amcrest AD110 Video Doorbell Installed</figcaption></figure><p>I installed the AD110 over my existing doorbell according to the manufacturer&apos;s instructions, using the official Amcrest app to set everything up. The installation was no different from any other video doorbell.</p><h2 id="24-7-recording-and-live-monitoring">24/7 Recording and Live Monitoring</h2><p>The AD110 exposes its RTSP stream by default, so the first thing I did was wire it up in Blue Iris. I use Blue Iris to record and monitor all of my home security cameras.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/image-1.png" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1526" height="1077" srcset="https://kleypot.com/content/images/size/w600/2021/05/image-1.png 600w, https://kleypot.com/content/images/size/w1000/2021/05/image-1.png 1000w, https://kleypot.com/content/images/2021/05/image-1.png 1526w" sizes="(min-width: 720px) 720px"><figcaption>Blue Iris - Add Camera</figcaption></figure><p>To add the camera, just enter the RTSP address using the doorbell&apos;s IP, and enter the username and password from the Amcrest app.
The Find/inspect button will fill in the rest of the details.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/image-2.png" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1108" height="1162" srcset="https://kleypot.com/content/images/size/w600/2021/05/image-2.png 600w, https://kleypot.com/content/images/size/w1000/2021/05/image-2.png 1000w, https://kleypot.com/content/images/2021/05/image-2.png 1108w" sizes="(min-width: 720px) 720px"><figcaption>Video tab</figcaption></figure><p></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/image-3.png" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1114" height="1168" srcset="https://kleypot.com/content/images/size/w600/2021/05/image-3.png 600w, https://kleypot.com/content/images/size/w1000/2021/05/image-3.png 1000w, https://kleypot.com/content/images/2021/05/image-3.png 1114w" sizes="(min-width: 720px) 720px"><figcaption>Record tab</figcaption></figure><p>On the Record tab, tick Video and select Continuous recording. Under &quot;video file format and compression&quot;, choose Direct-to-disc (again, this is to reduce CPU usage).</p><p>Now the AD110&apos;s video feed is continuously recording in Blue Iris completely over local IP. I can also securely view the live feed from anywhere using UI3 web player from BI.</p><h2 id="doorbell-and-motion-sensors">Doorbell and Motion Sensors</h2><p>Next, I set up my sensors in Home Assistant. Using dchesterton/<strong><a href="https://github.com/dchesterton/amcrest2mqtt">amcrest2mqtt</a>, </strong>you can easily generate MQTT sensors for the doorbell button and motion sensor. This all works over local IP using the <a href="https://pypi.org/project/amcrest/">Amcrest python module</a> to access the Dahua API. 
</p><p><em>If you have not previously set up MQTT, follow <a href="https://www.home-assistant.io/docs/mqtt/broker/">this guide </a>to set up your broker. Also, make sure your configuration.yaml contains &quot;mqtt:&quot; to enable device discovery.</em></p><p>amcrest2mqtt is a Docker image, so we will install Portainer in Home Assistant to run it.</p><h4 id="1-install-portainer">1. Install Portainer</h4><p>Install the <a href="https://github.com/hassio-addons/addon-portainer/tree/v1.4.0">Portainer add-on</a> and start it up. Once it is running, open the web UI.</p><h4 id="2-add-the-container">2. Add the Container</h4><p>Select the <strong>primary </strong>endpoint. In the side menu, select <strong>Containers. </strong>Use the <strong>+Add Container</strong> button to add a new container. </p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/image-4.png" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1455" height="1050" srcset="https://kleypot.com/content/images/size/w600/2021/05/image-4.png 600w, https://kleypot.com/content/images/size/w1000/2021/05/image-4.png 1000w, https://kleypot.com/content/images/2021/05/image-4.png 1455w" sizes="(min-width: 720px) 720px"><figcaption>Adding the container</figcaption></figure><p>For <strong>Name </strong>enter <code>amcrest2mqtt</code> and for <strong>Image </strong>enter <code>dchesterton/amcrest2mqtt:latest</code>. 
Under <strong>Advanced &gt; Env </strong>add the environment variables specified in the <a href="https://github.com/dchesterton/amcrest2mqtt">README</a>.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/image-5.png" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1954" height="1021" srcset="https://kleypot.com/content/images/size/w600/2021/05/image-5.png 600w, https://kleypot.com/content/images/size/w1000/2021/05/image-5.png 1000w, https://kleypot.com/content/images/size/w1600/2021/05/image-5.png 1600w, https://kleypot.com/content/images/2021/05/image-5.png 1954w" sizes="(min-width: 1200px) 1200px"><figcaption>Setting the container&apos;s environment variables</figcaption></figure><p>Finally, under <strong>Advanced &gt; Restart policy </strong>choose <code>Unless stopped</code>. Deploy the container and ensure that it is running. The container logs should spit out some information about your camera, so check there to make sure the API is responding correctly.</p><figure class="kg-card kg-code-card"><pre><code>02/05/2021 03:35:49 [INFO] Fetching camera details...
02/05/2021 03:35:52 [INFO] Device type: AD110
02/05/2021 03:35:52 [INFO] Serial number: ##############
02/05/2021 03:35:52 [INFO] Software version: 1.000.00AC006.0.R
02/05/2021 03:35:52 [INFO] Device name: Front Door
02/05/2021 03:35:52 [INFO] Writing Home Assistant discovery config...
02/05/2021 03:35:52 [INFO] Fetching storage sensors...
02/05/2021 03:35:54 [INFO] Listening for events...</code></pre><figcaption>Container logs</figcaption></figure><p>With the HOME_ASSISTANT flag set to <code>true</code>, the container will publish discovery messages over MQTT to create several new entities. The container also publishes state changes whenever the doorbell is pressed or motion is detected. As soon as the container is running, you should see the new doorbell device in Home Assistant. Note that the Storage sensors don&apos;t work unless you install an SD card in the doorbell.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/image-7.png" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1604" height="1198" srcset="https://kleypot.com/content/images/size/w600/2021/05/image-7.png 600w, https://kleypot.com/content/images/size/w1000/2021/05/image-7.png 1000w, https://kleypot.com/content/images/size/w1600/2021/05/image-7.png 1600w, https://kleypot.com/content/images/2021/05/image-7.png 1604w" sizes="(min-width: 720px) 720px"><figcaption>MQTT Device config in Home Assistant</figcaption></figure><p>Now you can test the binary_sensor entities by triggering the motion sensor or the doorbell button. In my experience, the sensors are very responsive &#x2013; they are usually as fast as, or faster than, the actual Amcrest app. The sensors also continue to work when disconnected from the internet.</p><h2 id="optional-block-internet-access">Optional: Block Internet Access</h2><p>At this point, I had everything I needed from the doorbell &#x2013; a live feed, continuous recording, and hardware events &#x2013; and I decided to take my doorbell completely offline. I did this using my router to block that IP from the internet. This way I don&apos;t have to worry about security vulnerabilities which might allow someone to snoop on my camera.
</p><p>The trade-off is that the Amcrest app no longer works for answering the doorbell, but that is a feature which I can live without. I always have the option to disable the block if I want those features again temporarily.</p><h2 id="automation-example-1-virtual-peepholes">Automation Example #1 - Virtual Peepholes</h2><p>I have several kiosk tablets around the house running Fully Kiosk, so the first automation I set up was to bring up the live feed on each tablet when the doorbell is pressed.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/20210502_100859--1-.jpg" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="2000" height="1500" srcset="https://kleypot.com/content/images/size/w600/2021/05/20210502_100859--1-.jpg 600w, https://kleypot.com/content/images/size/w1000/2021/05/20210502_100859--1-.jpg 1000w, https://kleypot.com/content/images/size/w1600/2021/05/20210502_100859--1-.jpg 1600w, https://kleypot.com/content/images/size/w2400/2021/05/20210502_100859--1-.jpg 2400w" sizes="(min-width: 720px) 720px"><figcaption>Doorbell Live Feed</figcaption></figure><p>First, I added a view in Lovelace to show the live feed. This new view will be loaded up by my automation when the doorbell is pressed. The code here would depend on how you choose to integrate camera feeds into Home Assistant. I kept it very simple by using an iframe to pull up the Blue Iris web player.</p><figure class="kg-card kg-code-card"><pre><code class="language-YAML">title: Doorbell
path: doorbell-cameras
panel: true
cards:
- type: iframe
  url: !secret doorbell_480p_url
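  # doorbell_480p_url is defined in secrets.yaml; here it holds the Blue Iris web player URL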
  aspect_ratio: &apos;16:9&apos;</code></pre><figcaption>doorbell-camera.yaml</figcaption></figure><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://kleypot.com/home-assistant-blue-iris-ui3-player-in-lovelace-ui/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Home Assistant - Better Blue Iris Integration using Lovelace iframes</div><div class="kg-bookmark-description">In this post, I will show how I greatly improved the performance of my Blue Iris cameras in Home Assistant. Rather than using the camera component, I am using iframes to directly embed the UI3 interface into Lovelace.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://kleypot.com/content/images/2019/07/logo.png" alt="Fully Offline Video Doorbell for Home Assistant"><span class="kg-bookmark-author">kleypot</span><span class="kg-bookmark-publisher">Andrew Molina</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://kleypot.com/content/images/size/w100/2020/10/20190607_180958_.jpg" alt="Fully Offline Video Doorbell for Home Assistant"></div></a></figure><p>Next, I set up my automations to load up the view when the doorbell button is pressed. This was very easy to do because I already have REST services set up in Home Assistant to control each kiosk.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://kleypot.com/fully-kiosk-rest-api-integration-in-home-assistant/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Fully Kiosk Rest API Integration in Home Assistant</div><div class="kg-bookmark-description">10/25/20 Update - The isScreenOn property has been changed to screenOn in laterversions of Fully Kiosk. Check your JSON output for the correct name In this post I&#x2019;ll show how I exposed the backlight of my crappy old Androidtablet as a Light in Home Assistant using the Fully Kiosk Rest API. 
Next, &#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://kleypot.com/content/images/2019/07/logo.png" alt="Fully Offline Video Doorbell for Home Assistant"><span class="kg-bookmark-author">kleypot</span><span class="kg-bookmark-publisher">Andrew Molina</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://kleypot.com/content/images/2020/02/Home_Assistant_Logo-1.jpg" alt="Fully Offline Video Doorbell for Home Assistant"></div></a></figure><p>I used Node-RED for the automation, but this could very easily be scripted out in YAML. It consists of three service calls:</p><ol><li>Turn on the screen, in case it was turned off for some reason</li><li>Bring the Fully Kiosk app to the front, in case it had been minimized</li><li>Navigate to the new doorbell view</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/image-9.png" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="910" height="354" srcset="https://kleypot.com/content/images/size/w600/2021/05/image-9.png 600w, https://kleypot.com/content/images/2021/05/image-9.png 910w" sizes="(min-width: 720px) 720px"><figcaption>Node-RED flow using the Fully Kiosk REST service</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-JSON">[{&quot;id&quot;:&quot;741d75fd.e0135c&quot;,&quot;type&quot;:&quot;trigger-state&quot;,&quot;z&quot;:&quot;2ef67aae.d4bfe6&quot;,&quot;name&quot;:&quot;Doorbell 
Rung&quot;,&quot;server&quot;:&quot;a86c4410.e2a568&quot;,&quot;exposeToHomeAssistant&quot;:false,&quot;haConfig&quot;:[{&quot;property&quot;:&quot;name&quot;,&quot;value&quot;:&quot;&quot;},{&quot;property&quot;:&quot;icon&quot;,&quot;value&quot;:&quot;&quot;}],&quot;entityid&quot;:&quot;binary_sensor.front_door_doorbell&quot;,&quot;entityidfiltertype&quot;:&quot;exact&quot;,&quot;debugenabled&quot;:false,&quot;constraints&quot;:[{&quot;targetType&quot;:&quot;this_entity&quot;,&quot;targetValue&quot;:&quot;&quot;,&quot;propertyType&quot;:&quot;current_state&quot;,&quot;comparatorType&quot;:&quot;is&quot;,&quot;comparatorValueDatatype&quot;:&quot;str&quot;,&quot;comparatorValue&quot;:&quot;on&quot;,&quot;propertyValue&quot;:&quot;new_state.state&quot;}],&quot;outputs&quot;:2,&quot;customoutputs&quot;:[],&quot;outputinitially&quot;:false,&quot;state_type&quot;:&quot;str&quot;,&quot;x&quot;:220,&quot;y&quot;:3000,&quot;wires&quot;:[[&quot;b0291746.cff2c8&quot;],[]]},{&quot;id&quot;:&quot;b0291746.cff2c8&quot;,&quot;type&quot;:&quot;api-call-service&quot;,&quot;z&quot;:&quot;2ef67aae.d4bfe6&quot;,&quot;name&quot;:&quot;Turn on screen&quot;,&quot;server&quot;:&quot;a86c4410.e2a568&quot;,&quot;version&quot;:1,&quot;debugenabled&quot;:false,&quot;service_domain&quot;:&quot;rest_command&quot;,&quot;service&quot;:&quot;wall_tablet_kiosk_command&quot;,&quot;entityId&quot;:&quot;&quot;,&quot;data&quot;:&quot;{\&quot;cmd\&quot;:\&quot;screenOn\&quot;}&quot;,&quot;dataType&quot;:&quot;json&quot;,&quot;mergecontext&quot;:&quot;&quot;,&quot;output_location&quot;:&quot;&quot;,&quot;output_location_type&quot;:&quot;none&quot;,&quot;mustacheAltTags&quot;:false,&quot;x&quot;:410,&quot;y&quot;:3000,&quot;wires&quot;:[[&quot;b8514a52.6e3fd8&quot;]]},{&quot;id&quot;:&quot;b8514a52.6e3fd8&quot;,&quot;type&quot;:&quot;api-call-service&quot;,&quot;z&quot;:&quot;2ef67aae.d4bfe6&quot;,&quot;name&quot;:&quot;Bring app to 
foreground&quot;,&quot;server&quot;:&quot;a86c4410.e2a568&quot;,&quot;version&quot;:1,&quot;debugenabled&quot;:false,&quot;service_domain&quot;:&quot;rest_command&quot;,&quot;service&quot;:&quot;wall_tablet_kiosk_command&quot;,&quot;entityId&quot;:&quot;&quot;,&quot;data&quot;:&quot;{\&quot;cmd\&quot;:\&quot;toForeground\&quot;}&quot;,&quot;dataType&quot;:&quot;json&quot;,&quot;mergecontext&quot;:&quot;&quot;,&quot;output_location&quot;:&quot;&quot;,&quot;output_location_type&quot;:&quot;none&quot;,&quot;mustacheAltTags&quot;:false,&quot;x&quot;:620,&quot;y&quot;:2980,&quot;wires&quot;:[[&quot;4cbd748a.3d9d9c&quot;]]},{&quot;id&quot;:&quot;4cbd748a.3d9d9c&quot;,&quot;type&quot;:&quot;api-call-service&quot;,&quot;z&quot;:&quot;2ef67aae.d4bfe6&quot;,&quot;name&quot;:&quot;Navigate to view&quot;,&quot;server&quot;:&quot;a86c4410.e2a568&quot;,&quot;version&quot;:1,&quot;debugenabled&quot;:false,&quot;service_domain&quot;:&quot;rest_command&quot;,&quot;service&quot;:&quot;wall_tablet_kiosk_command&quot;,&quot;entityId&quot;:&quot;&quot;,&quot;data&quot;:&quot;{\&quot;cmd\&quot;:\&quot;loadURL\&quot;, 
\&quot;url\&quot;:\&quot;http://192.168.1.53:8123/wall-tablet/doorbell-cameras\&quot;}&quot;,&quot;dataType&quot;:&quot;json&quot;,&quot;mergecontext&quot;:&quot;&quot;,&quot;output_location&quot;:&quot;&quot;,&quot;output_location_type&quot;:&quot;none&quot;,&quot;mustacheAltTags&quot;:false,&quot;x&quot;:830,&quot;y&quot;:2960,&quot;wires&quot;:[[]]},{&quot;id&quot;:&quot;4c31f2b2.94f04c&quot;,&quot;type&quot;:&quot;inject&quot;,&quot;z&quot;:&quot;2ef67aae.d4bfe6&quot;,&quot;name&quot;:&quot;test&quot;,&quot;props&quot;:[{&quot;p&quot;:&quot;payload&quot;},{&quot;p&quot;:&quot;topic&quot;,&quot;vt&quot;:&quot;str&quot;}],&quot;repeat&quot;:&quot;&quot;,&quot;crontab&quot;:&quot;&quot;,&quot;once&quot;:false,&quot;onceDelay&quot;:0.1,&quot;topic&quot;:&quot;&quot;,&quot;payload&quot;:&quot;&quot;,&quot;payloadType&quot;:&quot;date&quot;,&quot;x&quot;:240,&quot;y&quot;:2960,&quot;wires&quot;:[[&quot;b0291746.cff2c8&quot;]]},{&quot;id&quot;:&quot;a86c4410.e2a568&quot;,&quot;type&quot;:&quot;server&quot;,&quot;name&quot;:&quot;Home Assistant&quot;,&quot;legacy&quot;:false,&quot;addon&quot;:true,&quot;rejectUnauthorizedCerts&quot;:true,&quot;ha_boolean&quot;:&quot;y|yes|true|on|home|open&quot;,&quot;connectionDelay&quot;:true,&quot;cacheJson&quot;:true}]</code></pre><figcaption>Node-RED flow</figcaption></figure><p>I repeated this flow for each of my kiosks. Now when the doorbell is rung, the live feed pops up on all of my tablets giving me a virtual peephole for the front door no matter where I am in the house. </p><h2 id="automation-example-2-ai-enhanced-push-notifications">Automation Example #2 - AI Enhanced Push Notifications</h2><p>The next automation I set up was mobile push notifications which trigger when someone is at the front door. At first, I used the doorbell press as my trigger, but I was missing a lot of events because my local delivery drivers usually do not press the doorbell. 
I decided instead to use motion events to trigger the notifications, using AI to filter the events down to only those which actually contain people.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/Screenshot_20210502-110533_Gmail.jpg" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1080" height="2280" srcset="https://kleypot.com/content/images/size/w600/2021/05/Screenshot_20210502-110533_Gmail.jpg 600w, https://kleypot.com/content/images/size/w1000/2021/05/Screenshot_20210502-110533_Gmail.jpg 1000w, https://kleypot.com/content/images/2021/05/Screenshot_20210502-110533_Gmail.jpg 1080w" sizes="(min-width: 720px) 720px"><figcaption>Doorbell Person Alert Notification</figcaption></figure><p>This automation relies on a bunch of tools which I have already set up &#x2013; Blue Iris, <a href="https://kleypot.com/last-watch-ai-blue-iris-integration/">Last Watch AI</a>, Samba sharing, Android/iOS notifications, and Node-RED. It was easy for me to add this automation, but to show all of the prerequisite setup would take a whole post. Here is just a summary of how it works:</p><ol><li>Blue Iris triggers motion events when a person approaches the doorbell. The motion detection is fine-tuned to only trigger on very large objects (otherwise it will trigger when the person is far away and hard to identify).</li><li>Last Watch AI checks the event snapshot for humans. If a person is in the event, the snapshot is sent over to Home Assistant.</li><li>A Node-RED flow is triggered when the new event comes in to Home Assistant. The flow publishes push notifications containing the event snapshot to each mobile device.</li><li>The Node-RED flow then blocks itself until the person has cleared the area. I already have AI sensors that turn on whenever a person is in the driveway, so I just wait for those to turn off. 
This is important because we only want to generate one notification per &quot;approach&quot;.</li></ol><p>The result is a push notification that consistently shows every approach to the doorbell. The notifications almost always show the person when they are close to and facing the camera, which is perfect for package deliveries because you can see the driver holding the package.</p><p>Tapping the notification also brings up the full size image along with the live feed.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/05/Screenshot_20210502-114043_Home-Assistant.jpg" class="kg-image" alt="Fully Offline Video Doorbell for Home Assistant" loading="lazy" width="1080" height="2280" srcset="https://kleypot.com/content/images/size/w600/2021/05/Screenshot_20210502-114043_Home-Assistant.jpg 600w, https://kleypot.com/content/images/size/w1000/2021/05/Screenshot_20210502-114043_Home-Assistant.jpg 1000w, https://kleypot.com/content/images/2021/05/Screenshot_20210502-114043_Home-Assistant.jpg 1080w" sizes="(min-width: 720px) 720px"><figcaption>Doorbell view in Home Assistant mobile app</figcaption></figure><h2 id="other-automations">Other Automations</h2><p>Here are the other automations I am working on:</p><ul><li>Push notifications for doorbell button presses</li><li>Record and play back video clips of recent doorbell approaches</li><li>Turn on lights inside the house if a person approaches while nobody is home to make it look like someone is home</li><li>Automatically unlock doors with AI face detection</li></ul><p>Thanks for reading!</p><!--kg-card-begin: markdown--><script type="text/javascript" src="https://cdnjs.buymeacoffee.com/1.0.0/button.prod.min.js" data-name="bmc-button" data-slug="akmolina28" data-color="#5F7FFF" data-emoji="&#x1F37A;" data-font="Cookie" data-text="Buy me a beer" data-outline-color="#000000" data-font-color="#ffffff" data-coffee-color="#FFDD00"></script><!--kg-card-end: 
markdown-->]]></content:encoded></item><item><title><![CDATA[Home Assistant - Better Blue Iris Integration]]></title><description><![CDATA[In this post, I will show how I greatly improved the performance of my Blue Iris cameras in Home Assistant. Rather than using the camera component, I am using iframes to directly embed the UI3 interface into Lovelace.]]></description><link>https://kleypot.com/home-assistant-blue-iris-ui3-player-in-lovelace-ui/</link><guid isPermaLink="false">63052aeb3ecc781c55057ca8</guid><category><![CDATA[home-assistant]]></category><category><![CDATA[home-automation]]></category><category><![CDATA[blue-iris]]></category><category><![CDATA[home-security]]></category><category><![CDATA[networking]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Sun, 25 Apr 2021 02:47:06 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/Screen_Recording_20210424-143648_Home-Assistant_1.gif" class="kg-image" alt loading="lazy" width="368" height="776"><figcaption>Blue Iris UI3 iframed into Lovelace</figcaption></figure><p>In this post, I will show how I greatly improved the performance of my Blue Iris cameras in Home Assistant. Rather than using the camera component, I am using iframes to directly embed the UI3 interface into Lovelace. At the end of this post I have included a <a href="#setup-guide">setup guide</a> to show you how I achieved the following improvements:</p><ul><li>90% less bandwidth per camera</li><li>Low latency and high responsiveness</li><li>High quality</li><li>Externally accessible over secure HTTPS</li></ul><h2 id="bandwidth-issues">Bandwidth Issues</h2><p>I have multiple kiosks around the house which have camera feeds open, and recently when I added a new camera to my house I noticed that my wireless network started to choke. 
That&apos;s when I looked at the connections in Blue Iris to see how much my cameras were impacting the network.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-2.png" class="kg-image" alt loading="lazy" width="927" height="382" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-2.png 600w, https://kleypot.com/content/images/2021/04/image-2.png 927w" sizes="(min-width: 720px) 720px"><figcaption>Blue Iris Connection Status</figcaption></figure><p>It turns out that my connections to Home Assistant were constantly pulling around 30mbps. If I opened the HA app on my phone, the bandwidth would go up even more. Add in a few devices streaming Netflix and YouTube and it is pretty clear why the network was starting to die. Just to confirm, I turned off the BI web server and saw that all of my network issues went away.</p><h2 id="mjpeg-camera-component">Mjpeg Camera Component</h2><p>Up until this point, I was using the mjpeg camera platform for each of my cameras in Blue Iris. Here is a simplified example:</p><figure class="kg-card kg-code-card"><pre><code class="language-YAML">camera:
- platform: mjpeg
  mjpeg_url: http://[BI_PI_ADDR]:[BI_PORT]/mjpg/[BI_CAMERA_SHORTNAME]
  name: Driveway Camera</code></pre><figcaption>Blue Iris mjpeg integration</figcaption></figure><p>To check the bandwidth, you can open the url shown above in a browser and check the status in Blue Iris.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-3.png" class="kg-image" alt loading="lazy" width="928" height="378" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-3.png 600w, https://kleypot.com/content/images/2021/04/image-3.png 928w" sizes="(min-width: 720px) 720px"><figcaption>Single mjpeg stream performance</figcaption></figure><p>What I found above is that a single stream from a 480p (!!!) camera pulls around 6mbps. I knew that mjpeg streams were uncompressed but I had no idea that a low definition feed would pull so much bandwidth. </p><p>You can use some tricks like specifying the size and quality in the url, for example <code>http://[BI_PI_ADDR]:[BI_PORT]/mjpg/[BI_CAMERA_SHORTNAME]?q=25&amp;s=50</code>. But this just turns the stream into a potato.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-4.png" class="kg-image" alt loading="lazy" width="801" height="602" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-4.png 600w, https://kleypot.com/content/images/2021/04/image-4.png 801w" sizes="(min-width: 720px) 720px"><figcaption>Quality and Size cut in half using URL parameters</figcaption></figure><p>And this beautiful picture still requires 1mbps of bandwidth per connection. Not great!</p><h2 id="h264-live-streams">H264 Live Streams</h2><p>At this point, I spent a long time playing around with different ways to get h264 streaming working. The idea is that HA will pass through a highly compressed stream which gets decoded and rendered on the tablet or smartphone. 
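</p><p>In configuration terms, that pass-through looks roughly like this in <code>configuration.yaml</code> (a sketch only; the RTSP URL and credentials are placeholders):</p>

```yaml
# Enable the HLS-based Stream integration
stream:

# ffmpeg camera pulling an RTSP feed straight from the camera
camera:
  - platform: ffmpeg
    name: Driveway Camera H264
    input: rtsp://user:pass@192.168.1.31:554/stream1
```

<p>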
I tried the Stream component and the ffmpeg platform, and in the rare instances when I got a stream to play on my device, I had three major issues:</p><ol><li>The stream took several seconds to buffer and open, and often stopped and buffered some more.</li><li>The decoding was never very good. The picture was always blocky and the decoder could never smooth things out. </li><li>The stream was 15-30 seconds behind realtime. This seems to be a common problem and major drawback of HLS streams.</li></ol><p>And on top of all of that, the h264 streams still pulled 2-3mbps, almost as bad as the mjpeg streams. Regardless, the 30-second delay was enough of an issue for me to rule out HLS streams as an option.</p><h2 id="blue-iris-ui3-web-viewer">Blue Iris UI3 Web Viewer</h2><p>Once I had given up on mjpeg and h264 streams, I thought I was out of options. I began to dig into the built-in web viewer from Blue Iris to see if I could learn how it was handling camera feeds. Out of curiosity, I checked the status page and I was amazed by what I saw.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-7.png" class="kg-image" alt loading="lazy" width="988" height="630" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-7.png 600w, https://kleypot.com/content/images/2021/04/image-7.png 988w" sizes="(min-width: 720px) 720px"><figcaption>480p camera in Blue Iris UI3</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-6.png" class="kg-image" alt loading="lazy" width="920" height="375" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-6.png 600w, https://kleypot.com/content/images/2021/04/image-6.png 920w" sizes="(min-width: 720px) 720px"><figcaption>Blue Iris Connection Status</figcaption></figure><p>When viewing the same camera, at its native resolution and framerate, UI3 is only pulling around 500kbps. 
This is much more in line with what I would expect from a standard definition h264 stream. Even better, the playback is extremely smooth, without any of the freezing/buffering/pixelation issues I had experienced so far. </p><p>This is exactly what I want in my Lovelace views &#x2013; the best of both worlds: high quality and low bandwidth. The rest of this guide will walk through how I was able to render UI3 into Lovelace and make it accessible both inside and outside of my home network.</p><h1 id="setup-guide">Setup Guide</h1><p>The basic idea is to use iframes to render UI3 as a Lovelace card by setting URL parameters to load UI3 in a specific way. For example, a url like</p><p><code>https://[BI_HOST]:[BI_PORT]/ui3.htm?maximize=1&amp;cam=driveway</code></p><p>will open the web player with the driveway camera maximized. If you configure a Lovelace iframe card with this url, you will get a camera tile like this:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-22.png" class="kg-image" alt loading="lazy" width="561" height="569"><figcaption>UI3 iframe card</figcaption></figure><p>If you only use Home Assistant at home and you don&apos;t use HTTPS, then this is all you would have to do. But if you want to access Home Assistant and view your cameras from outside of your network, there is a bit more setup involved.</p><h2 id="remote-access-over-https">Remote Access over HTTPS</h2><p>The first step is to make Blue Iris accessible from the internet. You can do this for free with DuckDNS and Let&apos;s Encrypt:</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://community.home-assistant.io/t/integrating-blue-iris-into-home-assistant/125651"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Integrating Blue Iris into Home Assistant</div><div class="kg-bookmark-description">Introduction Hey everyone! 
Recently, I went about redoing my hass.io installation in a Proxmox VM since my Raspberry Pi was barely keeping up with the workload. While I was setting up my components and integrations, I realized that one huge part of my home automation system was missing. My secur&#x2026;</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://community-assets.home-assistant.io/optimized/3X/f/8/f8099f0c21c870a655cc57d43ea304d6e3caf7d7_2_180x180.png" alt><span class="kg-bookmark-author">Home Assistant Community</span><span class="kg-bookmark-publisher">nickdaria</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://community-assets.home-assistant.io/original/3X/8/0/80d3f33958a234aab5cb9364b998048064d12b9d.png" alt></div></a></figure><p>This guide is a great starting point, but I deviated a bit because I also want Home Assistant to be available externally. You should follow the guide above, but with these extra steps:</p><p>1. First, set up a new domain on <a href="http://www.duckdns.org/">DuckDNS</a>. For example if you were already using &quot;mydomain.duckdns.org&quot; for your Home Assistant, add another one like &quot;mydomainbi.duckdns.org&quot;.</p><p>2. Update your DuckDNS add-on config to include your new domain and restart the add-on. Check the logs to make sure the add-on updated your SSL certificate with the new domain.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-9.png" class="kg-image" alt loading="lazy" width="1058" height="480" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-9.png 600w, https://kleypot.com/content/images/size/w1000/2021/04/image-9.png 1000w, https://kleypot.com/content/images/2021/04/image-9.png 1058w" sizes="(min-width: 720px) 720px"><figcaption>Duck DNS add-on configuration</figcaption></figure><p>3. Now follow the guide until you get to the step where you are setting up the SSL certificate. 
Instead of generating a new SSL cert with Let&apos;s Encrypt, add a Custom certificate. Name it with your &quot;mydomainbi.duckdns.org&quot; domain and upload the certificate and key files from your HA instance. Then repeat the process using your other domain name. Nginx should show two certificates now &#x2013; one will be used for HA and the other for BI.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-10.png" class="kg-image" alt loading="lazy" width="980" height="430" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-10.png 600w, https://kleypot.com/content/images/2021/04/image-10.png 980w" sizes="(min-width: 720px) 720px"><figcaption>Nginx SSL certificates</figcaption></figure><p>4. Next, you will need to create <strong>two</strong> hosts. One host should point &quot;mydomain.duckdns.org&quot; to port 8123 on your Home Assistant. The second host will point &quot;mydomainbi.duckdns.org&quot; to your BI server. For each host, choose the SSL cert that matches its domain name.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-11.png" class="kg-image" alt loading="lazy" width="1189" height="268" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-11.png 600w, https://kleypot.com/content/images/size/w1000/2021/04/image-11.png 1000w, https://kleypot.com/content/images/2021/04/image-11.png 1189w"><figcaption>Proxy hosts</figcaption></figure><p>5. Proceed through the BI setup from the guide, but make sure to use your new DuckDNS domain name as the external URL for the BI web server.</p><p>Once you finish the BI setup, you should be able to access both HA and BI externally on your two DuckDNS domain names. 
This is all you need to make the iframes work.</p><h2 id="lovelace-config">Lovelace Config</h2><p>Now you can add iframe cards to your Lovelace config, for example:</p><pre><code class="language-YAML">- type: iframe
  url: https://mydomainbi.duckdns.org/ui3.htm?tab=live&amp;maximize=1&amp;cam=driveway&amp;group=all_cams&amp;streamingprofile=480p
  aspect_ratio: &apos;16:9&apos;</code></pre><p>A bunch of URL options are available to play with, including:</p><ul><li><strong>tab </strong>- which side-panel should be shown if the camera is minimized, either &quot;live&quot; or &quot;clips&quot;</li><li><strong>maximize </strong>- set to 1 to maximize the video and hide the sidebars</li><li><strong>cam </strong>- the shortname of the camera to show when maximized</li><li><strong>group </strong>- the camera group to show when the camera is minimized</li><li><strong>streamingprofile </strong>- change the quality (ex: 480p, 720p, 1080p)</li></ul><p>Add the iframe to your config and see if it loads. You may be asked to authenticate the first time you connect. Then try turning off your wifi on your phone to see if you can see the iframe externally.</p><h2 id="dns-setup">DNS Setup</h2><p>The only problem now is that when you are connected to your LAN, the iframe card will still load over the internet because you supplied the external web address in the config. You can confirm this by unplugging the internet from your router/modem. Once your network is no longer connected to the internet, your iframe cards will fail to load. We want to fix this so that we can view our cameras over LAN, without needing to go through the internet.</p><p>The simplest way to fix this problem is with a custom DNS record. If you do not have your own DNS server, do yourself a favor and set up a <a href="https://pi-hole.net/">Pi-hole</a>. I use a Pi-hole at home, so all I had to do was add a custom record to route my Blue Iris domain to my Home Assistant IP. 
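</p><p>Under the hood, a Pi-hole &quot;Local DNS&quot; record is just a hosts-style entry (typically stored in <code>/etc/pihole/custom.list</code>) pairing the LAN IP of the proxy with the domain name; both values below are placeholders:</p>

```
192.168.1.53  mydomainbi.duckdns.org
```

<p>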
I used my Home Assistant IP because that is the IP of my Nginx proxy.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-12.png" class="kg-image" alt loading="lazy" width="1000" height="241" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-12.png 600w, https://kleypot.com/content/images/2021/04/image-12.png 1000w"><figcaption>Pi-hole custom DNS record</figcaption></figure><p>Now, all requests to my BI domain in my LAN will be routed back down to my Nginx proxy, rather than being routed out through the internet. This way I can access my cameras in my LAN even if my internet is down.</p><h2 id="stream-optimization-optional-">Stream Optimization (optional)</h2><p>Now that we are using the UI3 player, we should see a significant reduction in bandwidth. But there is one more trick to squeeze out even more performance without sacrificing quality.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-15.png" class="kg-image" alt loading="lazy" width="919" height="371" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-15.png 600w, https://kleypot.com/content/images/2021/04/image-15.png 919w" sizes="(min-width: 720px) 720px"><figcaption>480p camera at native resolution and bitrate</figcaption></figure><p>First, go to <strong>Blue Iris Settings &gt; Web Server &gt; Advanced </strong>and configure Streaming Profile 0. Reduce the quality and bitrate to an acceptable level and save everything. Here are the levels I used:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-13.png" class="kg-image" alt loading="lazy" width="546" height="691"><figcaption>Blue Iris Streaming Profile</figcaption></figure><p>Next, go into Settings &gt; Users and edit your user profile. 
Check the box to limit bandwidth and set a maximum framerate around 10-15 FPS, and choose the Streaming Profile you set up in the previous step.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-18.png" class="kg-image" alt loading="lazy" width="359" height="94"><figcaption>Limit bandwidth for admin user</figcaption></figure><p>Once you click OK, the changes should take effect immediately, but you may need to restart the BI service.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-17.png" class="kg-image" alt loading="lazy" width="927" height="378" srcset="https://kleypot.com/content/images/size/w600/2021/04/image-17.png 600w, https://kleypot.com/content/images/2021/04/image-17.png 927w" sizes="(min-width: 720px) 720px"><figcaption>Reduced bandwidth</figcaption></figure><p>Now the bandwidth has been cut nearly in half while keeping the quality at an acceptable level.</p><h2 id="conclusion">Conclusion</h2><p>That&apos;s it! Now I am using h264 encoding the way it was intended &#x2013; low bitrate and high quality. I have fixed all of the bottlenecking in my home network, and my Lovelace cameras work better than ever.</p><p>Next, I am working on a way to show a list of BI recordings in lovelace, with direct links to launch the clips. 
Feel free to subscribe to my RSS or Twitter to get updates if you are interested in more features like this.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/04/image-21.png" class="kg-image" alt loading="lazy" width="243" height="454"><figcaption>Lovelace BI Clip Player (work in progress)</figcaption></figure><p>Thanks for reading!</p><!--kg-card-begin: html--><script type="text/javascript" src="https://cdnjs.buymeacoffee.com/1.0.0/button.prod.min.js" data-name="bmc-button" data-slug="akmolina28" data-color="#5F7FFF" data-emoji="&#x1F37A;" data-font="Cookie" data-text="Buy me a beer" data-outline-color="#000000" data-font-color="#ffffff" data-coffee-color="#FFDD00"></script><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Last Watch AI Update - v1.1 Released]]></title><description><![CDATA[<p>It has been about 4 months since Last Watch AI was first <a href="https://kleypot.com/last-watch-ai-introduction/">introduced</a>. I received a lot of great feedback from the community and today I am finally releasing a new major version. Full details below.</p><p><a href="https://github.com/akmolina28/last-watch-ai/releases/tag/1.1.1">Last Watch AI - Release 1.1.1</a></p><p>The most difficult part of this</p>]]></description><link>https://kleypot.com/last-watch-ai-v1-1-released/</link><guid isPermaLink="false">63052aeb3ecc781c55057ca6</guid><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Fri, 26 Mar 2021 15:52:35 GMT</pubDate><content:encoded><![CDATA[<p>It has been about 4 months since Last Watch AI was first <a href="https://kleypot.com/last-watch-ai-introduction/">introduced</a>. I received a lot of great feedback from the community and today I am finally releasing a new major version. 
Full details below.</p><p><a href="https://github.com/akmolina28/last-watch-ai/releases/tag/1.1.1">Last Watch AI - Release 1.1.1</a></p><p>The most difficult part of this project is making it work smoothly on Windows and Linux (still looking for Mac users!). Linux is the preferred environment, but a lot of people out there still prefer Windows. Many IP camera programs like Blue Iris are Windows-only, and most of those users understandably don&apos;t want to bother with standing up a separate Linux server. Docker has not been perfect but it is working well enough, so platform independence will continue to be a priority for Last Watch.</p><p>Now that I have smoothed out all of the issues with the core functionality, I will be turning my attention to a long list of feature requests and quality-of-life improvements. I am especially interested in improving the setup and upgrade process. Please open a ticket in Github if you have any suggestions &#x2013; or better yet, open a pull request!</p><h3 id="file-watcher-and-file-storage-improvements">File Watcher and File Storage Improvements</h3><p>The biggest issues in the previous version of Last Watch have to do with performance while accessing the file system, with many users reporting high CPU usage or high latency when detecting new image files. As it turns out, mounting the watch folder as a volume in docker and hosting the files from the mount was not a great approach.</p><p>In v1.1 I have completely re-written the File Watcher so that it moves files out of the watch folder and into the public folder of the web server. This fixes the performance issues by keeping the watch folder virtually empty. </p><p>As a result of this change, the web server now stores the images instead of leaving them in the watch folder. This allowed for some other much needed improvements like image compression, preview thumbnails, and automatic deletion. 
</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/03/image.png" class="kg-image" alt loading="lazy" width="735" height="639" srcset="https://kleypot.com/content/images/size/w600/2021/03/image.png 600w, https://kleypot.com/content/images/2021/03/image.png 735w" sizes="(min-width: 720px) 720px"><figcaption>Detection Event Thumbnails</figcaption></figure><h3 id="high-priority-automations">High Priority Automations</h3><p>Last Watch handles all of the AI and automation work using queued worker threads. In version 0.x all worker threads had the same priority, meaning the AI detection, webhook calls, and all of the automations were handled on a first-come first-serve basis. This could be a problem if one of your automations needed to be near real-time, such as triggering a video recording. There could be situations where an important automation was blocked for several seconds while waiting for other jobs to finish.</p><p>In version 1.1 you can now designate automations to be High Priority when attaching them to a profile.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/03/image-1.png" class="kg-image" alt loading="lazy" width="958" height="595" srcset="https://kleypot.com/content/images/size/w600/2021/03/image-1.png 600w, https://kleypot.com/content/images/2021/03/image-1.png 958w" sizes="(min-width: 720px) 720px"><figcaption>High Priority Automations</figcaption></figure><p>High priority jobs will be processed first. By default, all automations are low priority and the AI job is medium. </p><h3 id="webhook-improvements">Webhook Improvements</h3><p>The original idea behind the webhook was to expose an API that allowed you to POST image files without having to write images to the watch folder. In version 1.1 the webhook has been rewritten to make this possible. 
The File Watcher is now just a wrapper around the webhook &#x2013; you can completely disable the watcher and POST images to the webhook yourself. See the <a href="https://github.com/akmolina28/last-watch-ai/blob/master/docs/API.md#event-webhook">API docs</a> for details.</p><p>The downside of this approach is that there is an extra I/O step if you do use the file watcher (the web server has to write the image when it is moved out of the watch folder). I am currently working on a modification to run the AI while the file is in memory, before writing it to disk.</p><h3 id="detection-event-ui">Detection Event UI</h3><p>The Detection Event page has been cleaned up and improved quite a bit since the first release of Last Watch. The screen now handles large 4k resolution images much better, which pairs nicely with the image compression improvements on the back end. You can also click on the Predictions badge to see all of the predictions from Deepstack, and you can filter through each one to see the type of object and the confidence.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/03/image-2.png" class="kg-image" alt loading="lazy" width="1223" height="984" srcset="https://kleypot.com/content/images/size/w600/2021/03/image-2.png 600w, https://kleypot.com/content/images/size/w1000/2021/03/image-2.png 1000w, https://kleypot.com/content/images/2021/03/image-2.png 1223w" sizes="(min-width: 1200px) 1200px"><figcaption>Detection Event Predictions</figcaption></figure><p>As before, the page shows a list of profiles which were triggered by the event and you can click each profile to highlight the matching predictions. 
It&apos;s also easier to see which objects were masked or filtered by the profile.</p><h2 id="upgrading-from-older-versions">Upgrading from Older Versions</h2><p>If you are upgrading from an older version of Last Watch, there are a few things to be aware of.</p><p>Most importantly, the .env file has many new additions. It is recommended that you start over again with one of the example .env files rather than keeping your existing file. Make sure to look over the new file and set your options as desired.</p><figure class="kg-card kg-code-card"><pre><code># Back up your old env file
$ cp .env .env.old

# Copy in a new example config and edit it
$ cp configs/.env.linux .env
$ nano .env</code></pre><figcaption>Recreating the .env file</figcaption></figure><p>Other than the changes to the env file, you can follow the normal <a href="https://github.com/akmolina28/last-watch-ai/blob/master/README.md#upgrade-from-source">upgrade steps</a>.</p><p>There is one breaking change to be aware of. Because of the changes to how image files are stored, your old detection events will no longer show their images and will be missing thumbnails.</p><p>After upgrading, you should consider deleting everything from your watch folder. Keeping the watch folder clean will prevent the File Watcher from overutilizing CPU. Note that all new images will automatically be purged from the watch folder after they are posted to the application (unless you opt to disable automatic deletion).</p><h2 id="full-list-of-changes">Full list of changes</h2><ul><li>Ability to delete automation configurations.</li><li>Improve detection event UI with better discovery and filtering of predictions for each matched profile and better handling of large images up to 4k resolution.</li><li>The Watch Folder is no longer mounted in the web application. The web server now manages image files once they are processed into Last Watch.</li><li>Image files are automatically deleted according to the data retention policy configured in the .env file.</li><li>File Watcher performance and stability improvements. Added a new config flag to disable automatic deletion of images from the watch folder.</li><li>Implement worker queue priorities to ensure high priority tasks are run first.</li><li>Ability to designate automations to be run in high priority mode.</li><li>Implement Webhook API to POST image files to Last Watch. The Webhook will accept the image and return immediately, handling the AI and automations in background processes.</li><li>Compress images after the AI processing is finished to reduce file size. 
Use progressive encoding to help load images faster in web browsers.</li><li>Add config flag to set image compression quality or to disable compression completely.</li><li>Show image thumbnails on the detection events list.</li><li>Create example .env files for windows and linux to separate differences in how each OS should be configured.</li><li>Add config options to disable optional containers like the File Watcher and Deepstack.</li><li>Enabled automatic retries for queued jobs. Automations or tasks which fail or time out will be retried up to a maximum of 3 attempts. For example, if Last Watch times out while trying to send an image to telegram, it will try again after 90 seconds. If a job fails 3 times, it will be recorded as an error in the logs.</li><li>Several fixes for various errors that were logged by automations.</li></ul>]]></content:encoded></item><item><title><![CDATA[Last Watch AI - Blue Iris Walkthrough]]></title><description><![CDATA[In this post I will walk through setting up Last Watch AI to process motion events from Blue Iris. I will show how to feed motion alerts from BI into Last Watch, and how to trigger recordings in BI if certain objects are detected.]]></description><link>https://kleypot.com/last-watch-ai-blue-iris-integration/</link><guid isPermaLink="false">63052aeb3ecc781c55057ca4</guid><category><![CDATA[last-watch-ai]]></category><category><![CDATA[home-automation]]></category><category><![CDATA[home-security]]></category><category><![CDATA[blue-iris]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Fri, 29 Jan 2021 03:57:50 GMT</pubDate><media:content url="https://kleypot.com/content/images/2021/01/image-19.png" medium="image"/><content:encoded><![CDATA[<img src="https://kleypot.com/content/images/2021/01/image-19.png" alt="Last Watch AI - Blue Iris Walkthrough"><p>In this post I will walk through setting up Last Watch AI to process motion events from Blue Iris. 
I will show how to feed motion alerts from BI into Last Watch, and how to trigger recordings in BI if certain objects are detected. Finally, I will cover some advanced features in Last Watch to further refine your AI motion events.</p><p><em>For more about Last Watch &#x2013; <a href="https://kleypot.com/last-watch-ai-introduction/">Introducing: Last Watch AI</a></em></p><h2 id="windows-setup">Windows Setup</h2><p>This guide assumes that you already have Blue Iris set up on a Windows machine. You can install Last Watch on the same machine (simpler setup) or you can run Last Watch on a separate Linux server (better performance). If you want to use Linux, just skip ahead to the next section.</p><h3 id="1-install-docker-required-">1. Install Docker (required)</h3><p>The first step is to install Docker for Windows. Docker is a virtualization environment where Last Watch will be hosted.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.docker.com/docker-for-windows/install/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Install Docker Desktop on Windows</div><div class="kg-bookmark-description">How to install Docker Desktop for Windows</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.docker.com/favicons/docs@2x.ico" alt="Last Watch AI - Blue Iris Walkthrough"><span class="kg-bookmark-author">Docker Documentation</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.docker.com/favicons/docs@2x.ico" alt="Last Watch AI - Blue Iris Walkthrough"></div></a></figure><h3 id="2-install-docker-compose-required-">2. Install Docker Compose (required)</h3><p>Docker Compose is a tool for defining and running multi-container applications such as Last Watch. 
You will use Docker Compose to start and stop the application.</p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://docs.docker.com/compose/install/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Install Docker Compose</div><div class="kg-bookmark-description">How to install Docker Compose</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://docs.docker.com/favicons/docs@2x.ico" alt="Last Watch AI - Blue Iris Walkthrough"><span class="kg-bookmark-author">Docker Documentation</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://docs.docker.com/favicons/docs@2x.ico" alt="Last Watch AI - Blue Iris Walkthrough"></div></a></figure><h3 id="3-download-and-un-zip-last-watch-required-">3. Download and un-zip Last Watch (required)</h3><ol><li>Download the latest release zip (last-watch-ai-x.x.x.zip) from Github: <a href="https://github.com/akmolina28/last-watch-ai/releases">https://github.com/akmolina28/last-watch-ai/releases</a></li><li>Extract the zipped folder to the installation directory of your choice. Usually this would be somewhere on the main OS drive. In my case I will extract directly to the C drive. You should end up with a folder that looks like this:</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image-1.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1126" height="900" srcset="https://kleypot.com/content/images/size/w600/2021/01/image-1.png 600w, https://kleypot.com/content/images/size/w1000/2021/01/image-1.png 1000w, https://kleypot.com/content/images/2021/01/image-1.png 1126w" sizes="(min-width: 720px) 720px"><figcaption>Last Watch application folder</figcaption></figure><h3 id="4-configuration-required-">4. Configuration (required)</h3><p>Next, create your watch folder. 
This is where Last Watch will check for new image files, and where BI will send its motion alert snapshots. For this example I will create the folder <code>C:\aiinput</code>.</p><p>Now, navigate to the application folder that you just un-zipped and edit the file called <code>.env</code> using notepad or an editor of your choice. Set the <code>WATCH_FOLDER</code> to the path of the folder you created in this step. Familiarize yourself with other settings that you may wish to change in the future.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image-9.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="638" height="433" srcset="https://kleypot.com/content/images/size/w600/2021/01/image-9.png 600w, https://kleypot.com/content/images/2021/01/image-9.png 638w"><figcaption>.env configuration file</figcaption></figure><h3 id="5-start-the-app">5. Start the app</h3><p>Last Watch is started and stopped via the command line.</p><ol><li>Open a new command prompt (Win+R, then type &quot;cmd&quot;)</li></ol><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="399" height="206"><figcaption>Open a new command prompt</figcaption></figure><p>2. &#xA0; Change directory (cd) to the folder you extracted in step 3. This folder should contain a file called <code>docker-compose.yaml</code></p><p><code>&gt; cd c:/last-watch-ai</code></p><p>3. &#xA0; Run Docker Compose to bring up the containers</p><p><code>&gt; docker-compose up -d --build site</code></p><p>This command will take a few minutes to run the first time. Docker will pull images for all of the containers that make up Last Watch, including Deepstack. 
Once the containers are up you should see output that looks like this:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image--1-.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1207" height="391" srcset="https://kleypot.com/content/images/size/w600/2021/01/image--1-.png 600w, https://kleypot.com/content/images/size/w1000/2021/01/image--1-.png 1000w, https://kleypot.com/content/images/2021/01/image--1-.png 1207w" sizes="(min-width: 720px) 720px"><figcaption>Start the containers</figcaption></figure><p>4. &#xA0; Open a browser and navigate to http://localhost:8080</p><p>You should be greeted with an empty landing page that looks like this:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image--3-.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1193" height="690" srcset="https://kleypot.com/content/images/size/w600/2021/01/image--3-.png 600w, https://kleypot.com/content/images/size/w1000/2021/01/image--3-.png 1000w, https://kleypot.com/content/images/2021/01/image--3-.png 1193w" sizes="(min-width: 720px) 720px"><figcaption>Last Watch AI landing page</figcaption></figure><p>At this point, Last Watch is running and is ready to start processing image files. You can test by dropping an image into the watch folder and checking the Detection Events feed for the new event. Otherwise, you can skip down to the Blue Iris setup below.</p><h2 id="ubuntu-setup">Ubuntu Setup</h2><p>Linux is the preferred environment for Last Watch, but the setup is a bit less convenient for most Blue Iris users. Not only do you need to install Last Watch, but you also need to set up a share for Blue Iris to write images to. In this example I will show how I set up Last Watch on my own Ubuntu server. This is how I run my own setup at home.</p><p>This guide assumes a fresh install of Ubuntu. 
Some of these steps may not be required if you are working from an existing server.</p><h3 id="1-create-a-sudo-user">1. Create a Sudo User</h3><p>On a fresh Ubuntu install, the first task is to set up a sudo user so that you aren&apos;t running as root.</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">&gt; apt-get update

&gt; apt-get upgrade

&gt; adduser lastwatch

&gt; usermod -aG sudo lastwatch

&gt; su - lastwatch</code></pre><figcaption>Create user lastwatch and switch to that user</figcaption></figure><p>Now you are logged in as a non-root user and can continue with the rest of the install.</p><h3 id="2-install-docker-and-docker-compose">2. Install Docker and Docker Compose</h3><p>Last Watch runs in docker containers so this dependency needs to be set up first. Rather than copy/paste the commands from the Docker docs, I&apos;ll just link them here:</p><ol><li><strong><strong>Install Docker Engine</strong></strong> - <a href="https://docs.docker.com/engine/install/ubuntu/">https://docs.docker.com/engine/install/ubuntu/</a></li><li><strong><strong>Install Docker Compose</strong></strong> - <a href="https://docs.docker.com/compose/install/">https://docs.docker.com/compose/install/</a></li></ol><h3 id="3-clone-last-watch-source-code">3. &#xA0;Clone Last Watch source code</h3><p>Clone latest source code for Last Watch to your home directory. This will create a directory in your home called last-watch-ai.</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">&gt; cd ~

&gt; sudo apt install git

&gt; git clone https://github.com/akmolina28/last-watch-ai.git
</code></pre><figcaption>Install git and pull the source code</figcaption></figure><h3 id="4-configuration-important-">4. &#xA0;Configuration (important)</h3><p>Next, set up a directory for the watch folder. We will configure Last Watch to watch for new images in this directory.</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">&gt; mkdir ~/aiinput</code></pre><figcaption>Make the watch folder</figcaption></figure><p>Now, navigate to the application directory and set up the configuration file.</p><pre><code class="language-BASH">&gt; cd ~/last-watch-ai

&gt; cp .env.example .env

&gt; nano .env</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image-6.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="823" height="479" srcset="https://kleypot.com/content/images/size/w600/2021/01/image-6.png 600w, https://kleypot.com/content/images/2021/01/image-6.png 823w" sizes="(min-width: 720px) 720px"><figcaption>.env config file</figcaption></figure><p>Set <code>WATCH_FOLDER</code> to the path of the directory we just created, and set <code>WEB_INTERFACE_URL</code> to the address of your server. Check the rest of the settings and change them as desired.</p><h3 id="5-build-the-application">5. &#xA0;Build the application</h3><p>Now that everything is set up, you can build the app using the command below. This command will install dependencies, set up the database, and compile the code for use. Pulling in dependencies may take a few minutes.</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">&gt; cp src/.env.example src/.env &amp;&amp;
sudo docker-compose up -d mysql &amp;&amp;
sudo docker-compose run --rm composer install --optimize-autoloader --no-dev &amp;&amp;
sudo docker-compose run --rm artisan route:cache &amp;&amp;
sudo docker-compose run --rm artisan key:generate --force &amp;&amp;
sudo docker-compose run --rm artisan storage:link &amp;&amp;
sudo docker-compose run --rm artisan migrate --force &amp;&amp;
sudo docker-compose run --rm npm install --verbose &amp;&amp;
sudo docker-compose run --rm npm run prod --verbose</code></pre><figcaption>Build the application</figcaption></figure><h3 id="6-start-the-app">6. &#xA0;Start the app</h3><p>Finally, bring up the application by starting the containers. Again, it may take a few minutes to pull in dependencies.</p><figure class="kg-card kg-code-card"><pre><code class="language-BASH">sudo docker-compose up -d --build site</code></pre><figcaption>Start the application</figcaption></figure><p>Now you can confirm that the app is up by using a web browser to navigate to the IP and Port configured on the server.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image-5.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="700" height="415" srcset="https://kleypot.com/content/images/size/w600/2021/01/image-5.png 600w, https://kleypot.com/content/images/2021/01/image-5.png 700w"><figcaption>Last Watch AI landing page</figcaption></figure><h3 id="7-make-the-watch-folder-shareable">7. Make the watch folder shareable</h3><p>The last step is to make the watch folder accessible to your Blue Iris server. Here is how you can expose the folder as a Samba share.</p><p>1. Install Samba.</p><pre><code class="language-BASH">&gt; sudo apt install samba</code></pre><p>2. Set up a username and password.</p><pre><code class="language-BASH">&gt; sudo smbpasswd -a lastwatch</code></pre><p>3. Edit the Samba configuration to set up the watch folder as a share.</p><pre><code class="language-BASH">&gt; sudo nano /etc/samba/smb.conf</code></pre><p>At the very end of the file, add the following configuration for the watch folder and user set up in the previous steps:</p><pre><code class="language-.conf">[aiinput]
comment = Last Watch AI input
path = /home/lastwatch/aiinput
valid users = lastwatch
browsable = yes
read only = no
guest ok = no</code></pre><p>4. Finally, restart the Samba service to apply the settings.</p><pre><code class="language-BASH">&gt; sudo service smbd restart</code></pre><h3 id="8-map-the-samba-share-on-your-blue-iris-pc">8. Map the Samba share on your Blue Iris PC</h3><p>Now you can map the Samba share as a network drive on your Blue Iris PC. This will allow you to save the credentials in Windows and use the share as an output folder in Blue Iris.</p><p>Open a new Explorer window, right click on This PC and choose Map network drive.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image-7.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="401" height="478"><figcaption>Map a new network drive</figcaption></figure><p>Enter the IP and name of your share as shown below. Make sure to check Reconnect at sign-in and Connect using different credentials. This will ensure that BI always has access to the folder.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2021/01/image-8.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="614" height="454" srcset="https://kleypot.com/content/images/size/w600/2021/01/image-8.png 600w, https://kleypot.com/content/images/2021/01/image-8.png 614w"><figcaption>Set up the network drive</figcaption></figure><p>Click finish and enter the credentials you used to set up Samba. 
Make sure to check Remember my credentials.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-4.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1060" height="873" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-4.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-4.png 1000w, https://kleypot.com/content/images/2020/12/image-4.png 1060w" sizes="(min-width: 720px) 720px"><figcaption>Enter credentials set up in Ubuntu</figcaption></figure><p>Now you can start configuring Blue Iris.</p><h2 id="blue-iris-setup">Blue Iris Setup</h2><p>At this point we have installed Last Watch either on the Windows PC or on a separate Linux server. Now, I will walk through setting up a camera in BI to leverage the AI detection in Last Watch. The goal is to set up our camera so that it only records clips when a motion event is set off by a relevant object.</p><p><em>This part of the guide assumes you already have a camera set up in BI. </em></p><p>First, we need to set up the watch folder as an output folder in Blue Iris. In the main settings menu, go to Clips and archiving.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-5.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1194" height="1167" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-5.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-5.png 1000w, https://kleypot.com/content/images/2020/12/image-5.png 1194w" sizes="(min-width: 720px) 720px"><figcaption>Blue Iris archive settings</figcaption></figure><p>Choose one of your unused Aux folders and rename it to <code>aiinput</code>. Set the path to your watch folder. If you installed on Windows your path would be something like &quot;C:\aiinput&quot;. 
If you installed on Linux, provide the path to the Samba share.</p><p>Next, open the settings for your camera and open the Trigger tab. Tick the box for Motion sensor. Configure the motion sensor settings as desired. Set the break time to 30 seconds.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-6.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1089" height="1162" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-6.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-6.png 1000w, https://kleypot.com/content/images/2020/12/image-6.png 1089w" sizes="(min-width: 720px) 720px"><figcaption>Camera trigger settings</figcaption></figure><p>Next, go to the Record tab and set the camera to save a JPEG snapshot to the aiinput folder when triggered.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-8.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1089" height="1162" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-8.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-8.png 1000w, https://kleypot.com/content/images/2020/12/image-8.png 1089w" sizes="(min-width: 720px) 720px"><figcaption>Camera record settings</figcaption></figure><p>The settings above tell the camera to trigger whenever there is motion. The trigger will last 30 seconds, or longer if there is more motion. During the trigger, BI will save snapshots to the aiinput folder every 5 seconds. Note that the camera is <em>not</em> set to record video when triggered &#x2013; this is because I want Last Watch to decide if a recording should start. </p><p>If everything is working, you should see motion alert snapshots feeding into Last Watch. 
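</p><p>If no events show up, you can sanity-check the pipeline by copying any JPEG into the watch folder by hand, e.g. from a command prompt on the Windows setup above (the filename here is just an example &#x2013; any image will do):</p><figure class="kg-card kg-code-card"><pre><code>&gt; copy test.jpg C:\aiinput\</code></pre><figcaption>Manually dropping an image into the watch folder</figcaption></figure><p>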
Note that the events will not be Relevant because we have not set up a detection profile yet.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-9.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1583" height="742" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-9.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-9.png 1000w, https://kleypot.com/content/images/2020/12/image-9.png 1583w" sizes="(min-width: 1200px) 1200px"><figcaption>Blue Iris snapshots in Last Watch</figcaption></figure><p>Before we set up the detection profile in Last Watch, we need to set up one more thing in BI. Right now, our camera is set to trigger on motion and create snapshots. But we also need a way to trigger a video recording from Last Watch.</p><p>The only way to do this currently in BI is to create a clone of the camera with its own trigger and record settings. You can do this easily by adding a new camera and copying the settings from your existing camera.</p><p><em>Note: some cameras provide multiple streams &#x2013; usually a full HD stream and a lower resolution sub-stream. If your camera supports sub-streams, you don&apos;t need a clone! 
Just use the sub-stream for your snapshot camera, and the main stream for your video recordings.</em></p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-10.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1089" height="1162" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-10.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-10.png 1000w, https://kleypot.com/content/images/2020/12/image-10.png 1089w" sizes="(min-width: 720px) 720px"><figcaption>Motion sensor disabled for clone</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-11.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1089" height="1162" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-11.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-11.png 1000w, https://kleypot.com/content/images/2020/12/image-11.png 1089w" sizes="(min-width: 720px) 720px"><figcaption>Record video when triggered</figcaption></figure><p>The clone has motion disabled and is set to record video when triggered. The trigger in this case will come from Last Watch. I also added a 10 second buffer to account for some delay in the AI detection.</p><h3 id="last-watch-automation">Last Watch Automation</h3><p>At this point, BI is set up to feed motion alerts into Last Watch, and BI has a camera clone that is ready to be triggered by Last Watch. The final step is to set up a Detection Profile and configure the Web Request Automation in Last Watch.</p><p>Last Watch will trigger recordings using a Web Request. Go to the Web Request Automations page to set up a new automation to trigger your camera. You can use the trigger URL below as a template, filling in the details of your BI setup. 
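</p><p>For reference, a Blue Iris trigger URL generally takes this form (a template only &#x2013; the placeholders in caps stand in for your BI server address and port, the short name of the clone camera, and a BI user with admin access):</p><figure class="kg-card kg-code-card"><pre><code>http://BI-SERVER:81/admin?camera=CLONE-SHORT-NAME&amp;trigger&amp;user=USERNAME&amp;pw=PASSWORD</code></pre><figcaption>Trigger URL template</figcaption></figure><p>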
Note that you must enable the web server in BI for this to work.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-12.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1634" height="355" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-12.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-12.png 1000w, https://kleypot.com/content/images/size/w1600/2020/12/image-12.png 1600w, https://kleypot.com/content/images/2020/12/image-12.png 1634w" sizes="(min-width: 1200px) 1200px"><figcaption>Web Request automation</figcaption></figure><p>Now we can create a Detection Profile to watch for images from our camera. Enter the camera name as the file pattern and select the types of objects that should trigger recordings. The rest of the profile settings aren&apos;t important right now.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-13.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1368" height="918" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-13.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-13.png 1000w, https://kleypot.com/content/images/2020/12/image-13.png 1368w" sizes="(min-width: 720px) 720px"><figcaption>New detection profile</figcaption></figure><p>After you save the profile, subscribe it to the Web Request automation.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-14.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1296" height="603" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-14.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-14.png 1000w, https://kleypot.com/content/images/2020/12/image-14.png 1296w" 
sizes="(min-width: 720px) 720px"><figcaption>Subscribe to the automation</figcaption></figure><p>Now you can test the whole end-to-end integration by walking around in front of your camera. You should start to see Relevant Detection Events rolling in.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-15.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1482" height="603" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-15.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-15.png 1000w, https://kleypot.com/content/images/2020/12/image-15.png 1482w" sizes="(min-width: 1200px) 1200px"><figcaption>Relevant garage events</figcaption></figure><p>You can click on each event to open up the details and see the results of the detection and check if the automation ran.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-18.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1464" height="817" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-18.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-18.png 1000w, https://kleypot.com/content/images/2020/12/image-18.png 1464w" sizes="(min-width: 1200px) 1200px"><figcaption>Detection event details</figcaption></figure><p>You should also see clips populating in Blue Iris.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-19.png" class="kg-image" alt="Last Watch AI - Blue Iris Walkthrough" loading="lazy" width="1958" height="1050" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-19.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-19.png 1000w, https://kleypot.com/content/images/size/w1600/2020/12/image-19.png 
1600w, https://kleypot.com/content/images/2020/12/image-19.png 1958w" sizes="(min-width: 1200px) 1200px"><figcaption>Blue Iris video capture</figcaption></figure><p>You will want to check the clips to see how well they line up with the detection events. You may need to adjust your pre-record buffer if the recording starts too late or too soon.</p><p>The integration is now working, but over time you will want to optimize some settings:</p><ul><li>Adjust the motion tolerance in BI to minimize the number of irrelevant events in LW</li><li>Adjust the pre-record buffer, trigger duration, and snapshot interval to improve the precision of your recordings</li><li>Adjust the AI confidence to reduce false positives</li></ul><p>Don&apos;t forget to explore the other Automations available in Last Watch. You can set up Telegram to send images as notifications to your phone. If you are into Home Automation, you can use Mqtt or Web Requests to set up custom sensors and much more.</p><p>If you encounter any issues, please check the <a href="https://github.com/akmolina28/last-watch-ai/issues">issues </a>page on Github and create a new issue if you can&apos;t resolve it on your own. </p><p>Thanks for reading!</p><!--kg-card-begin: html--><script type="text/javascript" src="https://cdnjs.buymeacoffee.com/1.0.0/button.prod.min.js" data-name="bmc-button" data-slug="akmolina28" data-color="#5F7FFF" data-emoji="&#x1F37A;" data-font="Cookie" data-text="Buy me a beer" data-outline-color="#000000" data-font-color="#ffffff" data-coffee-color="#FFDD00"></script><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[Car Presence Sensor with Home Assistant and Last Watch AI]]></title><description><![CDATA[<p>This guide will show how to set up a vehicle presence sensor in Home Assistant using some key new features in Last Watch. In this example, I have a camera set up on my driveway running on Blue Iris. 
By using two detection profiles in Last Watch, I was able</p>]]></description><link>https://kleypot.com/vehicle-presence-sensor-with-home-assistant-and-last-watch-ai/</link><guid isPermaLink="false">63052aeb3ecc781c55057ca0</guid><category><![CDATA[home-assistant]]></category><category><![CDATA[home-security]]></category><category><![CDATA[home-automation]]></category><category><![CDATA[last-watch-ai]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Wed, 30 Dec 2020 04:26:00 GMT</pubDate><media:content url="https://kleypot.com/content/images/2020/11/vehicle-presence.png" medium="image"/><content:encoded><![CDATA[<img src="https://kleypot.com/content/images/2020/11/vehicle-presence.png" alt="Car Presence Sensor with Home Assistant and Last Watch AI"><p>This guide will show how to set up a vehicle presence sensor in Home Assistant using some key new features in Last Watch. In this example, I have a camera set up on my driveway running on Blue Iris. By using two detection profiles in Last Watch, I was able to create a sensor in Home Assistant that turns on or off whenever a car is present in the driveway.</p><p><em>For more on setting up Last Watch, see <a href="https://kleypot.com/last-watch-ai-introduction/">here</a>.</em></p><h2 id="automation-setup">Automation Setup</h2><p>First, we will set up some automations to interact with the Home Assistant API. If you have not enabled the API in home assistant, make sure you add this line to your configuration.yaml:</p><figure class="kg-card kg-code-card"><pre><code class="language-YAML"># Example configuration.yaml entry
api:</code></pre><figcaption>https://www.home-assistant.io/integrations/api/</figcaption></figure><p>You will also need to set up a new <a href="https://developers.home-assistant.io/docs/auth_api/#long-lived-access-token">long-lived access token</a> for Last Watch. The token will allow Last Watch to authenticate and create a sensor via the API. Using this token, we can set up two HTTP automations in Last Watch: one to turn the sensor on, and another to turn the sensor off.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="2000" height="1081" srcset="https://kleypot.com/content/images/size/w600/2020/11/image.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image.png 1000w, https://kleypot.com/content/images/size/w1600/2020/11/image.png 1600w, https://kleypot.com/content/images/2020/11/image.png 2009w" sizes="(min-width: 1200px) 1200px"><figcaption>Automation to turn on new sensor</figcaption></figure><p>In the example above, you need to change &quot;ABCDEFGH&quot; to the access token you created in Home Assistant. You also need to make sure the URL points to your HA instance. When triggered, this automation will create an entity called <code>sensor.driveway_vehicle_presence</code> if it does not exist, and it will set the state to <code>on</code>.</p><p>Next, create another automation which turns the sensor off. It will be exactly the same as the first automation but the state in the Body should be set to <code>off</code>.</p><p>Now we can move on to setting up the profiles.</p><h2 id="detection-profiles">Detection Profiles</h2><p>The first profile will turn the HA sensor on whenever a car is detected in the driveway. This profile is very straightforward if you have set up profiles before. 
You will need to set the file pattern to match the image files generated by your NVR software.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-2.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1342" height="938" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-2.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-2.png 1000w, https://kleypot.com/content/images/2020/11/image-2.png 1342w" sizes="(min-width: 720px) 720px"><figcaption>Detect cars on the driveway camera</figcaption></figure><p>You may also want to use a detection mask if there are areas where other cars might be visible, such as the road or neighboring property. Once the profile is saved, make sure you subscribe it to the automation which turns on the sensor:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-3.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1401" height="514" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-3.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-3.png 1000w, https://kleypot.com/content/images/2020/11/image-3.png 1401w" sizes="(min-width: 720px) 720px"><figcaption>Subscribe to the http automation</figcaption></figure><p>Finally, create the second profile to turn the sensor off. It should be identical to the first profile, except we will tick the Negative switch. 
This is a new feature in Last Watch 0.6.0 which will trigger the profile automations only if <em>none</em> of the relevant objects are detected on camera.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-4.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1396" height="1041" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-4.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-4.png 1000w, https://kleypot.com/content/images/2020/11/image-4.png 1396w" sizes="(min-width: 720px) 720px"><figcaption>Negative Relevance profile</figcaption></figure><p>Save the profile and subscribe it to the automation which turns off the sensor:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-5.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1118" height="523" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-5.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-5.png 1000w, https://kleypot.com/content/images/2020/11/image-5.png 1118w" sizes="(min-width: 720px) 720px"><figcaption>Subscribe to the http automation</figcaption></figure><p>The setup is now complete and the automations should begin to trigger.</p><h2 id="demo">Demo</h2><p>Here is a look at how this is working in my setup.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-6.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="2000" height="931" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-6.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-6.png 1000w, https://kleypot.com/content/images/size/w1600/2020/11/image-6.png 1600w, 
https://kleypot.com/content/images/2020/11/image-6.png 2066w" sizes="(min-width: 1200px) 1200px"><figcaption>Detection Profiles</figcaption></figure><p>I have the two profiles, Driveway Car Presence and Driveway Car No Presence set up exactly as outlined in this guide. Note that these profiles are distinct from the Driveway Vehicle profile, which handles other automations when there is vehicle <em>motion</em>.</p><p>When a car pulls into the driveway, the motion trigger in Blue Iris will feed a snapshot into Last Watch to run the AI and handle the automations.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-7.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1851" height="1163" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-7.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-7.png 1000w, https://kleypot.com/content/images/size/w1600/2020/11/image-7.png 1600w, https://kleypot.com/content/images/2020/11/image-7.png 1851w" sizes="(min-width: 1200px) 1200px"><figcaption>Relevant Detection Event</figcaption></figure><p>In this example you can see how I have set up the detection mask (the red transparent layer) to filter out objects in my neighbor&apos;s driveway and in the street. You can also see that one automation was run, which should be the API for setting the sensor. 
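</p><p>You can also check the sensor from the command line using the Home Assistant REST API. This is a minimal sketch with placeholder values, so substitute your HA address and the long-lived access token you created earlier:</p><pre><code class="language-BASH"># Placeholders -- use your own HA address and access token
HA_URL="http://192.168.1.53:8123"
TOKEN="ABCDEFGH"
STATE_URL="${HA_URL}/api/states/sensor.driveway_vehicle_presence"

# Print the endpoint for reference, then query the current sensor state;
# the response is a JSON document whose "state" field should be "on"
# (the Last Watch automation POSTs {"state": "on"} to this same endpoint)
echo "$STATE_URL"
curl -s --max-time 5 -H "Authorization: Bearer ${TOKEN}" "$STATE_URL" || true</code></pre><p>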
We can check the states page in Home Assistant to confirm that the new entity is created.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-8.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1326" height="778" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-8.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-8.png 1000w, https://kleypot.com/content/images/2020/11/image-8.png 1326w" sizes="(min-width: 720px) 720px"><figcaption>Home Assistant sensor created</figcaption></figure><p>Note that the sensor will stay on until it is set otherwise. There is no timeout or any other way to turn this sensor off. This is why we need a Negative profile to clear the sensor.</p><p>When the vehicle pulls out of the driveway, motion will trigger another event:</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-11.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1839" height="1158" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-11.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-11.png 1000w, https://kleypot.com/content/images/size/w1600/2020/11/image-11.png 1600w, https://kleypot.com/content/images/2020/11/image-11.png 1839w" sizes="(min-width: 1200px) 1200px"><figcaption>Non-relevant detection event</figcaption></figure><p>Now the vehicle has entered the masked area on the street and the driveway is clear. In this case the event is <em>not</em> relevant, but we can see that one automation was run. 
This is because of the Negative Relevance profile which is set up to turn <em>off</em> the sensor in HA when no cars are present.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-10.png" class="kg-image" alt="Car Presence Sensor with Home Assistant and Last Watch AI" loading="lazy" width="1263" height="752" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-10.png 600w, https://kleypot.com/content/images/size/w1000/2020/11/image-10.png 1000w, https://kleypot.com/content/images/2020/11/image-10.png 1263w" sizes="(min-width: 720px) 720px"><figcaption>Sensor turns off</figcaption></figure><h2 id="conclusion">Conclusion</h2><p>In this guide we saw how the latest features in Last Watch can be used to set up a simple but very precise presence sensor in Home Assistant. This example would also obviously work for other object classes such as trucks, boats, or people. You could even create individual profiles to mask out different zones, like the left and right side of a two-car garage.</p><p>Once you have the sensor set up in HA you can display it or use it to trigger any of your automations. My automations are set up to play a chime and show a snapshot on my wall kiosk when a car pulls into the driveway. If I&apos;m not home, it gets sent to my phone instead.</p><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[Last Watch AI - Extend Automations with Node-RED]]></title><description><![CDATA[<p><em><a href="https://github.com/akmolina28/last-watch-ai/">Last Watch</a> is a standalone application for creating if-then automations based on AI object detection.</em></p><p>Last Watch comes with a few automations out of the box which allow you to do simple things like send a Telegram photo message if a person is detected on camera. 
The list of integrations</p>]]></description><link>https://kleypot.com/last-watch-ai-extend-automations-with-node-red/</link><guid isPermaLink="false">63052aeb3ecc781c55057ca3</guid><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Wed, 30 Dec 2020 04:25:29 GMT</pubDate><content:encoded><![CDATA[<p><em><a href="https://github.com/akmolina28/last-watch-ai/">Last Watch</a> is a standalone application for creating if-then automations based on AI object detection.</em></p><p>Last Watch comes with a few automations out of the box which allow you to do simple things like send a Telegram photo message if a person is detected on camera. The list of integrations continues to grow (MQTT is now available!), but the automation options are still very basic. </p><p>What if you want more advanced options, like turning on a sensor when a car is stationary, and a different sensor when a car is in motion? While you <em>could</em> create different Detection Profiles for each scenario, this can become very tedious with dozens of profiles to manage in LW. Instead, why not offload the automation logic to a real automation platform like Home Assistant or Node-RED?</p><p>As of version 0.8.0, Last Watch allows you to do just that. Using the MQTT automation in LW, you can publish entire detection events to be used in any platform you want. Node-RED is perfect in this case for managing automations and allows you to build new and interesting profiles that are not possible natively in Last Watch. In this post I will demonstrate how to set this up and begin building automations.</p><h2 id="setting-up-mqtt">Setting up MQTT</h2><p><em>I am running Home Assistant with the <a href="https://github.com/hassio-addons/addon-node-red">Node-RED</a> and <a href="https://github.com/home-assistant/addons/blob/master/mosquitto/DOCS.md">MQTT Broker</a> addons.</em></p><p>In this demo I am going to send events from Last Watch to Node-RED using MQTT. 
First, I need to define an Automation config in Last Watch for my MQTT topic.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-20.png" class="kg-image" alt loading="lazy" width="1732" height="1164" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-20.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-20.png 1000w, https://kleypot.com/content/images/size/w1600/2020/12/image-20.png 1600w, https://kleypot.com/content/images/2020/12/image-20.png 1732w" sizes="(min-width: 1200px) 1200px"><figcaption>Last Watch - MQTT Publish Config</figcaption></figure><p>Here I entered the IP of my Home Assistant server and the default MQTT port. The Topic and Client ID can be set to anything you want. Username and Password must be set unless you enabled Anonymous authentication in the MQTT add-on. Finally, make sure to <em>not</em> tick Custom Payload. We want Last Watch to automatically generate the event payload.</p><p>Now we can save the config and create a new Detection Profile to trigger the messages.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-23.png" class="kg-image" alt loading="lazy" width="1097" height="950" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-23.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-23.png 1000w, https://kleypot.com/content/images/2020/12/image-23.png 1097w" sizes="(min-width: 720px) 720px"><figcaption>Last Watch - New Detection Profile</figcaption></figure><p>Since I am handling all of my automation in Node-RED, I&apos;ll create a profile to capture every type of object with a very low level of confidence. </p><p>Now I can save my profile and subscribe it to my MQTT automation. 
Messages should then start flowing to the broker.</p><h2 id="node-red">Node-RED</h2><p>Next, we will set up an MQTT listener in Node-RED.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-25.png" class="kg-image" alt loading="lazy" width="1224" height="287" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-25.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-25.png 1000w, https://kleypot.com/content/images/2020/12/image-25.png 1224w" sizes="(min-width: 720px) 720px"><figcaption>MQTT listener in Node-RED</figcaption></figure><p>I&apos;ll start by connecting an mqtt-in node to a debug node and setting the topic to match the string configured in Last Watch. We also need a json node in the middle to de-serialize the payload into a usable object. Detection Events should begin appearing in the debug window.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/Untitled.png" class="kg-image" alt loading="lazy" width="1046" height="1013" srcset="https://kleypot.com/content/images/size/w600/2020/12/Untitled.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/Untitled.png 1000w, https://kleypot.com/content/images/2020/12/Untitled.png 1046w" sizes="(min-width: 720px) 720px"><figcaption>MQTT Payload</figcaption></figure><p>The payload has three properties. Two contain metadata about the event and the detection profile, and the third is an array of predictions. This array tells us what kinds of objects were detected, how confident the AI was, and whether or not the object was masked or filtered.</p><p>At this point we can begin building automations based on this message structure. 
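</p><p>As a rough illustration, a payload might look like the following. Only the prediction fields (<code>object_class</code>, <code>is_masked</code>, <code>is_smart_filtered</code>) match the real message; the metadata keys and the confidence field name shown here are placeholders:</p><pre><code class="language-JSON">{
  "detection_event": { "note": "event metadata (id, image file, timestamp)" },
  "detection_profile": { "note": "profile metadata (name, settings)" },
  "predictions": [
    { "object_class": "car", "confidence": 0.93, "is_masked": 0, "is_smart_filtered": 0 },
    { "object_class": "person", "confidence": 0.88, "is_masked": 1, "is_smart_filtered": 0 }
  ]
}</code></pre><p>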
I will use a split node to turn the prediction array into a stream of messages, and handle each prediction separately.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/12/image-28.png" class="kg-image" alt loading="lazy" width="1556" height="624" srcset="https://kleypot.com/content/images/size/w600/2020/12/image-28.png 600w, https://kleypot.com/content/images/size/w1000/2020/12/image-28.png 1000w, https://kleypot.com/content/images/2020/12/image-28.png 1556w" sizes="(min-width: 1200px) 1200px"><figcaption>Node-RED - Advanced Automation</figcaption></figure><p>A switch node is used to split out different object classes so I have different paths for people, cars, trucks, etc. I am also using switch nodes to check if objects are masked or filtered (the smart filter flags objects which have not moved).</p><p>Now I can hook each path into whatever automation I want. I can trigger scripts or turn on sensors in Home Assistant, send push notifications, control smart home devices, etc. I could also add additional logic and filtering based on things like AI confidence, object size/position, proximity of objects or total number of objects. 
Some of this is possible natively in Last Watch, but it is much easier to manage and debug using the visual editor in Node-RED.</p><figure class="kg-card kg-code-card"><pre><code class="language-JSON">[{&quot;id&quot;:&quot;87975ec6.d3688&quot;,&quot;type&quot;:&quot;mqtt in&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;&quot;,&quot;topic&quot;:&quot;mqtt/lastwatch/driveway&quot;,&quot;qos&quot;:&quot;0&quot;,&quot;datatype&quot;:&quot;auto&quot;,&quot;broker&quot;:&quot;e7767a8a.ec2a28&quot;,&quot;x&quot;:940,&quot;y&quot;:260,&quot;wires&quot;:[[&quot;2464c0b4.008c8&quot;]]},{&quot;id&quot;:&quot;2464c0b4.008c8&quot;,&quot;type&quot;:&quot;json&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Parse Json&quot;,&quot;property&quot;:&quot;payload&quot;,&quot;action&quot;:&quot;&quot;,&quot;pretty&quot;:false,&quot;x&quot;:1140,&quot;y&quot;:260,&quot;wires&quot;:[[&quot;dc32e8ef.1c3c88&quot;]]},{&quot;id&quot;:&quot;dc32e8ef.1c3c88&quot;,&quot;type&quot;:&quot;change&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Get 
Predictions&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;payload&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;payload.predictions&quot;,&quot;tot&quot;:&quot;msg&quot;}],&quot;action&quot;:&quot;&quot;,&quot;property&quot;:&quot;&quot;,&quot;from&quot;:&quot;&quot;,&quot;to&quot;:&quot;&quot;,&quot;reg&quot;:false,&quot;x&quot;:1310,&quot;y&quot;:260,&quot;wires&quot;:[[&quot;5f3453aa.b97d4c&quot;]]},{&quot;id&quot;:&quot;5f3453aa.b97d4c&quot;,&quot;type&quot;:&quot;split&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;&quot;,&quot;splt&quot;:&quot;\\n&quot;,&quot;spltType&quot;:&quot;str&quot;,&quot;arraySplt&quot;:1,&quot;arraySpltType&quot;:&quot;len&quot;,&quot;stream&quot;:false,&quot;addname&quot;:&quot;&quot;,&quot;x&quot;:1470,&quot;y&quot;:260,&quot;wires&quot;:[[&quot;e1cf19e1.e60798&quot;]]},{&quot;id&quot;:&quot;e1cf19e1.e60798&quot;,&quot;type&quot;:&quot;switch&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Object Class&quot;,&quot;property&quot;:&quot;payload.object_class&quot;,&quot;propertyType&quot;:&quot;msg&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;eq&quot;,&quot;v&quot;:&quot;car&quot;,&quot;vt&quot;:&quot;str&quot;},{&quot;t&quot;:&quot;eq&quot;,&quot;v&quot;:&quot;truck&quot;,&quot;vt&quot;:&quot;str&quot;},{&quot;t&quot;:&quot;eq&quot;,&quot;v&quot;:&quot;person&quot;,&quot;vt&quot;:&quot;str&quot;}],&quot;checkall&quot;:&quot;true&quot;,&quot;repair&quot;:false,&quot;outputs&quot;:3,&quot;x&quot;:910,&quot;y&quot;:460,&quot;wires&quot;:[[&quot;e7159af6.13ab38&quot;],[&quot;3a6e3c56.c56164&quot;],[&quot;beb71c4d.ed465&quot;]]},{&quot;id&quot;:&quot;beb71c4d.ed465&quot;,&quot;type&quot;:&quot;debug&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Person in 
driveway!&quot;,&quot;active&quot;:true,&quot;tosidebar&quot;:true,&quot;console&quot;:false,&quot;tostatus&quot;:false,&quot;complete&quot;:&quot;payload&quot;,&quot;targetType&quot;:&quot;msg&quot;,&quot;statusVal&quot;:&quot;&quot;,&quot;statusType&quot;:&quot;auto&quot;,&quot;x&quot;:1100,&quot;y&quot;:520,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;e7159af6.13ab38&quot;,&quot;type&quot;:&quot;switch&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Not Masked&quot;,&quot;property&quot;:&quot;payload.is_masked&quot;,&quot;propertyType&quot;:&quot;msg&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;eq&quot;,&quot;v&quot;:&quot;0&quot;,&quot;vt&quot;:&quot;num&quot;}],&quot;checkall&quot;:&quot;true&quot;,&quot;repair&quot;:false,&quot;outputs&quot;:1,&quot;x&quot;:1080,&quot;y&quot;:400,&quot;wires&quot;:[[&quot;d4b0f034.d300e&quot;]]},{&quot;id&quot;:&quot;3a6e3c56.c56164&quot;,&quot;type&quot;:&quot;switch&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Not Masked&quot;,&quot;property&quot;:&quot;payload.is_masked&quot;,&quot;propertyType&quot;:&quot;msg&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;eq&quot;,&quot;v&quot;:&quot;0&quot;,&quot;vt&quot;:&quot;num&quot;}],&quot;checkall&quot;:&quot;true&quot;,&quot;repair&quot;:false,&quot;outputs&quot;:1,&quot;x&quot;:1080,&quot;y&quot;:460,&quot;wires&quot;:[[&quot;c3f2847d.23f778&quot;]]},{&quot;id&quot;:&quot;4be162e5.fa3cbc&quot;,&quot;type&quot;:&quot;debug&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Car moving in 
driveway!&quot;,&quot;active&quot;:true,&quot;tosidebar&quot;:true,&quot;console&quot;:false,&quot;tostatus&quot;:false,&quot;complete&quot;:&quot;payload&quot;,&quot;targetType&quot;:&quot;msg&quot;,&quot;statusVal&quot;:&quot;&quot;,&quot;statusType&quot;:&quot;auto&quot;,&quot;x&quot;:1500,&quot;y&quot;:420,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;bef2ccdc.a5b72&quot;,&quot;type&quot;:&quot;debug&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Car parked in driveway!&quot;,&quot;active&quot;:true,&quot;tosidebar&quot;:true,&quot;console&quot;:false,&quot;tostatus&quot;:false,&quot;complete&quot;:&quot;payload&quot;,&quot;targetType&quot;:&quot;msg&quot;,&quot;statusVal&quot;:&quot;&quot;,&quot;statusType&quot;:&quot;auto&quot;,&quot;x&quot;:1500,&quot;y&quot;:360,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;c3f2847d.23f778&quot;,&quot;type&quot;:&quot;debug&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Truck in driveway!&quot;,&quot;active&quot;:true,&quot;tosidebar&quot;:true,&quot;console&quot;:false,&quot;tostatus&quot;:false,&quot;complete&quot;:&quot;payload&quot;,&quot;targetType&quot;:&quot;msg&quot;,&quot;statusVal&quot;:&quot;&quot;,&quot;statusType&quot;:&quot;auto&quot;,&quot;x&quot;:1280,&quot;y&quot;:460,&quot;wires&quot;:[]},{&quot;id&quot;:&quot;d4b0f034.d300e&quot;,&quot;type&quot;:&quot;switch&quot;,&quot;z&quot;:&quot;644db6a5.3303b8&quot;,&quot;name&quot;:&quot;Smart 
Filtered&quot;,&quot;property&quot;:&quot;payload.is_smart_filtered&quot;,&quot;propertyType&quot;:&quot;msg&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;eq&quot;,&quot;v&quot;:&quot;1&quot;,&quot;vt&quot;:&quot;num&quot;},{&quot;t&quot;:&quot;eq&quot;,&quot;v&quot;:&quot;0&quot;,&quot;vt&quot;:&quot;num&quot;}],&quot;checkall&quot;:&quot;true&quot;,&quot;repair&quot;:false,&quot;outputs&quot;:2,&quot;x&quot;:1280,&quot;y&quot;:400,&quot;wires&quot;:[[&quot;bef2ccdc.a5b72&quot;],[&quot;4be162e5.fa3cbc&quot;]]},{&quot;id&quot;:&quot;e7767a8a.ec2a28&quot;,&quot;type&quot;:&quot;mqtt-broker&quot;,&quot;name&quot;:&quot;HA-Broker&quot;,&quot;broker&quot;:&quot;192.168.1.53&quot;,&quot;port&quot;:&quot;1883&quot;,&quot;clientid&quot;:&quot;&quot;,&quot;usetls&quot;:false,&quot;compatmode&quot;:false,&quot;keepalive&quot;:&quot;60&quot;,&quot;cleansession&quot;:true,&quot;birthTopic&quot;:&quot;&quot;,&quot;birthQos&quot;:&quot;0&quot;,&quot;birthPayload&quot;:&quot;&quot;,&quot;closeTopic&quot;:&quot;&quot;,&quot;closeQos&quot;:&quot;0&quot;,&quot;closePayload&quot;:&quot;&quot;,&quot;willTopic&quot;:&quot;&quot;,&quot;willQos&quot;:&quot;0&quot;,&quot;willPayload&quot;:&quot;&quot;}]</code></pre><figcaption>Node-RED code</figcaption></figure><p>Thanks for reading!</p>]]></content:encoded></item><item><title><![CDATA[Last Watch AI - Ubuntu Installation and Upgrade Guide]]></title><description><![CDATA[<p>The preferred method to install Last Watch on Ubuntu is directly from the source code. Installing from source makes it much easier to upgrade in the future by just pulling the latest code and rebuilding. 
Alternatively, you can install using the official releases.</p><p><em>For a full walkthrough on setting up</em></p>]]></description><link>https://kleypot.com/last-watch-ai-ubuntu-installation-and-upgrading/</link><guid isPermaLink="false">63052aeb3ecc781c55057ca1</guid><category><![CDATA[home-automation]]></category><category><![CDATA[last-watch-ai]]></category><category><![CDATA[home-security]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Mon, 23 Nov 2020 16:33:00 GMT</pubDate><content:encoded><![CDATA[<p>The preferred method to install Last Watch on Ubuntu is directly from the source code. Installing from source makes it much easier to upgrade in the future by just pulling the latest code and rebuilding. Alternatively, you can install using the official releases.</p><p><em>For a full walkthrough on setting up Last Watch and integrating with an NVR system, see [coming soon...].</em></p><h2 id="dependencies">Dependencies</h2><p>First, make sure you have Docker installed. Last Watch runs in Docker containers.</p><ol><li><strong>Install Docker Engine</strong> - <a href="https://docs.docker.com/engine/install/ubuntu/">https://docs.docker.com/engine/install/ubuntu/</a></li><li><strong>Install Docker Compose</strong> - <a href="https://docs.docker.com/compose/install/">https://docs.docker.com/compose/install/</a></li></ol><h2 id="installing-from-source">Installing From Source</h2><p><em>I typically install everything in the home directory of a non-root user.</em></p><p><strong>1. Clone the source code</strong> and go to the application root folder</p><pre><code class="language-BASH">git clone https://github.com/akmolina28/last-watch-ai.git &amp;&amp; cd last-watch-ai</code></pre><p><strong>2. Set up your configuration</strong> by creating and editing the <code>.env</code> file in the application root folder.</p><pre><code class="language-BASH">cp .env.example .env
nano .env</code></pre><figure class="kg-card kg-image-card"><img src="https://kleypot.com/content/images/2020/11/image-28.png" class="kg-image" alt loading="lazy" width="624" height="250" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-28.png 600w, https://kleypot.com/content/images/2020/11/image-28.png 624w"></figure><ul><li>Set WATCH_FOLDER to the path for your input files</li><li>Set other settings as desired</li></ul><p><strong>3. Build the application. </strong>This will install dependencies, set up the app keys, symlinks, database, and compile/optimize the code.</p><pre><code class="language-BASH">sudo cp src/.env.example src/.env &amp;&amp;
sudo docker-compose up -d mysql &amp;&amp;
sudo docker-compose run --rm composer install --optimize-autoloader --no-dev &amp;&amp;
sudo docker-compose run --rm artisan route:cache &amp;&amp;
sudo docker-compose run --rm artisan key:generate --force &amp;&amp;
sudo docker-compose run --rm artisan storage:link &amp;&amp;
sudo docker-compose run --rm artisan migrate --force &amp;&amp;
sudo docker-compose run --rm npm install --verbose &amp;&amp;
sudo docker-compose run --rm npm run prod --verbose</code></pre><p><strong>4. Bring up the containers</strong> (downloading the images can take a few minutes).</p><pre><code class="language-BASH">sudo docker-compose up -d --build site</code></pre><p>Now, from your network you can access the web app using the IP of your server and the port number in the <code>.env</code> file.</p><h2 id="upgrading-from-source">Upgrading From Source</h2><p><em>This assumes you installed from source.</em></p><p><strong>1. Stop the containers during the upgrade</strong></p><pre><code class="language-BASH">cd /path/to/last-watch-ai

sudo docker-compose down</code></pre><p><strong>2. Pull latest source code</strong></p><pre><code class="language-BASH">git pull</code></pre><p><strong>3. Re-build application</strong></p><pre><code class="language-BASH">sudo docker-compose run --rm composer install --optimize-autoloader --no-dev &amp;&amp;
sudo docker-compose run --rm artisan route:cache &amp;&amp;
sudo docker-compose run --rm artisan migrate --force &amp;&amp;
sudo docker-compose run --rm npm install --verbose &amp;&amp;
sudo docker-compose run --rm npm rebuild &amp;&amp;
sudo docker-compose run --rm npm run prod --verbose</code></pre><p>The re-build installs any new dependencies, re-compiles the code, and migrates your database to the latest version. Once the rebuild finishes, you may need to refresh your browser.</p><p><strong>4. Restart the containers</strong></p><pre><code class="language-BASH">sudo docker-compose up -d --build site</code></pre><h2 id="install-from-release-version">Install From Release Version</h2><p>You also have the option to install a pre-compiled release. The releases are primarily meant to simplify things for Windows users, but they also work on Ubuntu. There is just one extra step at the end to re-create the symlinks, which don&apos;t work out-of-the-box on Linux.</p><p><em>I typically install everything in the home directory of a non-root user.</em></p><p><strong>1. Download/unzip the latest release</strong> from GitHub using the command below, or by grabbing the desired release from the <a href="https://github.com/akmolina28/last-watch-ai/releases">Releases</a> page.</p><pre><code class="language-BASH">curl -s https://api.github.com/repos/akmolina28/last-watch-ai/releases/latest \
| grep &quot;browser_download_url&quot; \
| cut -d : -f 2,3 \
| tr -d \&quot; \
| wget -qi -

unzip &lt;zip_file&gt;</code></pre><p><strong>2. Go to the application root folder</strong>, which is called <code>last-watch-ai</code> if you unzipped using the previous step.</p><pre><code class="language-BASH">cd last-watch-ai</code></pre><p><strong>3. Set up your configuration</strong> by editing the environment file in the application root folder.</p><pre><code class="language-BASH">nano .env</code></pre><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/11/image-27.png" class="kg-image" alt loading="lazy" width="624" height="250" srcset="https://kleypot.com/content/images/size/w600/2020/11/image-27.png 600w, https://kleypot.com/content/images/2020/11/image-27.png 624w"><figcaption>Example .env settings</figcaption></figure><ul><li>Set WATCH_FOLDER to the path for your input files.</li><li>Set other settings as desired.</li></ul><p><strong>4. Set file permissions</strong> for the web server.</p><pre><code class="language-BASH">sudo chown -R www-data:www-data src

sudo find src -type f -exec chmod 644 {} \;

sudo find src -type d -exec chmod 755 {} \;</code></pre><p><strong>5. Bring up the containers</strong> (downloading the images can take a few minutes)</p><pre><code class="language-BASH">sudo docker-compose up -d --build site</code></pre><p><strong>6. Set up symlinks </strong>(the symlinks in the pre-compiled release do not work on Linux and have to be recreated)</p><pre><code class="language-BASH">sudo docker exec -it lw_php rm /var/www/app/public/storage

sudo docker exec -it lw_php php artisan storage:link</code></pre><p>Now the web app should be up and running on port 8080 (or whichever port you configured).</p><h2 id="upgrading-to-release-version">Upgrading to Release Version</h2><p>You can also upgrade an existing install using a pre-compiled release. These steps should work even if you originally installed from source.</p><p><strong>1. Stop containers</strong></p><pre><code class="language-BASH">sudo docker stop $(sudo docker ps -a -q)

sudo docker rm $(sudo docker ps -a -q)</code></pre><p><strong>2. Move previous install to a backup folder</strong></p><pre><code class="language-BASH">mv last-watch-ai last-watch-ai_bak</code></pre><p><strong>3. Download/unzip the latest release</strong> from GitHub using the command below, or by grabbing the desired release from the <a href="https://github.com/akmolina28/last-watch-ai/releases">Releases</a> page.</p><pre><code class="language-BASH">curl -s https://api.github.com/repos/akmolina28/last-watch-ai/releases/latest \
| grep &quot;browser_download_url&quot; \
| cut -d : -f 2,3 \
| tr -d \&quot; \
| wget -qi -

unzip &lt;zip_file&gt;</code></pre><p><strong>4. Copy over the environment file</strong> from your previous install</p><pre><code class="language-BASH">cp last-watch-ai_bak/.env last-watch-ai/.env</code></pre><p><strong>5. Migrate your app data</strong> (unless you want to start fresh)</p><p> &#xA0; &#xA0;a. Copy over the database</p><pre><code class="language-BASH">rm -rf last-watch-ai/mysql

cp -R last-watch-ai_bak/mysql last-watch-ai/mysql</code></pre><p> &#xA0; &#xA0;b. Copy over your mask files</p><pre><code class="language-BASH">cp -R last-watch-ai_bak/src/storage/app/public/masks/. last-watch-ai/src/storage/app/public/masks/</code></pre><p> &#xA0; &#xA0;c. Go to the new application root and run the migrations to update your database</p><pre><code class="language-BASH">cd last-watch-ai
			
sudo docker-compose run --rm artisan migrate</code></pre><p><strong>6. Set file permissions</strong> for the web server</p><pre><code class="language-BASH">sudo chown -R www-data:www-data src

sudo find src -type f -exec chmod 644 {} \;

sudo find src -type d -exec chmod 755 {} \;</code></pre><p><strong>7. Bring up the containers</strong></p><pre><code class="language-BASH">sudo docker-compose up -d --build site</code></pre><p><strong>8. Set up symlinks</strong></p><pre><code class="language-BASH">sudo docker exec -it lw_php rm /var/www/app/public/storage

sudo docker exec -it lw_php php artisan storage:link</code></pre><p>Now the updated app should be up and running. You may need to refresh your browser to empty your cache.</p><h2 id="uninstalling">Uninstalling</h2><p>To uninstall Last Watch, you first have to stop and remove the containers. Then you can simply delete the application folder to remove it.</p><p><strong>1. Stop and remove the containers</strong></p><pre><code>cd /path/to/last-watch-ai

sudo docker-compose down</code></pre><p><strong>2. Delete the application</strong></p><pre><code class="language-BASH">sudo rm -rf /path/to/last-watch-ai</code></pre>]]></content:encoded></item><item><title><![CDATA[Last Watch - Getting Started Guide]]></title><description><![CDATA[<p>This guide will introduce the key concepts and features of Last Watch by walking through a very basic setup. More advanced features will also be introduced at the end of this guide. If you have not installed Last Watch yet, please refer to the <a href="https://kleypot.com/last-watch-ai-windows-setup/">setup guide</a>.</p><h2 id="1-detection-profiles">1. Detection Profiles</h2><p>The</p>]]></description><link>https://kleypot.com/last-watch-ai-user-guide/</link><guid isPermaLink="false">63052aeb3ecc781c55057c9f</guid><category><![CDATA[home-automation]]></category><category><![CDATA[home-security]]></category><category><![CDATA[last-watch-ai]]></category><dc:creator><![CDATA[Andrew Molina]]></dc:creator><pubDate>Fri, 30 Oct 2020 17:01:03 GMT</pubDate><content:encoded><![CDATA[<p>This guide will introduce the key concepts and features of Last Watch by walking through a very basic setup. More advanced features will also be introduced at the end of this guide. If you have not installed Last Watch yet, please refer to the <a href="https://kleypot.com/last-watch-ai-windows-setup/">setup guide</a>.</p><h2 id="1-detection-profiles">1. Detection Profiles</h2><p>The first step to building automations is to create a <strong>Detection Profile</strong>. These profiles are how you define which image files Last Watch will look for, and what types of objects to search for in those images. 
You can think of profiles as filters that sort out the image events coming in.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image.png" class="kg-image" alt loading="lazy" width="887" height="594" srcset="https://kleypot.com/content/images/size/w600/2020/10/image.png 600w, https://kleypot.com/content/images/2020/10/image.png 887w"><figcaption>Creating a profile for the driveway camera</figcaption></figure><p>Every profile must have a File Pattern. The <strong>File Pattern</strong> is a search string for images that come in. If your NVR software generates jpegs like &quot;driveway-cam_20201029123255.jpg&quot;, a good search string would be &quot;driveway-cam&quot;. You can also use Regex for more advanced pattern matching.</p><p>You must also select one or more <strong>Relevant Objects</strong> for each profile. When an image file is matched to the profile, Last Watch will run the AI and search for those objects.</p><p>The rest of the settings are optional ways to further refine the profile:</p><ul><li><strong>Minimum Confidence</strong> - how sure the AI must be about what it finds</li><li><strong>Mask File</strong> - a bitmap which defines areas for the AI to ignore</li><li><strong>Smart Filtering</strong> - ignore objects which remain stationary, such as a car parked in a driveway</li></ul><h2 id="2-detection-events">2. Detection Events</h2><p>When new image files come in, Last Watch will create a Detection Event for each image. 
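As a rough illustration of the profile-matching step described in the previous section (a plain substring File Pattern, or a regex), here is a minimal Python sketch — the filenames and patterns are hypothetical, and Last Watch's exact matching rules may differ:

```python
import re

# Hypothetical incoming filename from an NVR (illustrative only).
filename = "driveway-cam_20201029123255.jpg"

# Plain substring File Pattern, e.g. "driveway-cam":
substring_match = "driveway-cam" in filename

# Regex File Pattern that also pins the 14-digit timestamp format:
regex_match = re.search(r"driveway-cam_\d{14}\.jpg$", filename) is not None

print(substring_match, regex_match)  # True True
```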
</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-1.png" class="kg-image" alt loading="lazy" width="1511" height="679" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-1.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-1.png 1000w, https://kleypot.com/content/images/2020/10/image-1.png 1511w" sizes="(min-width: 1200px) 1200px"><figcaption>Detection Events for the driveway camera</figcaption></figure><p>If the image matches one or more of your profiles, then the AI is run to check the image for relevant objects. If the image does contain relevant objects (and they are not masked or filtered out), then the event is marked <strong>Relevant</strong> and the profile&apos;s Automations are triggered.</p><p>From the Detection Events page, you can click into each event to see more details:</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-2.png" class="kg-image" alt loading="lazy" width="1463" height="1028" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-2.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-2.png 1000w, https://kleypot.com/content/images/2020/10/image-2.png 1463w" sizes="(min-width: 1200px) 1200px"><figcaption>Detection Event details</figcaption></figure><p>The original image is shown, along with each Detection Profile which was matched and tested for relevance. If you click the profile name you can see the relevant objects with their confidence levels.</p><p>You can also see, in this case, that zero <strong>Automations</strong> were run. That is because none are set up by default.</p><h2 id="3-automations">3. 
Automations</h2><p>When a relevant event is generated, you typically want to run one or more <strong>Automations </strong>such as sending the image via Telegram, or making a web request to your home automation system.</p><p>Each automation must be defined using the menu options in the navbar:</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-3.png" class="kg-image" alt loading="lazy" width="1676" height="929" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-3.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-3.png 1000w, https://kleypot.com/content/images/size/w1600/2020/10/image-3.png 1600w, https://kleypot.com/content/images/2020/10/image-3.png 1676w" sizes="(min-width: 1200px) 1200px"><figcaption>Automations Menu</figcaption></figure><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-4.png" class="kg-image" alt loading="lazy" width="1671" height="1069" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-4.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-4.png 1000w, https://kleypot.com/content/images/size/w1600/2020/10/image-4.png 1600w, https://kleypot.com/content/images/2020/10/image-4.png 1671w" sizes="(min-width: 720px) 720px"><figcaption>Setting up a Telegram Bot</figcaption></figure><p>Once you have configured some Automations, they need to be linked to your profile(s). 
From the Detection Profiles page, you can follow the Automations link for each profile to set up the behavior.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-5.png" class="kg-image" alt loading="lazy" width="1167" height="672" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-5.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-5.png 1000w, https://kleypot.com/content/images/2020/10/image-5.png 1167w" sizes="(min-width: 720px) 720px"><figcaption>Automations for the driveway camera profile</figcaption></figure><p>Now the profile is complete and the Automations will begin to trigger when relevant events are generated!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-6.png" class="kg-image" alt loading="lazy" width="1000" height="704" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-6.png 600w, https://kleypot.com/content/images/2020/10/image-6.png 1000w" sizes="(min-width: 720px) 720px"><figcaption>Telegram Bot feed</figcaption></figure><h2 id="4-advanced-settings">4. Advanced Settings</h2><p>So far, this guide has demonstrated the most basic functionality available. Here are some more advanced features to further refine your profiles.</p><h3 id="mask-files">Mask Files</h3><p>You can mask out specific areas for the AI to ignore by creating and uploading a <strong>Mask File</strong>. The mask file is an image with the same dimensions as the input files, where the &quot;masked out&quot; areas are shaded in. This is useful if you want to mask out areas that you don&apos;t care about, like a public street or a neighbor&apos;s property.</p><p>When the AI detects an object like a car or person, if most of the object is in the masked area, then the object is masked out and not considered for relevance. 
When you view an event that has a masked profile, you will see the mask rendered out when you click on that profile.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-7.png" class="kg-image" alt loading="lazy" width="1652" height="862" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-7.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-7.png 1000w, https://kleypot.com/content/images/size/w1600/2020/10/image-7.png 1600w, https://kleypot.com/content/images/2020/10/image-7.png 1652w" sizes="(min-width: 720px) 720px"><figcaption>Street and neighbor&apos;s driveway masked out</figcaption></figure><p>As of release 0.4.0, you have to create the Mask File manually and upload it when creating your profile. This can be done in the photo editor of your choice, such as GIMP, which is available for free. Here is <em>a walkthrough using GIMP (coming soon)</em>.</p><h3 id="smart-filtering">Smart Filtering</h3><p>Another option you have when creating a profile is called <strong>Smart Filtering</strong>. This option will try to ignore duplicate detection events. For example, maybe you don&apos;t want to constantly trigger your automations because a car is sitting stationary in the driveway. The object is relevant, but it&apos;s not moving.</p><p>Smart Filtering will compare each event to the previous event to see if the relevant objects were already present in the same area. If the same type of object is in the same position through both events, then the object is &quot;filtered&quot; out and not considered for relevance.</p><p>The <strong>Smart Filtering Precision</strong> setting controls how much the two objects must overlap to be filtered out. 
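As a rough sketch of the overlap test implied here, intersection-over-union of the two bounding boxes is one plausible metric — the actual comparison and threshold are internal to Last Watch, so treat this as illustrative only:

```python
def box_overlap(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) bounding boxes."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / union if union else 0.0

# A car that barely moved between two events overlaps heavily:
print(box_overlap((100, 100, 300, 250), (105, 102, 305, 252)))  # ~0.93
```

Under this reading, a higher precision setting would correspond to requiring a larger overlap before an object is filtered out.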
The default setting is a good baseline but you may need to adjust it if you feel the filtering is too relaxed or too aggressive.</p><h3 id="enabling-disabling-profiles">Enabling/Disabling Profiles</h3><p>You also have the option to Enable and Disable individual profiles, or run them on a schedule. For example, you could have a schedule that looks for people in your driveway during the night, and triggers automations to trip an alarm.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-8.png" class="kg-image" alt loading="lazy" width="1695" height="864" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-8.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-8.png 1000w, https://kleypot.com/content/images/size/w1600/2020/10/image-8.png 1600w, https://kleypot.com/content/images/2020/10/image-8.png 1695w" sizes="(min-width: 1200px) 1200px"><figcaption>Changing the profile status</figcaption></figure><p>Profiles can also be enabled/disabled using the API. 
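For example, here is a minimal Python sketch (standard library only) of the same PUT request the Node-RED flow below makes; since the API is not yet formalized as of 0.4.0, treat the route and payload as subject to change:

```python
import json
import urllib.request

def profile_status_request(profile_id, status, host="http://127.0.0.1:8080"):
    """Build a PUT request that sets a Detection Profile's status
    ("enabled" or "disabled"), matching the Node-RED flow's endpoint."""
    body = json.dumps({"status": status}).encode()
    return urllib.request.Request(
        f"{host}/api/profiles/{profile_id}/status",
        data=body,
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# To actually send it (requires a running Last Watch instance):
# urllib.request.urlopen(profile_status_request(1, "disabled"))
```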
If you already have home automation like Home Assistant or openHAB, you could create your own automations to turn profiles on or off:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://kleypot.com/content/images/2020/10/image-10.png" class="kg-image" alt loading="lazy" width="1120" height="341" srcset="https://kleypot.com/content/images/size/w600/2020/10/image-10.png 600w, https://kleypot.com/content/images/size/w1000/2020/10/image-10.png 1000w, https://kleypot.com/content/images/2020/10/image-10.png 1120w" sizes="(min-width: 720px) 720px"><figcaption>NODE-Red Example</figcaption></figure><figure class="kg-card kg-code-card"><pre><code class="language-JSON">[{&quot;id&quot;:&quot;e7c6cfa7.8101f&quot;,&quot;type&quot;:&quot;http request&quot;,&quot;z&quot;:&quot;57c05c38.dd9c14&quot;,&quot;name&quot;:&quot;Last Watch - Driveway Profile&quot;,&quot;method&quot;:&quot;PUT&quot;,&quot;ret&quot;:&quot;txt&quot;,&quot;paytoqs&quot;:&quot;ignore&quot;,&quot;url&quot;:&quot;http://127.0.0.1:8080/api/profiles/1/status&quot;,&quot;tls&quot;:&quot;&quot;,&quot;persist&quot;:false,&quot;proxy&quot;:&quot;&quot;,&quot;authType&quot;:&quot;&quot;,&quot;x&quot;:520,&quot;y&quot;:160,&quot;wires&quot;:[[]]},{&quot;id&quot;:&quot;e7ed3def.d6c3d&quot;,&quot;type&quot;:&quot;change&quot;,&quot;z&quot;:&quot;57c05c38.dd9c14&quot;,&quot;name&quot;:&quot;Enable&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;payload&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;{\&quot;status\&quot;: 
\&quot;enabled\&quot;}&quot;,&quot;tot&quot;:&quot;json&quot;}],&quot;action&quot;:&quot;&quot;,&quot;property&quot;:&quot;&quot;,&quot;from&quot;:&quot;&quot;,&quot;to&quot;:&quot;&quot;,&quot;reg&quot;:false,&quot;x&quot;:310,&quot;y&quot;:160,&quot;wires&quot;:[[&quot;e7c6cfa7.8101f&quot;]]},{&quot;id&quot;:&quot;586c3090.44936&quot;,&quot;type&quot;:&quot;inject&quot;,&quot;z&quot;:&quot;57c05c38.dd9c14&quot;,&quot;name&quot;:&quot;&quot;,&quot;props&quot;:[{&quot;p&quot;:&quot;payload&quot;},{&quot;p&quot;:&quot;topic&quot;,&quot;vt&quot;:&quot;str&quot;}],&quot;repeat&quot;:&quot;&quot;,&quot;crontab&quot;:&quot;&quot;,&quot;once&quot;:false,&quot;onceDelay&quot;:0.1,&quot;topic&quot;:&quot;&quot;,&quot;payload&quot;:&quot;&quot;,&quot;payloadType&quot;:&quot;date&quot;,&quot;x&quot;:160,&quot;y&quot;:160,&quot;wires&quot;:[[&quot;e7ed3def.d6c3d&quot;]]},{&quot;id&quot;:&quot;f09231a2.74374&quot;,&quot;type&quot;:&quot;change&quot;,&quot;z&quot;:&quot;57c05c38.dd9c14&quot;,&quot;name&quot;:&quot;Disable&quot;,&quot;rules&quot;:[{&quot;t&quot;:&quot;set&quot;,&quot;p&quot;:&quot;payload&quot;,&quot;pt&quot;:&quot;msg&quot;,&quot;to&quot;:&quot;{\&quot;status\&quot;: 
\&quot;disabled\&quot;}&quot;,&quot;tot&quot;:&quot;json&quot;}],&quot;action&quot;:&quot;&quot;,&quot;property&quot;:&quot;&quot;,&quot;from&quot;:&quot;&quot;,&quot;to&quot;:&quot;&quot;,&quot;reg&quot;:false,&quot;x&quot;:320,&quot;y&quot;:220,&quot;wires&quot;:[[&quot;e7c6cfa7.8101f&quot;]]},{&quot;id&quot;:&quot;27ec8817.0ac578&quot;,&quot;type&quot;:&quot;inject&quot;,&quot;z&quot;:&quot;57c05c38.dd9c14&quot;,&quot;name&quot;:&quot;&quot;,&quot;props&quot;:[{&quot;p&quot;:&quot;payload&quot;},{&quot;p&quot;:&quot;topic&quot;,&quot;vt&quot;:&quot;str&quot;}],&quot;repeat&quot;:&quot;&quot;,&quot;crontab&quot;:&quot;&quot;,&quot;once&quot;:false,&quot;onceDelay&quot;:0.1,&quot;topic&quot;:&quot;&quot;,&quot;payload&quot;:&quot;&quot;,&quot;payloadType&quot;:&quot;date&quot;,&quot;x&quot;:160,&quot;y&quot;:220,&quot;wires&quot;:[[&quot;f09231a2.74374&quot;]]}]</code></pre><figcaption>Node-RED code</figcaption></figure><h3 id="restful-api">RESTful API</h3><p>Note that <em>everything</em> you can do with the web interface can also be done with the API. In fact, the web app is just a skin that sits on top of the API. If you hate the web app, you could create your own interface from scratch or you could manage everything from your home automation system.</p><p>As of release 0.4.0, the API is not yet mature and there is no real documentation. For now, you can use the developer console in your web browser to see how the web app is interacting with the API as a guide for making your own calls. </p><p>Beware that the API is subject to breaking changes. A release is planned soon which is focused on formally rolling out the API which will likely contain many of these breaking changes.</p>]]></content:encoded></item></channel></rss>