silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
As requested by [personal profile] azurelunatic, who has people interested in how I can use my voice commands to make sure that if I need to go do something else, I don't miss out on anything that's happened on the TV or the movie in the interim.

This is a thing that requires four components to work in sequence - the voice assistant, the smart home brain, a programmable IR blaster, and an IR receiver attached to the device that's driving the video stream. So our workflow goes: Rhasspy → Home Assistant → Broadlink IR blaster → FLIRC receiver.

Rhasspy has featured from the very beginning of this series, along with the intent scripts that pair voice input up with actions. The key insight into making this work was getting Rhasspy to understand numeric input and pass those numbers to Home Assistant as parameters: I need a sentence to train Rhasspy on that collects the correct parameters, plus an intent script and a supplemental script on the Home Assistant side that format those parameters the way Home Assistant expects for a timer, start the appropriately-named timer with them, and then repeat back what function it heard me invoke and any parameters that were passed to it. The time-out script accepts parameters or, if none are provided, uses the default value that I've set for it (two minutes, which is the approximate length of a commercial break in the United States). The time-out timer runs the same way as any other timer helper or arbitrary timer, and an automation in Home Assistant listens for that timer running out and runs the appropriate actions when it expires.
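
For the curious, the shape of the thing looks roughly like this. On the Rhasspy side, a training sentence in sentences.ini with a number range tagged as a slot (a simplified sketch, not my exact sentence file):

    [SetTimeOut]
    set a time out [for (1..60){minutes} minutes]

And on the Home Assistant side, an intent script that formats the number into a duration, starts the timer, and repeats back what it heard (the intent, timer, and entity names here are placeholders, not my actual configuration):

    intent_script:
      SetTimeOut:
        action:
          - service: timer.start
            target:
              entity_id: timer.time_out
            data:
              # Fall back to the two-minute default when no number was spoken.
              duration: "00:{{ '%02d' | format(minutes | default(2) | int) }}:00"
        speech:
          text: "Starting a time out for {{ minutes | default(2) }} minutes."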

What happens when that timer runs out is, essentially, that the IR blaster fires a specific code that the FLIRC receiver interprets as the "pause" key having been pressed on a remote. Which sounds like it's easy, but it's not, because that IR blaster has no database of codes to look up to know what to send, and instead has to be taught what various IR codes are by having them beamed at it from a convenient remote. The FLIRC is just an IR receiver and interpreter. It can't transmit anything to the IR blaster to teach it anything. And while we have wireless keyboards for the convenience of the ten-foot interface, those keyboards don't transmit IR signals to a receiver. (Nor do we want them to.)

What I do have, and have had, are programmable remotes, like a Logitech Harmony (no longer manufactured) or the Skip1s remote from the FLIRC folks. They do have databases of IR codes that can be downloaded into the remote, and that gives our IR blaster something to learn from. (Digression: The FLIRC can be used as a receiver to learn codes from an OEM remote and then teach those codes to the Skip1s, if the device isn't in the Skip database, but that's very much advanced fooling-about with both of those devices, and if you have the OEM remote, you can just teach the Broadlink device directly, rather than going through a Skip or other universal remote. I use programmable universal remotes because I want one remote to control all the things, rather than having to deal with multiple remotes to adjust things. And because while using the voice assistant is great, I don't want to have to use it (or a smartphone app) to do all the sound and picture-related remote control stuff.)

Getting IR codes into the IR blaster is actually an adventure in building another callable script that sets the blaster into learning mode and then goes through a sequence of "waiting for command" inputs, where, in the script, I tell it what device it's learning how to control and what the name of the command it's learning is. Thankfully, the process for getting the blaster to learn, and then getting it to send commands, is very well-documented in Home Assistant, far better than the underlying Python library that the integration is based on. It's a very manual process, but when set up right, I only have to do it once, and the IR blaster remembers what it's been taught and can then turn around and transmit the same thing. That's remote → IR blaster → FLIRC receiver, and then I can use either the remote or the IR blaster as needed.
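
In Home Assistant terms, the learning side of that boils down to calls like this one, assuming the Broadlink integration's remote.learn_command service (the entity, device, and command names are made-up placeholders):

    script:
      teach_pause_key:
        sequence:
          # Puts the blaster into learning mode; it then waits for the
          # teaching remote to beam the key at it.
          - service: remote.learn_command
            target:
              entity_id: remote.living_room_blaster
            data:
              device: flirc_pc
              command: pause

Once the key has been captured, the learned code is stored by the integration and the same device/command pair can be replayed with remote.send_command.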

Since Home Assistant can use the IR blaster to transmit to the receiver, the actual automation listening for the end of the time-out timer has one command associated with it - use the IR blaster to transmit a press of the "pause" key to the receiver. So long as the receiver is within range to receive, it will receive the keypress and do the associated action. Thus, the whole chain completes, voice command to script to timer to automation to IR transmission to keypress. It doesn't feel like a complex operation, because when it works, it works, and it doesn't invite contemplation of all the parts that have to work with each other to achieve the equivalent of pressing a key on a remote at the right time.
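
The automation itself is short; something along these lines (again, the entity, device, and command names are placeholders):

    automation:
      - alias: "Pause playback when the time-out expires"
        trigger:
          # Fires when the time-out timer helper finishes.
          - platform: event
            event_type: timer.finished
            event_data:
              entity_id: timer.time_out
        action:
          # Replay the learned "pause" code at the FLIRC receiver.
          - service: remote.send_command
            target:
              entity_id: remote.living_room_blaster
            data:
              device: flirc_pc
              command: pause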

There are drawbacks to this setup. The biggest and most obvious one is that the pause key will only work on whatever item currently has focus on the receiving computer. If you tend to use picture-in-picture to watch multiple streams and/or listen to multiple sounds, the time-out keypress will only hit the one that the underlying operating system believes is currently in focus. As far as I know, there's no "pause all/resume all" that can be transmitted and interpreted from the things I have available. (After all, even though computers are very good at doing multiple tasks in sequence, humans are thought to be the kind of beings that only want to concentrate on one thing at a time, so why would you need something that pauses and/or resumes everything at once?) If that does actually exist, then I'll do my best to figure out how to incorporate it into scripting and teach the IR blaster / receiver pair that this is what I want.

Second, what the operating system believes is in focus and what the human believes is in focus are not always the same, so sometimes when I'm trying to get one thing to pause, it turns out the operating system has focus on something else. That requires manual intervention to get the focus where it should be, and at that point, you're already on the machine that needs to be stopped, so you may as well just click the right pause button yourself.

And third, not all programs, sites, streams, and the like respond correctly to a pause key, so there are occasions where I could push all the pause keys I wanted to, in whatever way I wanted to, and nothing would happen, because the site or program doesn't recognize it or has locked out that particular input from going through until some other thing is cleared, acknowledged, or otherwise managed. Thankfully, the number of situations where this has happened to me is pretty small, and there are sometimes efficient and effective workarounds to this problem, like the aforementioned picture-in-picture pop-out, which is usually properly responsive when it's the thing that has focus, regardless of what is going on with the underlying website and its playback controls.

This idea is also very scalable and configurable. So long as you can get the blaster to learn the appropriate command from another remote, or you have a working base64 encoding of the code that you can drop directly into the learned-codes file, and the receiver on the other end knows what to do with the codes it receives, you can scale this idea to basically anything you might want to do with a remote control. I'd suggest using it only for things where you can tolerate a gap between commands for execution, or where your workflow can immediately begin recording a new input from the voice assistant after the last command has finished executing. Your own setup will depend on which applications you want to control and their potential quirks, but it is rather nice to have the option in place to either let the commercials play and pause before the action resumes, or give yourself until a breaking point and then have the media pause automatically, so that you can task-switch without FOMO, or so that you can get your brain in gear to do the actual switching instead of just continuing to binge whatever it is that you're doing. (Kickstarting executive function with computers can be really helpful, if for no other reason than that they will do exactly what you tell them to do, on time.)
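
For reference, the learned-codes file lives in Home Assistant's .storage directory and, as best I can tell, maps device names to command names to base64 packets, roughly like this (the key and the code string below are invented placeholders, not a real capture):

    {
      "version": 1,
      "key": "broadlink_remote_aabbccddeeff_codes",
      "data": {
        "flirc_pc": {
          "pause": "JgBQAAABKJQTE..."
        }
      }
    }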
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
Okay, so, the inciting incident for this is that my venerable and useful routers were declared End of Life by their manufacturer as of the end of this year, which meant that the slightly aftermarket firmware that I was using on them would also be discontinued then. Cue me trying to figure out whether or not I needed to purchase some replacement hardware, and what kind of network I would want to set up if I did need to do so. Mesh networking seemed like a useful option, given that there were, based on current router placement, a few dead spots in the house, or places that got less-than-optimal signal. But so could deploying some access points and running cable to them from where the router was. And if I was running cable anyway, then I could just move the current router to a more optimal position and see if that managed the dead zones.

First, the router flashing and the re-setting of the network names and assigning IPs to devices on the network, rather than letting DHCP do all the work )

Second, setting up the environment for a set of scripts to run properly involving re-partitioning, reformatting, and soft-bricking a file system on a flash drive )

Third, after all this hardware flimflam, actually setting up the scripts and getting them to transmit properly )

[VICTORY FANFARE GOES HERE, IF A BIT CONFUSED.]

This one definitely goes in the column of "if it works, you've succeeded." There's got to be a better, cleaner, more elegant solution that somehow manages to notice when Home Assistant comes back on-line after the reboot and knows to rebroadcast all the discovery messages that have happened before, so Home Assistant jumps right back into understanding. Or I need to get better knowledge of how the discovery bits are structured, so that I can turn them into sensors that know to seek their own discovery after a restart, or so that the Home Assistant configuration already knows what topics to listen to for their values. Something that's both automated and flexible enough to adapt to the circumstances and to work with the tools that are available to me. Some of the documentation and community posts I've read about this suggest, however, that it is not that simple to collect a list of what topics have been generated so far on an MQTT broker. And if I can't get it to go in Busybox, then I'd probably have to do something on Entware, and that would still mean writing a script that's specifically listening for something to fire so that it can do something in return. The elegant solution has a significantly larger amount of complexity in it than the simple one, and if I really wanted something that was truly flexible and responsive, I could just have the router delete the file of things that have already been set up at regular intervals, so the topics would be continuously rebroadcast and never more than so many minutes away from coming back online, regardless of when I restarted Home Assistant. That sounds like a lot of unnecessary network traffic, though.

So I've done something else to make my Home Assistant more full of data, using a communication protocol that I've already set up on one machine and a suite of scripts that someone else has already designed for use with what is essentially an add-on system for the router. And then figured out how to (inelegantly) ensure that the sensor data would continue flowing after a scheduled restart of Home Assistant. Now that I'm on the other side of it, and of the network restructuring that took place before it, I can see how much less time this could have taken, had I gone straight to the correct (or at least the working) solution immediately, but a lot of how I get to the solutions I either accept or use until a better solution presents itself is reading docs and fucking around and finding out. Which, through its iterative nature, takes time and frustration and thinking and coming back to a problem after sleeping on it for a bit. And accepting that even if this solution is not the correct one, it does not necessarily mean that there isn't a correct one or that I cannot find it. And sometimes it means research. Not succeeding the first time, or the hundredth time, does not mean I am permanently a useless failure at everything. It only means that I have not succeeded this time at this one thing. (Which can be hard to remember when the weasels are biting.)
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
The Home Assistant is humming along smoothly at this point, probably because I haven't actually tried to do a whole lot with it in terms of extending its functionality, increasing it, or writing new functions for it to achieve. It's behaving stably, for the moment, even as we keep everything updated and fix issues when they come up, like when the robot vacuum's IP address shifted by one from its DHCP assignment and it took me weeks to figure out what the problem was. So this edition of Adventures in Computer Stuff is a lot about errors, fixing them, and using librarian skills to find solutions to those errors.

Adventures in Error Correction and Workarounds )

So, once again, we have managed to make computers work and do what we want them to, mostly through the skills of search and persistence and finding a workaround when the direct method doesn't work. Not because of superior technical prowess or any of the skills where I would be able to directly understand what went wrong and know what commands to run immediately to confirm the issue or fix it. It's why having machines that you can play around with is vital to your learning, because then you have the freedom to try solutions and restore from backups if those solutions make a bigger mess than you started with. And you learn a little bit more about how the systems work every time that you succeed, and sometimes more every time that you fail. Have fun, everyone!
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
Bit of a redux situation here, brought on by having done what is apparently the most dangerous thing to do to any Linux machine: run updates. The machine that had the MQTT-CEC bridge from the earlier situation got itself caught in a bind when things wanted to update, but one of the crucial libraries involved would not update itself to the new version that everything else needed. The attempt to do the update broke everything, and it was easier to wipe the machine and re-image it than to try and fix the snarl. Which put us back on a different path, with an operating system that once again is actually intended for the machine, rather than one that hasn't fully managed to hack its way into full compatibility yet. In the interim, though, there had been enough upgrades and new systems and things that it was worth going back to what had been. (Most importantly, getting Widevine on the thing was no longer difficult. Since it's a bedroom TV thing, it kind of needs to decode DRM-laden streams to be entertaining.) However, in my distaste for how the upgrade process had bombed spectacularly, I wiped the thing without copying off the old executable that managed the bridge. I figured I could reconstruct it when the new system was in place.

Not so much. But we do find a solution in the end, and it might be a superior one. )

[Victory Fanfare Plays, Somewhat Confused?]

As with so many of these things that involve automation, functionality, and other such kinds of things, where I didn't actually write anything, but instead poked, prodded, borrowed, and banged together something that appears to have worked correctly, I don't know that the thing that I did was "coding" in any meaningful sense. Engineering, absolutely. Debugging, maybe? Systems thinking and information professional work, definitely. And a healthy amount of Effing Around and Finding Out to see what happened when I changed things or inserted stuff. I'm glad it works, but I'm definitely having a moment of "can I be justifiably proud of having managed to put this together and found a working solution?" A+ for effort and persistence, certainly, and hooray that it works! Which are the things that I really do want to encourage for myself and others.

It might be something to do with feeling like my nerd credentials are somehow in jeopardy, or something, since I'm still somewhat resolutely an end-user type who appreciates good GUIs and tutorials, and borrows where possible, rather than trying to build everything myself. And that I often do things in the service of getting games to play, or to tinker and play, or (most often) because someone else has an idea of something they want to do and has asked me how they might go about doing it. Pay no attention to the "has successfully installed many different forms of Linux on different machines," "has installed aftermarket OSes on phones and tablets," "rescued a flash gone wrong by literally shorting a circuit to hard-boot a device into recovery mode," and other such things; they don't count because I followed tutorials, rather than editing the inodes manually with magnets, or tracing and then soldering together all the components of a machine and then writing the operating system and software for it. Yes, it's a silly worry. But there's always the urge to compare what I'm doing to the people who actually built the libraries and code that I'm using and say that I'm not really doing anything at all.

To some degree, that's why I'm documenting these things: So there's a record, yes, and also so that I can look at what I did and hopefully then recognize that what I did was still impressive if I think about it from the perspective of someone who doesn't have all of the background knowledge and experience that I have that absolutely makes difficult things seem easier to me because I'm starting on step five of the true process, rather than step one.
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
At the moment, most of the automations and controls that have been put in place for the Home Assistant are stable. There's a fair amount more to teach to the remotes so they can handle more of the controls in the rooms they're in, but for the most part, there hasn't been a whole lot of need to add more functions or figure out a method to do a thing that has been requested of me. (Which just means that a new thing hasn't caught our attention at this point and I haven't been tasked with figuring out how to make it work.)

What that does mean, however, is making sure that the things that are in place, the duct tape and string holding it all together, are still strong and properly reinforced. Sometimes that means making sure the glue script for turning a TV on and off over the MQTT/CEC bridge still works and has been put properly into place when the attached SBC changes to a different operating system. (And finding out, delightfully, that a different method of controlling the screen works just fine even if one of the libraries is tied up in the bridge.) Sometimes it means shuffling some methods over to make sure that there's still good remote access for when you want to do work on one of the other computers without disturbing the person who's in the room with it. (Because the library you had been relying on to do that upgraded itself and the program that worked with it didn't upgrade to the new version.)
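
A glue script like that doesn't have to be anything fancy; a minimal sketch of the idea, assuming mosquitto_sub and cec-client are available on the SBC (the broker address and topic name are placeholders, not the real ones):

    #!/bin/sh
    # Listen for power messages on an MQTT topic and translate them
    # into HDMI-CEC commands for the attached display.
    mosquitto_sub -h 192.168.1.10 -t home/bedroom/tv/power | while read -r msg; do
      case "$msg" in
        on)  echo "on 0"      | cec-client -s -d 1 ;;
        off) echo "standby 0" | cec-client -s -d 1 ;;
      esac
    done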

For this particular situation, though, it was the announcement from one of the API providers that I've been using for weather and other data, saying "We're shutting down the always-free, needs-no-payment-method-on-file API access and moving all of you over to access that's still free for a limited number of calls, but charged after that." At which point I said, "Well, that's a service that I'm no longer going to be using." What that means, though, is that the one service I was pulling weather, Air Quality Index, and Ultraviolet Index data from needed replacements. I was about to find out how good I had gotten, and how good my design of automations was, because I had to do some wholesale data source replacements.

And the replacement is on )

Having found new sources, I just had to rewire all the appropriate automations and commands to use the new data sources, which didn't take long, just repeatedly having to change from one source to another in all the places where the old one had been. And within the last two days, all of the automations that were running on new sources fired correctly, did their actions, and informed us about what was going on in the world and what might be a good action for us to take. So it's good to know that the things I have set up were set up correctly, so that I could plug new and different sources into them and still have them work correctly. It gives me confidence, both in the way the Home Assistant is set up and in my own abilities to get it to do what I want, that this kind of source change was the work of a day to find, add, and rewire everything to run on different sources. This seems like another one of those situations where it looked easy to me but may have involved more skill than I think it did.
silveradept: A kodama with a trombone. The trombone is playing music, even though it is held in a rest position (Default)
This is more of an assorted miscellaneous, et cetera kind of post, but it does have a few important elements to it involving home automation and working with technology in general.

Backups, Knowing Where To Put The Chalk Mark, and DIYing Universal Remote Codes )

In any case, you can see that sometimes getting things to work the way you want them to involves diving down a few Vittra warrens (or following a dog into Nisse space) and having to do a little work to get things to go for yourself. Which is, to some degree, expected when working with community programs and services. You always want not to have to do it, because it's much more likely for something to get widely adopted if it Just Works for everyone or nearly everyone, but if there are going to be edge cases or situations where someone has to roll up their sleeves and dive in to things, making it as easy as possible to achieve their goals is definitely recommended. (So don't skimp on your tutorials and documentation! It's not just people who want to examine your code who might have to install programs and use them as steps in a complicated dance to get your device to work with their specific setup.)
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
The last time we checked in on the automation efforts, I'd done some cron job work to keep my Pi-hole updated when there was a new version, having it check once a week for new Pi-hole system software and run the update. This time around, we're back in Home Assistant, trying to get some useful information ingested for new functionality, and to refine one of the sensors to be a little more accurate.
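
The scheduling side of that is just a crontab entry, something roughly like this (the day, hour, path, and log file are illustrative; the actual check script is the subject of the earlier post):

    # Run the Pi-hole updater early Sunday morning; it exits quietly if nothing is new.
    0 3 * * 0   /usr/local/bin/pihole -up >> /var/log/pihole-autoupdate.log 2>&1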

Complex automation conditions, using API data, trend data, and JSON pathing )

Fine-tuning the Internet Outage Sensor )

[VICTORY FANFARE.]

I'd say that these things are becoming more common for me, but that's not quite true. It's because each of these new projects builds on something I've already done before that they seem easier to achieve. When I'm building the new sensors, and the URIs to poke the endpoints with, from the documentation provided and the code I've already put into place, it's copy-paste-tweak, and the new knowledge is in the tweaking. Or practicing various skills and thinking through the processes that I want to happen means the tweaking generally works once I've picked up whichever component is new for this enterprise. It might be because I'm working with a project that has both documentation and a robust forum culture, so I can look at how someone else has achieved their results and then tweak accordingly for my own purposes.
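
In Home Assistant terms, a sensor that pokes an endpoint and pulls a value out of the returned JSON generally looks something like this; the URL, JSON keys, and sensor name below are invented for illustration, not one of my actual sensors:

    sensor:
      - platform: rest
        name: outdoor_aqi
        resource: https://api.example.com/air_quality?lat=47.6&lon=-122.3
        # Walk the JSON path down to the value we care about.
        value_template: "{{ value_json.data.aqi }}"
        # Poll every 30 minutes.
        scan_interval: 1800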

If I really wanted to test myself and what I know and possibly some other things like whether I can learn or remember how to program in a language and then generate an application as well as play around with APIs, Space Traders is right there to use as a deep dive to build a client for the game. I have other things to do than build a client for a game just because I'm feeling hubristic about how I can work with APIs and parsing JSON.
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
Last time, I explored how to post data to a Home Assistant endpoint so I could use a sensor to accept the data and then have a display space on one of my devices display the accepted data, after transferring it so it could be formatted correctly for the viewing.
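
For anyone wondering what "post data to a Home Assistant endpoint" can look like in practice, one way is the REST API's states endpoint; a sketch with a placeholder host, token, and sensor name:

    # Push a value (and optional attributes) into a sensor entity.
    curl -X POST \
      -H "Authorization: Bearer $HA_TOKEN" \
      -H "Content-Type: application/json" \
      -d '{"state": "42", "attributes": {"friendly_name": "Posted Data"}}' \
      http://homeassistant.local:8123/api/states/sensor.posted_data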

Since then, Elon's Folly appears to have actually moved API access to the "paid rube" tier, so I had to clean up the running automations so the indicator lights weren't fetching something that didn't work for us any more. It's too bad that such a low-data use case was still thrown out with all the other projects, but someone has realized that the amount of money needed to keep the social media platform going is far more than they would be able to raise by other means, especially when that platform keeps turning into the kind of place that advertisers want to flee in droves. And even more so now that the "we're actually the people we claim to be" system has been chaosified. In any case, the point is that one of the indicator lights bit it because Elon is desperately trying to make money that he isn't going to make.

Building an automatic updater for a Pi-Hole )

And there we have it! I've successfully set something up to automate making sure my Pi-Hole stays up-to-date, thanks to information that it already provides through its API and a little fancy pipe wielding and bash scripting. I feel like I could do a fair amount of these things for other bits of systems, especially if I worked out how to safely and securely pass in an appropriate password without it existing as cleartext in the script or anywhere else. That said, many of the systems I might want to deploy such automation on are the kind that will bite me in the ass with something that needs an interactive component or that will engage in breaking changes that have to be carefully navigated around to ensure they don't break the system.
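
For flavor, the general shape of that check-and-update logic, as I remember it, is something like the following; the endpoint is the Pi-hole v5 admin API, and the exact keys and paths may differ from what's behind the cut:

    #!/bin/sh
    # Ask the local Pi-hole whether any component has an update available,
    # and run the updater if so.
    if curl -s "http://127.0.0.1/admin/api.php?versions" \
       | grep -qE '"(core|web|FTL)_update": *true'; then
      /usr/local/bin/pihole -up
    fi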

[VICTORY FANFARE]

Assorted tech misc, etc. )
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
After adopting a "wait and see" approach toward whether Elon's Folly would go through with the threat to cut off all free API access (and smiling a knowledgeable smile when the threat was walked back to some degree), and determining that, as best I can tell, the status lights are still working properly, I had another idea of something I could get my external brain to do for me instead of requiring me to both remember and be motivated to do it. I have a book of excerpts and quotes that I have been meaning to use more as meditative prompts, and I added looking at a random page of it as a tickybox in my daily spread. I wasn't getting consistent results, based on a host of factors, most importantly whether I had time to do the thing as part of my morning routine or before going to bed. So, rather than relying on my internal brain to do the habit thing, I thought it might be possible to make the external brain do it. If the external brain could pull a random quote from a quotes file, and then sling the selected piece of wisdom to my tablet at some point during the day, I'd be more likely to examine it and meditate on it.
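
The gist of the idea, in rough Home Assistant terms, is a command-line sensor that grabs a random line from the file plus an automation that pushes it to the tablet; the version that actually got built (below the cut) differs, and the file path, notify target, and trigger time here are placeholders:

    sensor:
      - platform: command_line
        name: daily_quote
        # Pull one random line from the quotes file, refreshed once a day.
        command: "shuf -n 1 /config/quotes.txt"
        scan_interval: 86400

    automation:
      - alias: "Send the daily quote to the tablet"
        trigger:
          - platform: time
            at: "09:00:00"
        action:
          - service: notify.mobile_app_tablet
            data:
              message: "{{ states('sensor.daily_quote') }}"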

And here we go, once again, into the rabbit hole. )

This is the end of this particular adventure in automation, getting an external brain to remind me of things to do, say, or contemplate throughout the day. It's probably not something in the specification or original conception of Home Assistant as a thing that can monitor IoT devices and then issue commands to them based on the data inputs and rules that I set up. Because Home Assistant is flexible, modular, and extensible, though, when I get wild ideas like this in my head, it turns out that there are pathways to implementation that I can find, and code snippets to examine, reuse, and modify, and as I try to make it work the way I want it to, I learn a little bit more about the systems, their limitations, how to format things, how to construct queries, how to troubleshoot errors and issues, and more. It probably also helps that I'm going at it with the understanding that I can conceive of what I want in my head, that I have been successfully tinkering with machines and their configurations since the MS-DOS era, and that the real difficulty of the task is finding enough material so I can create or modify code in such a way as to make it work.

It works. There are things I want to do to improve it, but it works, and that's what's important.

ETA: Postscript Victories )
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
There hasn't been much done since the last set of automation work. Which means that I haven't discovered anything that would be improved through scripting or automation yet, or tried to do something that would require a significant amount of research to implement, like some form of free-floating "play this song" skill that would run a search and start playing the result through a selected speaker.

Instead, what's happened is a whole lot of people decamping from Twitter and moving to other spaces. Including a fair few people in the household, some of whom are maintaining a foot in a couple of different sites for when Twitter fully becomes more hassle than it is worth, a calculation that each person gets to make themselves.

So we made the indicator lights project Even Better )

I feel like this is a slightly more clever/programmer type of solution to the problem, since it didn't require asking for any additional data from the API call (the attribute I needed came in the JSON, I just needed to have something explicitly paying attention to it) and it used something built-in as a good-enough proxy, rather than trying to do something more complex. I'm pretty proud of having been able to think it through, test it, see the results, and, perhaps most importantly, not have to have bashed my head against the templating language to figure out what I was doing wrong. Sure, it's 95% recycled working code from other things, but why reinvent the wheel, right?

At some point, I'll come across something else that needs automating, or decide I want to try and do something clever again, and you'll get to hear about it. This one didn't use the voice assistant, but those are skills that I would love to be able to do more of, to voice-control complex reactions or pass in arbitrary things and have it understand. But that requires a lot more work than the simple thing that I'm using and enjoying as a local control device.

More adventures in home automation (or clever light projects) shortly!
silveradept: A head shot of Firefox-ko, a kitsune representation of Mozilla's browser, with a stern, taking-no-crap look on her face. (Firefox-ko)
#6: Weather Lights and Bird Sites )
silveradept: A sheep in purple with the emblem of the Heartless on its chest, red and black thorns growing from the side, and yellow glowing eyes is dreaming a bubble with the Dreamwidth logo in blue and black. (Heartless Dreamsheep)
When we last left the amalgamation of code, sensors, and things, we'd greatly improved the ability of the voice assistant system to set timers of arbitrary length and put effort into distinguishing them from each other. Since then, a hub attached to a component of the systems has regularly been failing to connect to and through the wifi, and there's a switch that has been having trouble with that as well. There's now a radio for the lights that has let them connect to local control instead of a failing hub. Many of the previous light controls have worked perfectly once the lights were renamed to their proper entity identifications. Some features have been lost for the moment, but there's a distinct possibility that they can be put back together with the use of the correct API calls. The switch, we'll have to see if it can be locally controlled or if we have to get a different type of switch and locally control that, instead. Or see whether the hub disappearing allows for greater and better control of that switch.

Right now, however, we get to enjoy a saga of doing things the wrong way repeatedly until something gets done correctly and it works. I have an old television that my ex had me buy for myself so that the older, heavier model didn't have to make a trip with me (or so that I wouldn't feel like I was only spending money on things for her, or both) that still works excellently for display reasons, except for one tiny thing: the Infrared (IR) sensor on it is busted. I have no idea when this happened, but that's the reality of it. With a working IR sensor, controlling this television from Home Assistant would be much easier, as there's already a trainable IR remote in the same room as the TV.

Thus, the saga begins. )

Once again, dirty hacks done dirt cheap, and successfully, based on looking at code and then trying things until understanding appeared. It took hours to get everything in place for what seems to be a tiny amount of actual work done (some code lines changed, others added, less than two dozen lines total, I think, to the actual programs, and then another dozen for the service template I borrowed from someone else and modified to run the program I wanted on startup). It doesn't feel like a great and powerful accomplishment to have succeeded this time around, either, perhaps because of all the typo correction and having modified someone else's heavy lifting instead of generating something myself in the language of my choosing. Except, of course, that I would almost certainly be importing modules that someone else has created even if I were creating my own script, so I could go as far down as I like in that turtle pile until we get to "well, if you're not manipulating the inodes by hand with magnets, then you're not really coding." Someone put their code out there; I took it and made it work for me. That, theoretically, is software and/or systems engineering.

But much like creating art doodles or sketches, or the craft projects, or the baking and cooking that I've been doing, I can always find some reason somewhere to say that I didn't do it the Right and Proper Way, which almost always seems to be stuck in my mind as "from scratch, with no help or recipes from anyone else, and with no already-prepared ingredients," since that's apparently the mark of True Artistry. Even though I think it's not very intelligent to test students on memorization when, in the real-world applications of what they'll be doing, they'll have computers or reference works or colleagues to help them out with the correct measurements and procedures. I think it might be echoes of Giftedness equating True Artistry with already knowing, or with a lack of effort and time put in to produce flawless work. Even though the important things often take effort much more than they need brilliance, or knowing it all before beginning.

And, having finished with this project, there's another one or two of them coming around the corner, to try and replicate, as best as possible, some of the functionality with the lights that got lost when we stopped using the manufacturer's API and its links into other web-based services and instead brought them under local control so they would be reliable, instead. It's entirely possible that many of the things that were in use can be replicated, based on sensors already in place, and other ones, well, now might be the time where we start pulling in specific data from someone else's published APIs and manipulating it for our purposes.
silveradept: A kodama with a trombone. The trombone is playing music, even though it is held in a rest position (Default)
Or: There Are No Timers Remaining

So, last we left this situation, I had managed to get a single arbitrary timer working and had a list of suggested improvements to make the arbitrary timer function more smoothly and effectively. There were three suggestions, and it turns out they're all related to each other.

Here's the rest of the story )

All that just to be able to tell the thing to set a timer and have it behave appropriately. Still, I'm glad it's fine, and I learned a fair few things about the system and how to talk to it correctly in the process.
silveradept: A kodama with a trombone. The trombone is playing music, even though it is held in a rest position (Default)
#3: Castle, set a timer. )
silveradept: A librarian wearing a futuristic-looking visor with text squiggles on them. (Librarian Techno-Visor)
Oh, boy, did I have a fun time this weekend and vacation period getting things set up and trying to make my home automation more powerful and better able to respond to requests from the household. Two stellar examples emerge of problem-solving, scratching my head at the documentation, and trying things iteratively until something worked the way I intended it to. I feel like this is something I should be taking more pride in, since they were both successful, but mostly what's come out of them has been feelings of embarrassment at it taking this long, or that the solution I've hit upon is probably Ugly and Wrong, or disbelief that I managed to make it work at all, since they took so long and so much effort to do. So, I'm laying these examples out for you, partly as a bid for your opinions on the matter so that I can recalibrate reality against brainweasels, and partly because I want to document that these things really did end up working.

Example 1: The TV and the Pi )

Example 2: The Sensor Screensaver Bash-Together (in Python) )

And those are two examples of Adventures in Home Automation (and its related functions), both of which required skills, problem-solving, consultation of the documentation, and iterative design until they worked correctly. If nothing else, I hope they were entertaining stories.
silveradept: A kodama with a trombone. The trombone is playing music, even though it is held in a rest position (Default)
So, after [personal profile] seperis talked about Home Assistant's dedicated, installed device and about how good Home Assistant had been as a smart home hub, I was convinced by others in my household to buy one and see what I can do with it. I like the open source part, I really love the local control option, and there's good compatibility with devices, at least up to this point. (I would like the community Sengled integration to Just Work with the hub without requiring the username and password for the app, but it doesn't seem like that's an option, and if I really want to go that route, I can purchase a zigbee dongle and use a different community integration to get the dongle to be the hub instead.)

Explanations, methods, and how to teach your Home Assistant / Rhasspy setup to recognize when it's being sworn at and to apologize for it. )

Adventures in home automation continue! I've already done a lot to automate and to use voice to call forth things. There's more in my future, likely learning how to build integrations of my own, because eventually I would like to be able to catch the data from intents and manipulate them to my liking, and I haven't figured out how to do that with the intent scripts, if it is even possible to do so that way.
