
  • Like I said, impressive work.
    Converting science to shaders is an art.

    I guess your coding standards follow scientific conventions.
    And I guess it depends on your audience.

    I guess the perspective is that science/maths formulae are meant to be manipulated, so writing out descriptive names is only done at the most basic levels of understanding. Most of the workings are done on paper/boards or by hand. Extra letters are not efficient.
    Whereas programming is meant to be understood and adapted. So self-describing code is key! Most workings are done within an IDE with autocomplete. Extra letters don’t matter.

    If you are targeting the science community with this, a paragraph about adapting science to programming will be important.
    Scientists will find your article and go “well yeh, that’s K2”. But explaining why these aren’t named as such will hopefully help them to produce useful code in the future.

    The fun of code that spans disciplines!

    Edit:
    On a side note, I am terrible at coding standards when I’m working with a new paradigm.
    First is “make it work”, after which it’s pretty much done.
    Never mind consistent naming conventions and all that.
    The fact you wrote up an article on it is amazing!
    Good work!


  • Interesting.
    I love creative applications of shaders. They are very powerful.

    This is my opinion only, but I’m willing to discuss it.
    And I’ll preface this by saying: if I tried to publish a scientific paper and my formulas used a bunch of made-up symbols that aren’t standardised, I imagine it would get a lot of corrections in peer review.

    So, from a programming perspective: don’t use abbreviations.
    Basically, work on the naming.

    I can read that TAU is the diffusion rate thanks to a comment. Then I dig further into the code trying to figure something out, and I encounter tau. Now I have to remember that tau is explained by a comment, instead of by the name of the variable. Why not call it diffusionRate, then have a comment noting that this is TAU?
    A science person will be able to find the comment indicating where it is initialised and be able to adjust it without having to know programming. A programming person will be able to understand what it does without having to know science things.
    Programming is essentially writing code to be read.
    It’s written once and read many times.

    Similar with the K variables.
    K is reactionRate.
    K1 is reactionKillRate.
    K2 is reactionFeedRate.
    Scientists know what these are. But I would only expect to see variables like this in some bizarre nested loop, and I would consider it a code smell.

    The inboundFlow “line” has a lot going on with little explanation (except in comments). The calculation is already happening and going into memory. Why not name that memory with variables?
    Things like adjacentFlow and diagonalFlow to essentially name those respective lines.
    Could even have adjacentFlowWeight and diagonalFlowWeight for some of those “magic numbers”.
    Comments shouldn’t explain what is happening, but why it’s happening.
    The code already explains what is happening.
    So have a comment indicating what the overall formula is and how it relates to the variables used; the variables then explain what each part of it is.
    If a line is getting too complicated to be easily understood, then splitting it out into further variables (or even a function call, though not applicable here) will help.
    A rough, untested sketch of what I mean is below. I don’t have the article’s code in front of me, so the names, weights and structure are guesses; the point is the naming.
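    ```glsl
    // Illustrative only: names and weights are my guesses, not the article's actual code.
    const float diffusionRate    = 1.0;   // was TAU
    const float reactionKillRate = 0.062; // was K1
    const float reactionFeedRate = 0.055; // was K2

    const float adjacentFlowWeight = 0.2;  // orthogonal neighbours
    const float diagonalFlowWeight = 0.05; // diagonal neighbours

    // 9-point stencil: weighted flow from the neighbours minus the centre cell.
    vec2 inboundFlow(sampler2D state, vec2 uv, vec2 texel) {
        vec2 adjacentFlow = texture(state, uv + vec2( texel.x, 0.0)).rg
                          + texture(state, uv + vec2(-texel.x, 0.0)).rg
                          + texture(state, uv + vec2(0.0,  texel.y)).rg
                          + texture(state, uv + vec2(0.0, -texel.y)).rg;

        vec2 diagonalFlow = texture(state, uv + vec2( texel.x,  texel.y)).rg
                          + texture(state, uv + vec2( texel.x, -texel.y)).rg
                          + texture(state, uv + vec2(-texel.x,  texel.y)).rg
                          + texture(state, uv + vec2(-texel.x, -texel.y)).rg;

        // Neighbour weights sum to 1.0, so the centre cell gets weight -1.0.
        return adjacentFlowWeight * adjacentFlow
             + diagonalFlowWeight * diagonalFlow
             - texture(state, uv).rg;
    }
    ```
    The tau/K1/K2 symbols can still live in the comments for anyone cross-referencing the maths.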

    A final style note, though I’m not certain on this one.
    I presume 1. and 1.0 are identical, both representing the float value 1.0?
    In which case, standardise to 1.0
    There are instances of 2.0 and 2.
    While both are functionally identical, something like (1.0, 1.0, 1.0) makes it easier to spot that these are floats, and to spot typos/stray commas, compared to (1., 1., 1.,).
    IMO, at least






  • At the homelab scale, Proxmox is great.
    Create a VM, install Docker and use docker compose for the various services.
    Create additional VMs when you feel the need. You might never feel the need, and that’s fine. Or you might want a VM per service for isolation purposes.
    Have Proxmox take regular backups of the VMs.
    Every now and then, copy those backups onto an external USB hard drive.
    Take snapshots before, during and after tinkering so you have checkpoints to restore to. Take a fresh backup onto the external USB drive once you’re happy with the tinkering.
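    If you prefer the CLI for the backup side, it’s a one-liner (the VM ID and storage name here are placeholders):

    ```sh
    # One-off backup of VM 100 to the 'local' storage while the VM keeps running
    vzdump 100 --mode snapshot --storage local --compress zstd
    ```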

    Create a private git repository (on GitHub or whatever), and use it to store your docker-compose files, related config files, and little readmes describing how to get that compose file to work.
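    For example, something like this as a starting point (the service and image are just placeholders; swap in whatever you actually run):

    ```yaml
    # docker-compose.yml kept in the git repo, one folder per service
    services:
      whoami:
        image: traefik/whoami:latest
        container_name: whoami
        restart: unless-stopped
        ports:
          - "8080:80"
    ```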

    Proxmox solves a lot of headaches. Docker solves a lot of headaches. Both are widely used, so plenty of examples and documentation about them.

    That’s all you really need to do.
    At some point, you will run into an issue or limitation. Then you have to solve that problem, and update your VMs, compose files, config files, readmes and git repo.
    Until you hit those limitations, what’s the point in over-engineering it? It’s just going to overcomplicate things. I’m guilty of this.

    The right time to automate any of the above will become apparent: it’s when the tinkering stops being fun.

    The best thing to do to learn all these services is to comb the documentation, read GitHub issues, browse the source a bit.


  • Bitwarden is cheap enough, and I trust them as a company enough that I have no interest in self hosting vaultwarden.

    However, all these hoops you have had to jump through are excellent learning experiences that are a benefit to apply to more of your self hosted setup.

    Reverse proxies are the backbone of hosting services these days.
    Learning how to inspect docker containers, source code, config files and documentation to find where critical files are stored is extremely useful.
    Learning how to set up more useful/granular backups beyond a basic VM snapshot in proxmox can be applied to any install anywhere.
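    For example, something like this (using vaultwarden as the container name; adjust to whatever yours is called):

    ```sh
    # Where are this container's volumes mounted on the host?
    docker inspect --format '{{ json .Mounts }}' vaultwarden

    # Poke around inside the container to confirm what actually lives there
    docker exec -it vaultwarden ls -la /data
    ```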

    The most annoying thing about a lot of these is that tutorials are “minimum viable setup” sorta things.
    Like “now you have it set up, make sure you tune it for production” and it just ends.
    And other tutorials that cover the next step, getting things production ready, often reference outdated versions or assume a different core setup, so they don’t quite apply.

    I understand your frustrations.



  • In France, no one spoke English even though I spoke loudly and slowly

    Haha, reminds me of a holiday ages ago in France.
    Someone left their handbag behind or something, and my friend said “I’ll sort it out, I know French”. To be fair, he did. But when I went back to tell him where we ended up, he was speaking slowly and loudly to the poor French person.

    Which reminds me of another time in France, having breakfast. I ordered “orange juice” and the waiter looked confused. So I said it again slower, and his face lit up as he said “ah, jus d’orange”.



  • I felt like adding something about the specific case of 180° between edges at a vertex.
    Makes sense.
    And I guess too many vertices means an open set of edges (i.e. not closed, thus not a shape).
    I was kinda hoping for a strange edge case, like a Möbius strip or Klein bottle.

    I guess a Möbius strip is a 2d representation of a 1d paradigm. And a Klein bottle is a 3d representation of a 2d paradigm.
    It would be too much to ask for a 1d representation of a ??d paradigm.


  • I feel my comment adds to the discussion and wants more details.
    But it was too simply phrased.
    I guess the details of such a question should be obvious. And if you need the details, the question doesn’t actually add to the discussion… It just seems idiotic!

    I felt like there might be a really cool scenario where a vertex isn’t considered a vertex.
    Like, there actually might be some case on a 2d plane where “well, actually” applies.
    I’m fine being wrong




  • Yeh, consoles and the engineering side generally have (somewhat) come down in price. But it is more expensive to actually use that gear in a live gig.
    I don’t know anyone that would mix on a laptop for a live music gig (as opposed to a band at a conference/function) any larger than solo acoustic for 50-100 people.
    It’s not that a physical control surface would make it sound better (well, especially with preproduction), but that a physical control surface allows you to react to the music faster. Anything more than 2 button presses away is too far for a live gig with any stakes.
    Yes the technology is there, and it is doable. But just because you can, doesn’t mean you should. You are introducing massive disadvantages before you even start the gig.


    Some comments on the increased complexity…

    Wireless systems are more prevalent, along with IEMs. An 8-way stereo IEM system is a lot more than an 8-way monitor system: more expensive, and a lot more planning.

    These days, it is much more common to have DSP amps, with a channel (or even multiple channels) per box in an array, and the arrays are much bigger, with additional fills and delays.
    I’ve seen some of the daddy racks used on tours: 2 or 3 x 30-40U racks of amps and systems per PA hang.

    The rigging for the PA is more exact, requiring precise measurements (both physical and spectral), and it needs someone to actually run the PA.

    All of this allows an install closer to the ideal PA for the gig, with tooling and simulation to plan it in advance. That requires a much broader skill set and more planning than throwing in whatever PA you could hire and walking around until it’s good enough.

    I’d say a tour 30-40 years ago was unlikely to have a dedicated systems tech dealing only with the PA. They’d likely supervise the install and some tuning, then be a patch monkey or monitor engineer or something. Or maybe just chill out until the derig.
    These days, it’s not uncommon to have someone continually monitoring the PA, amps, desk racks etc. and it is as much a skill as engineering the actual band.

    20,000 people in a stadium having paid $20 a ticket is $400k budget per show. Seems like a lot, but a venue is going to cost anywhere between $100k and $500k per night.
    100 crew/techs for the in, show & out is going to be $25k to $50k. Equipment hire is going to be anywhere from $50k to $500k.
    Never mind rehearsal and pre-production costs.
    There will be discounts for multiple nights and longer term hires, however anything like an actual tour has a lot of additional accommodation, travel and logistics costs & planning.

    Audience members going to a gig at a large stadium will have certain expectations, regardless of cost.
    Tech crew are going to have certain expectations working at a stadium level gig. These are professionals at (most of the time) the peak of their career.

    While the equipment cost might be somewhat comparable (purchasing a couple of Midas desks, outboard, splits and snakes would’ve been $100k to $250k; a redundant SD10 system with a monitor desk might be $150k to $350k and a hell of a lot more capable, analogue vs digital sound arguments aside), it generally needs more people and more skill to use and run these systems (analogue splits can be used drunk/hungover. Dante or MADI have many layers of complication).
    I’d say digital desks are a bit more fragile than analogue - when digital dies it’s dead, when analogue dies it sounds shit - which will increase the hire cost.
    And by the time you have a desk that can make a live performance sound like a studio album, you also need a PA to back that up, and you need the kit to make sure the band is comfortable playing to that level.
    Also, to attract reliable talent to actually work the gigs (not just the band and their requirements), a certain level of equipment is expected.

    Hell, I’ve been on gigs with dedicated comms techs. All they look after is networking and voice comms systems, and the kit they deploy makes a video engineer’s eyes water (you know it’s a good gig when you see anything Riedel).

    Modern gigs are on another level of complexity compared to the $20 gigs of Elvis’ time.
    Even $40 a ticket in a 20k stadium doesn’t leave much wiggle room.
    Then you have profits for the band and organisers. And the demand will drive up prices.

    Like I said, I think current big gig prices are exploitative.
    But the comparison to gigs from decades ago isn’t a good one. Production capabilities are much higher, expectations are much higher, abilities and tech is much more refined.
    You have to remember bands like The Beatles, Queen and Pink Floyd would be drowned out by the fans. Pretty shitty gig if you can’t hear the band.

    And that’s saying nothing of the lighting, video, production and artist management departments.


    Sorry for the ramble. Halfway through a bottle of wine!
    As much as I love working a GOOD budget gig, I’d rather have the equipment to be able to operate at the level I’m capable of - to the point that I no longer work the shitty gigs.


  • I think Ticketmaster and Live Nation absolutely are to blame for hyperinflated ticket prices.
    The fact that scalpers also operate is reprehensible.

    I will however say that production values of a modern gig are many factors higher than they were decades ago.
    Safety standards are much higher, requiring more crowd control, more planning, more specialised equipment (both for the venue, and for the production).
    It’s no longer “a stack of speakers and a mixing desk with 8 channels”. PA design and installation is both a science and an art in itself to achieve an even frequency response throughout as much of the venue as possible. Never mind the production of the actual music.
    It’s no longer “120 par cans over the stage and a bunch of power”, it’s a huge quantity of intelligent lighting fixtures with months of planning and days of programming.
    Never mind the video side of things, which requires months of preproduction with kit that would make the lighting or sound budget look like Fisher-Price.
    And all of this has to be built and run with redundancy, so the equipment list is essentially doubled, and likely a lot of spares.
    Venue costs are also higher. So all of that production has to be orchestrated to go in and come out in as fast a time as possible. And packed on and off trucks in specific ways to facilitate this. Logistics of a tour are intimidating.

    There are also entire university degrees based around these roles in production, and people make a career out of touring. Places on tours are highly sought after.

    Gigs are no longer just a band playing. There is a lot more show to it.
    Whether this is actually what fans want is up for debate. And if it actually makes the experience better is also up for debate.

    Ticket prices are obscene, and I don’t think they are in line with the production provided.
    However, if the live music is in demand then there will be people that pay. A band can only play so many gigs, and venues are limited.
    Some of the increased cost can be attributed to making the job easier and safer for all the crew, staff and fans.
    Some of the increased cost can be attributed to “putting on a better show”.
    Some of the cost can be attributed to some of these jobs moving from “passion and hobby” to “a career”.
    Some of these costs can be attributed to the increased skill level required to put on these gigs.
    Some of these costs can be attributed to general cost-of-living and inflation increases.
    But I think most of the costs can be attributed to the exploitative behaviour of Ticketmaster etc.