RSS SL
Second Life Community - All Activity
-
LL "outgunned," wants more creators in-world. Tell me your use cases that the 64K limitation has made hard or impossible to build.
Brad Oberwager reportedly said in a paywalled NWN interview: “We went to creators.” “You do the landing page, we can’t get people to stick around.” “You build the game mechanics. A thousand creators are out there, we’re outgunned.” “Nobody wants Second Life to be gamified.” “But you’re definitely going to like obsolescence even less. We just have to bite the bullet. We have a slowly melting iceberg that we’ve turned around.” “That loud minority has a massive impact on our financials and internal decisions.” “Give me some permission [to add game mechanics].”

You're outgunned? OK, let us do our part then. 👍

You don't want obsolescence? OK, that's good, because we don't want that either. PBR is good, projector lights are good, linkset memory is good, mesh is good. We can see that this platform is making progress in certain ways. 👍

You want us to build game mechanics? OK, well in that case we need to have a more targeted discussion. We're going to need you to stop making it unnecessarily hard to build next-generation content.

The 64K limitation has been in place since Mono was first introduced, nearly 20 years ago. Memory was far more expensive then (even after accounting for the move from colos to AWS), but the limit was never lifted. Lua will be an incremental improvement, and is generally a good thing, but it will not solve the main problem.

LSL is already very easy to learn for the vast majority of programmers because it's C-like. Going from C/C++/Java/TypeScript and similar languages to LSL is very easy. The state machine is trivial to learn and highly useful, so not a problem. The llDoThing() API calls are an expected part of getting into any new framework, no particular issue there either... so, what then? The main things that make SL scripting hard to work with are the 64K limit, and a broadcast-only IPC mechanism (link messages) that doesn't allow direct method invocation, and that the 64K limit pushes you into overusing.
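The broadcast-only IPC problem is easiest to see outside of LSL. Here is a minimal Python sketch of the pattern (the `Script` class and channel scheme are hypothetical stand-ins for illustration, not LL's implementation): with llMessageLinked-style delivery, every script receives every message and has to spend cycles deciding to ignore it, where a direct method invocation would touch only its target.

```python
# Hedged sketch of broadcast-only IPC, as with llMessageLinked: there is no way
# to address one script, so the region delivers every message to every script.

class Script:
    """Stand-in for one LSL script with a link_message handler."""
    def __init__(self, channel):
        self.channel = channel
        self.received = []

    def link_message(self, channel, payload):
        if channel != self.channel:   # every script pays this filtering cost
            return
        self.received.append(payload)

scripts = [Script(ch) for ch in range(4)]

def broadcast(channel, payload):
    for s in scripts:                 # ALL handlers run, wanted or not
        s.link_message(channel, payload)

broadcast(2, "gear down")
# Only scripts[2] keeps the message; the other three ran code just to discard it.
```

With a direct-invocation mechanism, the caller would simply call the one handler it means to reach; the filtering loop, and the per-script wake-ups it implies, would disappear.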
It looks like Lua could make the IPC part better. However, this does not solve the fundamental problem, which is the 64K limit. Linkset memory takes some memory pressure off, and allows some very useful ways of parallelizing behavior; but by itself, it will not be enough. Nor will Lua's smaller int and float types. Those things get you some headroom, but they simply cannot solve the same set of problems that larger code+data in one script could solve.

Surely, 20 years is long enough. Surely, this can be revisited. It's already known that getting more creators to build the next big thing is a management priority, because management is on record about that. So, when is that priority going to turn into plans that we, the developers you're asking to help carry this platform into the future, can see?

Here are two concrete examples I've faced in the last year:

1: Aircraft physics simulation. I've been working on a Blade Element Theory-based physics engine for aircraft, which would get flight simulation in SL into the same ballpark as X-Plane and better than MS Flight Simulator. This involves storing information on individual wing segments, so there's a fair amount of list allocation, writing, and reading involved. I can fit the linear half of the physics into one 64K LSL script, but only just barely. Angular (moments) won't fit in the same script, because while it accesses the same data, it has to have substantially different code. (Calculating linear force requires substantially different math than calculating torques. You get into matrix math, etc.) So, now I have to have a script that processes inputs and drives the linear and moments scripts, then integrates their responses, introducing significant latency and coordination overhead. What could be done in one script has to be split up and divvied out between endpoints that can only communicate through the region's event system, which is orders of magnitude slower than direct method invocation.
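To make the linear-vs-moments split concrete, here is a hedged Python sketch of the math involved (the segment offsets and lift numbers are invented for illustration; this is not the author's engine): both halves consume the same per-segment data, but the moments half needs cross products, and in a full implementation matrix math, that the linear half never touches. Same data, different code, which is exactly why the two halves won't fit in one 64K script together.

```python
# Illustrative only: per-wing-segment forces (linear) vs. moments (r x F),
# computed from the SAME segment data with different math.

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

# (offset from center of gravity, lift force vector) per wing segment
segments = [
    ((-2.0, 0.0, 0.0), (0.0, 0.0, 120.0)),  # left wing segment
    (( 2.0, 0.0, 0.0), (0.0, 0.0, 118.0)),  # right segment, slightly less lift
]

# Linear half: just sum the forces.
total_force = [0.0, 0.0, 0.0]
for _, f in segments:
    for i in range(3):
        total_force[i] += f[i]

# Angular half: same inputs, but cross products -> a net rolling moment.
total_torque = [0.0, 0.0, 0.0]
for r, f in segments:
    t = cross(r, f)
    for i in range(3):
        total_torque[i] += t[i]
```

The asymmetric lift sums to a clean vertical force for the linear script, while the moments script extracts a small residual torque from the very same list, which is the quantity the linear code has no use for.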
There's also a LOT of copypasta, because they have to share some basic library functions. If the linear and moments scripts are not working on the same physics state, they drift, and very weird things happen, because the scripts think slightly different things are happening to the aircraft. This can cause disagreements between them to amplify, leading to bizarre accumulations of force and torque.

Troubleshooting physics calcs was already challenging when an earlier, pre-BET version of the physics code had both linear and moments in one script. Now that they have to be in two scripts, I must spend more time working on weird coordination issues, and every time I add more code I risk sending the moments script into stack-heap collision territory. The 20Hz cap on link messages and timer events is a significant factor here, as is the delicate brain surgery that's necessary to parallelize physics calculations in this way. Time that could be spent debugging behavior is instead spent figuring out how to deal with the latest stack-heap collision, which is a problem nearly every day that I work on this.

The linear script is working super well at this point. It's VERY convincing, and I say this as a licensed private pilot who can tell the difference from RL experience. That realism is SATISFYING. However, without a working torque calculation system, the picture is only half-complete. At this point, progress is stalled, because I have to spend more time fighting this 64K limitation than on figuring out how to make the physics work; and more than a month with no traction, because I keep having to fight LSL instead of troubleshooting behavior, has left me feeling exhausted.

Meanwhile, every lamp in the sim that needs a kilobyte or so of code and data is getting a whole 64K, almost all of which it doesn't need.
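The drift mechanism can be sketched in a few lines of Python (the timestep numbers are made up to show the mechanism, not measured from SL): when two scripts each integrate their own copy of the same state at slightly different effective rates, the disagreement grows steadily instead of cancelling out.

```python
# Two copies of "the same" physics state, integrated separately.
v_linear = 0.0    # velocity as the linear-forces script sees it
v_moments = 0.0   # velocity as the moments script sees it
accel = 1.0       # m/s^2, constant for simplicity

for step in range(100):            # 5 simulated seconds at 20 Hz
    v_linear += accel * 0.050      # linear script ticks exactly on time
    v_moments += accel * 0.052     # moments script's events arrive slightly late

drift = abs(v_moments - v_linear)  # grows every tick; never self-corrects
```

A 2ms-per-tick disagreement is invisible on any single frame, but after five simulated seconds the two scripts already disagree by about 0.2 m/s, and each script then computes forces or torques against a state the other no longer believes in.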
Where is the concern for all the opex being burned on that, plus CPU time running the queue and context-switching between scripts that have to be massively parallelized, and the opportunity cost of hindering devs like me who just want to deliver next-gen user experiences? Giving the lamps their 1K, my auxiliary scripts maybe 30-60K, and my one physics script 200K or so, would result in a big drop in RAM consumption, and relieve pressure on the event queue as well.

2: Robotic lighting platform. Moving head lamp, like you'd see in a concert/theater/dance club. The use of linkset memory as a "register file" has made this a lot easier, but ultimately the CNC script has to be the single source of llSetLinkPrimitiveParamsFast() calls or else there's a lot of ugly stuttering instead of smooth motion. That one script has to have a G-code interpreter, motion planner, and inverse kinematics, in addition to compositing all the prim pos/rot and projector light updates, in order for everything to work smoothly. I'm nearing two months of dev time on this, and I would say a good 3-4 weeks have been spent on figuring out how to shoehorn enough logic into the main CNC engine to make it all work smoothly. That's 3-4 weeks I've had to spend on "being clever" instead of shipping features.

I've also had to implement the touch menu in no fewer than four scripts. I tried doing a dynamic parser that would store the menu defs in linkset memory, but the callbacks wound up taking so much space that I still had to have three scripts anyway. I simply didn't want to burn any more time on that, so I gave up and went back to having all four menu scripts.

So, every time there is a link message, the region has to raise the link message handlers in every currently-running state of my 10+ scripts, which then have to run some code to figure out that they don't care about the message. That's queue pressure and CPU time that LL has to factor in. This matters to the robot's framerate as well.
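A back-of-envelope Python calculation makes that queue pressure concrete (the script count and message rate are assumptions taken from the figures in this post, not measurements):

```python
# Illustrative arithmetic: every link message wakes every script in the linkset.
scripts = 10          # scripts in the linkset, each with a link_message handler
msgs_per_sec = 20     # link messages arriving at the 20 Hz cap
deliveries = scripts * msgs_per_sec   # handler invocations the region must run
useful = 1 * msgs_per_sec             # only one script cares about each message
wasted = deliveries - useful          # wake-ups that exist only to say "not mine"
```

Even at these modest numbers, nine out of every ten handler invocations are pure overhead; consolidating scripts into fewer, larger ones attacks exactly that ratio.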
If I could allocate more RAM, I could have one touch menu script instead of four, combine gobo/lamp/projector logic into one script (instead of two), migrate certain motion-generation and interpolation algorithms into the CNC script, and possibly consolidate one or two other scripts, while leaving the rest as they are for ideal separation of concerns. This would result in lower RAM, CPU, and queue pressure for the region. -
SecondLife Addiction and Thoughts
Little invested, little return. -
SecondLife Addiction and Thoughts
I don't know why people overcomplicate everything. -
Gaming Laptop Recommendation
MT/s, yes. CL, no. Some vendors don't list the full set of memory ratings, but the decent ones will at least list CL and MT/s or MHz. CL is "CAS Latency", or in normalspeak, "how long it takes for RAM to respond to a request". Lower numbers are better here, but only when comparing modules at the same speed: the actual latency in nanoseconds works out to CL × 2000 / MT/s, so for example DDR4-3200 CL16 and DDR5-6000 CL30 both come to 10 ns. -
SecondLife Addiction and Thoughts
That makes it easy then.
