You try to compute something a supercomputer can't - by not computing it? Instead, the formula is stored in a data structure.
But once you need to access all the values, you still have something that doesn't fit in memory and needs to be computed.
I can't judge the Java side, but I'd suggest picking a better example of how this can be useful.
Most languages force you to choose: either compute everything upfront (O(n) memory) or write complex lazy-evaluation code. Coderive gives you declarative intent with automatic optimization. You write what you mean (for i in [0 to 1Qi]), and the runtime figures out the optimal execution strategy. This is like having a JIT compiler that understands mathematical patterns, not just bytecode.
It only computes what is needed, at the right time. See this output, for example:
Enter file path or press Enter for default [/storage/emulated/0/JavaNIDE/Programming-Language/Coderive/executables/LazyLoop.cod]:
>
Using default file: /storage/emulated/0/JavaNIDE/Programming-Language/Coderive/executables/LazyLoop.cod
Testing timer() function:
Timer resolution: 0.023616 ms
```java
collatz := [1 to 1T]
for n in collatz {
    steps := 0
    current := n
    while current != 1 {
        current = if current % 2 == 0 { current/2 } else { 3*current + 1 }
        steps += 1
    }
    collatz[n] = steps
}

// On my phone, I can instantly check:
outln("27 takes " + collatz[27] + " steps") // 111 steps
outln("871 takes " + collatz[871] + " steps") // 178 steps
```
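The runtime itself isn't shown here, but the on-demand behaviour this loop relies on can be sketched in plain Python: a table-like object that stores nothing for its trillion indices and computes an entry only when it is read. The names (`LazyCollatz`, `collatz_steps`) are mine, not Coderive's.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def collatz_steps(n: int) -> int:
    """Count Collatz steps from n down to 1; runs only when first asked."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

class LazyCollatz:
    """Behaves like a [1 to 1T] table but holds no precomputed entries."""
    def __getitem__(self, n: int) -> int:
        if not 1 <= n <= 10**12:
            raise IndexError(n)
        return collatz_steps(n)

collatz = LazyCollatz()
print(collatz[27])   # 111
print(collatz[871])  # 178
```

Only the entries actually indexed are ever computed; the cache grows with the number of distinct queries, not with the declared range.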
You've highlighted exactly why my example was poorly chosen - it sounds physically impossible. Let me explain what's actually happening:
We're NOT loading 13.66 days of 8K video. That would indeed be ~2.5 petabytes.
What we are doing is creating a virtual processing pipeline that could process that much data if you had it, but instead processes only what you actually need.
The Actual Code Behind This:
```java
// 1. Virtual reference, NOT loading
video := virtual_source("8k_video.mp4") // O(1) memory - just metadata

// 2. Algorithm expressed at full scale
for frame in [0 to 33M] {     // VIRTUAL: 33 million frames
    for pixel in [0 to 33M] { // VIRTUAL: 33 million pixels per frame
        brightness = calculate_brightness(pixel) // FORMULA, not computation
    }
}

// 3. Only compute specific frames (e.g., every 1000th frame for preview)
for preview_frame in [0, 1000, 2000, 3000] {
    actual_pixels = video[preview_frame].compute() // Only NOW computes
    display(actual_pixels) // These 4 frames only
}
```
What Actually Happens in 50ms:
1. 0-45ms: Pattern detection creates optimization formulas
2. 5ms: Compute the few frames actually requested
3. 0ms: Loading video (never happens)
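Since Coderive's internals aren't shown, here is a rough Python sketch of that breakdown: the "pattern detection" step is stood in by a closed-form function of the frame index (the `(i * 17) % 256` formula is purely illustrative), and only the four requested frames are ever evaluated.

```python
# Stand-in for Coderive's pattern detection: the loop body over 33M frames
# is kept as a formula of the index instead of being executed 33M times.
def detect_pattern():
    return lambda frame_index: (frame_index * 17) % 256  # illustrative formula

brightness = detect_pattern()

# Only the explicitly requested preview frames are evaluated.
requested = [0, 1000, 2000, 3000]
preview = {f: brightness(f) for f in requested}

print(len(preview))  # 4 frames computed, not 33 million
```

The cost is four formula evaluations, independent of the declared 33M-frame range.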
Better, More Honest Example:
A real use case would be:
```java
// Video editing app on phone
// User wants to apply filter to 10-second clip (240 frames)
clip_frames := video[frame_start to frame_end] // 240 frames, virtual
filter := create_filter("vintage") // Filter definition

// Design filter at full quality
for frame in clip_frames {
    frame.apply_filter(filter) // Creates formula application
}

// Preview instantly at lower resolution
preview = clip_frames[0].downsample(0.25).compute() // Fast preview

// Render only when user confirms
if user_confirms {
    for frame in clip_frames {
        output = frame.compute_full_quality() // Now compute 240 frames
        save(output)
    }
}
```
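Coderive's `apply_filter`/`compute` split isn't specified in detail, but the confirm-before-render flow can be sketched with a thunk-style class (all names here are mine): applying a filter only records the operation, and per-frame rendering happens only on `compute()`.

```python
class LazyFrame:
    """Records operations instead of running them; renders on compute()."""
    computed_count = 0  # how many frames were actually rendered

    def __init__(self, index: int):
        self.index = index
        self.ops = []

    def apply_filter(self, name: str) -> "LazyFrame":
        self.ops.append(name)  # O(1): just remember the operation
        return self

    def compute(self) -> str:
        LazyFrame.computed_count += 1
        return f"frame {self.index} rendered with {self.ops}"

clip_frames = [LazyFrame(i) for i in range(240)]
for frame in clip_frames:
    frame.apply_filter("vintage")   # 240 recorded ops, no pixels touched

preview = clip_frames[0].compute()  # only now is a single frame rendered
print(LazyFrame.computed_count)     # 1 of 240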
The Real Innovation:
Separating algorithm design from data scale.
You can:
· Design algorithms assuming unlimited data
· Test with tiny samples
· Deploy to process only what's needed
· Scale up seamlessly when you have infrastructure
My Mistake:
The '50ms for 937 billion pixels' claim was misleading. What I should have said:
'50ms to create a processing algorithm that could handle 937 billion pixels, then instant access to any specific pixel.'
The value isn't in processing everything instantly (impossible), but in designing at scale without scale anxiety.
It's because I only have a phone to use for coding, though I am planning to make it more general. Mobile development is just one of the main goals of this language.
> Conclusion: Coderive doesn't just make loops faster—it redefines what's computationally possible on commodity hardware.
I mean this as kindly as possible, but please don’t say things like this if you want to be taken seriously. Computer languages cannot possibly change what is possible on a given machine for the simple reason that whatever they are doing had to previously be possible in assembly on the same machine.
I don’t mean to overly discourage you. Lazy execution can be very useful, but it’s also not clearly new or hard to get in other languages (although it would require different syntax than an idiomatic for loop most of the time). It may help to try to pick an example where the lazy execution is actually exercised. Preferably one that would be hard for an optimizing compiler to optimize.
I would also not recommend claiming iteration if you also claim 50ms, since that’s clearly impossible regardless of memory consumption, so you have to optimize away or defer the work in some way (at which point iteration is no longer occurring).
For these examples, I think you would just express the code as a function taking i instead of pre-populating the array. This doesn’t seem hard at least for the provided examples, and has the benefit that it can be opted into when appropriate.
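The reviewer's suggestion - express the table as a function of `i` rather than pre-populating an array - is easy to demonstrate in Python (the body of `analyze_case` is a placeholder):

```python
from functools import lru_cache

def analyze_case(i: int) -> int:
    return (i * i) % 97  # placeholder for the real per-case analysis

# A plain function of i replaces the 1T-element array outright ...
results = analyze_case

# ... and memoization is opted into only where repeated access pays off:
cached = lru_cache(maxsize=None)(analyze_case)

print(results(10**12 - 1) == cached(10**12 - 1))  # True, O(1) memory either way
```

This gets the same "check any index immediately" behaviour with no special runtime, which is exactly the reviewer's point.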
You're right about hardware limits, but wrong about what's being 'redefined.' Coderive redefines developer productivity for a class of computationally hard problems:
Before Coderive:
To explore 1 trillion cases, you'd need:
· A cluster of machines
· Distributed computing framework (Spark/Hadoop)
· More time for setup
With Coderive:
```java
results := [1 to 1T] // Conceptually 1 trillion
for i in results {
    results[i] = analyzeCase(i)
}
// Check interesting cases immediately
```
It's not about computing faster than physics allows. It's about thinking and exploring with ease without infrastructure constraints.
Yeah, I should have been more specific: it takes an imperative-like syntax but doesn't actually iterate internally; instead it extracts a formula it can use to process the loop much faster. Here is an actual output, too:
Enter file path or press Enter for default [/storage/emulated/0/JavaNIDE/Programming-Language/Coderive/executables/LazyLoop.cod]:
>
Using default file: /storage/emulated/0/JavaNIDE/Programming-Language/Coderive/executables/LazyLoop.cod
Testing timer() function:
Timer resolution: 0.023616 ms
Testing condition evaluation:
2 % 2 = 0
2 % 2 == 0 = true
3 % 2 = 1
3 % 2 == 0 = false
24000 % 2 = 0.0
24000 % 2 == 0 = true

Conditional formula creation time: 2.657539 ms
Results:
arr[2] = even
arr[3] = odd
arr[24000] = even
arr[24001] = odd

=== Testing 2-statement pattern optimization ===
Pattern optimization time: 0.137 ms
arr2[3] = 14 (should be 3*3 + 5 = 14)
arr2[5] = 30 (should be 5*5 + 5 = 30)
arr2[10] = 105 (should be 10*10 + 5 = 105)

Variable substitution time: 0.064384 ms
arr3[4] = 1 (should be 4*2 - 7 = 1)
arr3[8] = 9 (should be 8*2 - 7 = 9)

=== Testing conditional + 2-statement ===
Mixed optimization time: 3.253846 ms
arr4[30] = 30 (should be 30)
arr4[60] = 121 (should be 60*2 + 1 = 121)

=== All tests completed ===
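The interpreter's optimizer itself isn't shown, but conceptually each test above replaces a loop body with a closed-form function of the index. Here is a Python sketch of the formulas the printed expectations imply (the `i < 50` threshold in the conditional case is my guess from the two data points shown):

```python
# Each "optimized" array is represented as a formula of the index,
# reconstructed from the expected values in the test output.
arr2 = lambda i: i * i + 5                    # 2-statement pattern
arr3 = lambda i: i * 2 - 7                    # variable substitution
arr4 = lambda i: i if i < 50 else i * 2 + 1   # conditional + 2-statement (threshold assumed)

print(arr2(3), arr2(5), arr2(10))  # 14 30 105
print(arr3(4), arr3(8))            # 1 9
print(arr4(30), arr4(60))          # 30 121
```

Indexing such an "array" is then a single formula evaluation, which is why the reported creation times are milliseconds regardless of the declared range.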
---
From this unit test (excerpt):

```java
share LazyLoop {
    share main() {
        // Test timer() first - simplified
        outln("Testing timer() function:")
        t1 := timer()
        t2 := timer()
        outln("Timer resolution: " + (t2 - t1) + " ms")
        outln()

        collatz := [1 to 1T]
        for n in collatz {
            steps := 0
            current := n
            while current != 1 {
                current = if current % 2 == 0 { current/2 } else { 3*current + 1 }
                steps += 1
            }
            collatz[n] = steps
        }

        // On my phone, I can instantly check:
        outln("27 takes " + collatz[27] + " steps") // 111 steps
        outln("871 takes " + collatz[871] + " steps") // 178 steps
```