Compare commits

...

13 Commits

16 changed files with 1506 additions and 285 deletions


@ -14,7 +14,8 @@ Peon is a multi-paradigm, statically-typed programming language inspired by C, N
features such as automatic type inference, parametrically polymorphic generic types, pure functions, closures, interfaces, single inheritance,
reference types, templates, coroutines, raw pointers and exceptions.
The memory management model is rather simple: a Mark and Sweep garbage collector is employed to reclaim unused memory, although more garbage
collection strategies (such as generational GC or deferred reference counting) are planned to be added in the future.
Peon features a native cooperative concurrency model designed to take advantage of the inherent waiting of typical I/O workloads, without the use of more than one OS thread (wherever possible), allowing for much greater efficiency and a smaller memory footprint. The asynchronous model used forces developers to write code that is both easy to reason about, thanks to the [Structured concurrency](https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/) model that is core to peon's async event loop implementation, and works as expected every time (without dropping signals, exceptions, or task return values).
@ -27,21 +28,13 @@ In peon, all objects are first-class (this includes functions, iterators, closur
**Disclaimer 1**: The project is still in its very early days: lots of stuff is not implemented, a work in progress or
otherwise outright broken. Feel free to report bugs!

**Disclaimer 2**: Currently, the REPL is very basic (it adds your code to the previous input plus a newline, as if it were compiling a new file every time),
because incremental compilation is designed for modules and doesn't play well with the interactive nature of a REPL session. To show the current state
of the REPL, type `#show` (this prints all the code that has been typed so far); to reset everything, type `#reset`. You can also type
`#clear` if you want a clean slate to type in, but note that it won't reset the REPL state. If a new piece of code fails to compile, the REPL does not add it to the input, so you can correct it and recompile without having to exit the program and start from scratch. You can move through the code using the left/right arrows and go to a new line by pressing Ctrl+Enter. The up/down keys move through the input history (which is never reset). Also note that UTF-8 is currently unsupported in the REPL (it will be soon, though!)

**Disclaimer 3**: Currently, the `std` module has to be _always_ imported explicitly for even the most basic snippets to work. This is because intrinsic types and builtin operators are defined within it: if it is not imported, peon won't even know how to parse `2 + 2` (and even if it could, it would have no idea what the type of the expression would be). You can have a look at the [peon standard library](src/peon/stdlib) to see how the builtins are defined (be aware that they heavily rely on compiler black magic to work) and can even provide your own implementation if you're so inclined.
### TODO List

In no particular order, here's a list of stuff that's done/to do (might be incomplete/out of date):

- User-defined types
- Function calls ✅
- Control flow (if-then-else, switch) ✅
- Looping (while) ✅
@ -57,7 +50,6 @@ In no particular order, here's a list of stuff that's done/to do (might be incom
- Named scopes/blocks ✅
- Inheritance
- Interfaces
- Indexing operator
- Generics ✅
- Automatic types ✅
- Iterators/Generators
@ -76,12 +68,14 @@ In no particular order, here's a list of stuff that's done/to do (might be incom
Here's a random list of high-level features I would like peon to have and that I think are kinda neat (some may
have been implemented already):

- Reference types are not nullable by default (must use `#pragma[nullable]`)
- The `commutative` pragma, which lets you define just one implementation of an operator
  and have it apply both ways (for example, defining addition for an integer and a float
  would also cover adding a float and an integer)
- Easy C/Nim interop via FFI
- C/C++ backend
- Nim backend
- [Structured concurrency](https://vorpus.org/blog/notes-on-structured-concurrency-or-go-statement-considered-harmful/) (must-have!)
- Simple OOP (with multiple dispatch!)
- RTTI, with methods that dispatch at runtime based on the true (aka runtime) type of a value
- Limited compile-time evaluation (embed the Peon VM in the C/C++/Nim backend and use that to execute peon code at compile time)
@ -134,5 +128,7 @@ out for yourself. Fortunately, the process is quite straightforward:
automate this soon, but as of right now the work is all manual (and it's part of the fun, IMHO ;))

__Note__: On Linux, peon will also look into `~/.local/peon/stdlib`

If you've done everything right, you should be able to run `peon` in your terminal and have it drop you into the REPL. Good
luck and have fun!


@ -1,7 +1,8 @@
# Peon - Bytecode Specification

This document describes peon's bytecode as well as how it is (de-)serialized to/from files and
other file-like objects. Note that the segments in a bytecode dump appear in the order they are listed
in this document.

## Code Structure
@ -9,12 +10,12 @@ A peon program is compiled into a tightly packed sequence of bytes that contain
the VM needs to execute said program. There is no dependence between the frontend and the backend outside of the
bytecode format (which is implemented in a separate serializer module) to allow for maximum modularity.

A peon bytecode file contains the following:

- Constants
- The program's code
- Debugging information (file and version metadata, module info; optional)

## File Headers
@ -34,7 +35,7 @@ in release builds.
### Line data segment

The line data segment contains information about each instruction in the code segment, associating each one
1:1 with a line number in the original source file (via run-length encoding) for easier debugging. The segment's
size is fixed and is encoded at the beginning as a sequence of 4 bytes (i.e. a single 32 bit integer). The data
in this segment can be decoded as explained in [this file](../src/frontend/compiler/targets/bytecode/opcodes.nim#L29), which is quoted
below:
@ -57,7 +58,7 @@ below:
This segment contains details about each function in the original file. The segment's size is fixed and is encoded at the
beginning as a sequence of 4 bytes (i.e. a single 32 bit integer). The data in this segment can be decoded as explained
in [this file](../src/frontend/compiler/targets/bytecode/opcodes.nim#L39), which is quoted below:

```
[...]
@ -74,6 +75,26 @@ in [this file](../src/frontend/compiler/targgets/bytecode/opcodes.nim#L39), whic
[...]
```
### Modules segment
This segment contains details about the modules that make up the original source code which produced a given bytecode dump.
The data in this segment can be decoded as explained in [this file](../src/frontend/compiler/targets/bytecode/opcodes.nim#L49), which is quoted below:
```
[...]
## modules contains information about all the peon modules that the compiler has encountered,
## along with their start/end offset in the code. Unlike other bytecode-compiled languages like
## Python, peon does not produce a bytecode file for each separate module it compiles: everything
## is contained within a single binary blob. While this simplifies the implementation and makes
## bytecode files entirely "self-hosted", it also means that the original module information is
## lost: this segment serves to fix that. The segment's size is encoded at the beginning as a 4-byte
## sequence (i.e. a single 32-bit integer) and its encoding is similar to that of the functions segment:
## - First, the position into the bytecode where the module begins is encoded (as a 3 byte integer)
## - Second, the position into the bytecode where the module ends is encoded (as a 3 byte integer)
## - Lastly, the module's name is encoded in ASCII, prepended with its size as a 2-byte integer
[...]
```
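
For illustration, here is a minimal Nim sketch of a decoder for the modules segment as described in the quote above. The record layout (3-byte start and end offsets, then the name prefixed by a 2-byte length) comes from that documentation; the helper procs and the big-endian byte order are assumptions of mine rather than peon's actual serializer API, and the 4-byte segment size header is assumed to have been consumed already.

```
type ModuleInfo = object
    start, stop: int
    name: string

# Byte order is an assumption; peon's own fromTriple/fromDouble helpers may differ
proc fromTriple(b: openArray[uint8]): int =
    (b[0].int shl 16) or (b[1].int shl 8) or b[2].int

proc fromDouble(b: openArray[uint8]): int =
    (b[0].int shl 8) or b[1].int

proc decodeModules(segment: seq[uint8]): seq[ModuleInfo] =
    ## Walks the modules segment record by record:
    ## start offset (3 bytes), end offset (3 bytes),
    ## name length (2 bytes), then the ASCII name itself
    var idx = 0
    while idx < segment.len:
        let start = fromTriple(segment[idx ..< idx + 3])
        let stop = fromTriple(segment[idx + 3 ..< idx + 6])
        let size = fromDouble(segment[idx + 6 ..< idx + 8])
        idx += 8
        var name = newString(size)
        for i in 0 ..< size:
            name[i] = segment[idx + i].char
        idx += size
        result.add(ModuleInfo(start: start, stop: stop, name: name))
```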
## Constant segment

The constant segment contains all the read-only values that the code will need at runtime, such as hardcoded
@ -87,6 +108,6 @@ real-world scenarios it likely won't be.
## Code segment

The code segment contains the linear sequence of bytecode instructions of a peon program, to be fed directly to
peon's virtual machine. The segment's size is fixed and is encoded at the beginning as a sequence of 3 bytes
(i.e. a single 24 bit integer). All the instructions are documented [here](../src/frontend/compiler/targets/bytecode/opcodes.nim)


@ -68,7 +68,8 @@ type
        ## this system and is not handled
        ## manually by the VM
        bytesAllocated: tuple[total, current: int]
        when debugGC or debugAlloc:
            cycles: int
        nextGC: int
        pointers: HashSet[uint64]
    PeonVM* = object
@ -93,9 +94,10 @@ type
        frames: seq[uint64]      # Stores the bottom of stack frames
        results: seq[uint64]     # Stores function return values
        gc: PeonGC               # A reference to the VM's garbage collector
        when debugVM:
            breakpoints: seq[uint64]    # Breakpoints where we call our debugger
            debugNext: bool             # Whether to debug the next instruction
            lastDebugCommand: string    # The last debugging command input by the user


# Implementation of peon's memory manager
@ -105,25 +107,17 @@ proc newPeonGC*: PeonGC =
    ## garbage collector
    result.bytesAllocated = (0, 0)
    result.nextGC = FirstGC
    when debugGC or debugAlloc:
        result.cycles = 0


proc collect*(self: var PeonVM)
# Our pointer tagging routines
template tag(p: untyped): untyped = cast[pointer](cast[uint64](p) or (1'u64 shl 63'u64))
template untag(p: untyped): untyped = cast[pointer](cast[uint64](p) and 0x7fffffffffffffff'u64)
template getTag(p: untyped): untyped = (p and (1'u64 shl 63'u64)) == 0
proc reallocate*(self: var PeonVM, p: pointer, oldSize: int, newSize: int): pointer =
    ## Simple wrapper around realloc with
    ## built-in garbage collection
    self.gc.bytesAllocated.current += newSize - oldSize
    try:
        when debugMem:
@ -147,7 +141,7 @@ proc reallocate*(self: var PeonVM, p: pointer, oldSize: int, newSize: int): poin
        else:
            if self.gc.bytesAllocated.current >= self.gc.nextGC:
                self.collect()
        result = realloc(p, newSize)
    except NilAccessDefect:
        stderr.writeLine("Peon: could not manage memory, segmentation fault")
        quit(139)   # For now, there's not much we can do if we can't get the memory we need, so we exit
@ -178,12 +172,12 @@ proc allocate(self: var PeonVM, kind: ObjectKind, size: typedesc, count: int): p
    ## Allocates an object on the heap and adds its
    ## location to the internal pointer list of the
    ## garbage collector
    result = cast[ptr HeapObject](self.reallocate(nil, 0, sizeof(HeapObject)))
    setkind(result[], kind, kind)
    result.marked = false
    case kind:
        of String:
            result.str = cast[ptr UncheckedArray[char]](self.reallocate(nil, 0, sizeof(size) * count))
            result.len = count
        else:
            discard   # TODO
@ -213,30 +207,33 @@ proc markRoots(self: var PeonVM): HashSet[ptr HeapObject] =
    # Unlike what Bob does in his book, we keep track
    # of objects another way, mainly due to the difference
    # of our respective designs. Specifically, our VM only
    # handles a single type (uint64), while Lox has a stack
    # of heap-allocated structs (which is convenient, but slow).
    # What we do instead is store all pointers allocated by us
    # in a hash set and then check if any source of roots contains
    # any of the integer values that we're keeping track of. Note
    # that this means that if a primitive object's value happens to
    # collide with an active pointer, the GC will mistakenly assume
    # the object to be reachable (potentially leading to a nasty
    # memory leak). Hopefully, in a 64-bit address space, this
    # occurrence is rare enough for us to ignore
    var result = initHashSet[uint64](self.gc.pointers.len())
    for obj in self.calls:
        if obj in self.gc.pointers:
            result.incl(obj)
    for obj in self.operands:
        if obj in self.gc.pointers:
            result.incl(obj)
    var obj: ptr HeapObject
    for p in result:
        obj = cast[ptr HeapObject](p)
@ -301,7 +298,6 @@ proc sweep(self: var PeonVM) =
    ## during the mark phase.
    when debugGC:
        echo "DEBUG - GC: Beginning sweeping phase"
        var count = 0
    var current: ptr HeapObject
    var freed: HashSet[uint64]
@ -380,19 +376,19 @@ proc newPeonVM*: PeonVM =
# Getters for singleton types
{.push inline.}

func getNil*(self: var PeonVM): uint64 = self.cache[2]

func getBool*(self: var PeonVM, value: bool): uint64 =
    if value:
        return self.cache[1]
    return self.cache[0]

func getInf*(self: var PeonVM, positive: bool): uint64 =
    if positive:
        return self.cache[3]
    return self.cache[4]

func getNan*(self: var PeonVM): uint64 = self.cache[5]


# Thanks to nim's *genius* idea of making x > y a template
@ -402,11 +398,11 @@ proc getNan*(self: var PeonVM): uint64 = self.cache[5]
# and https://github.com/nim-lang/Nim/issues/10425 and try not to
# bang your head against the nearest wall), we need a custom operator
# that preserves the natural order of evaluation
func `!>`[T](a, b: T): auto =
    b < a

proc `!>=`[T](a, b: T): auto {.used.} =
    b <= a
@ -414,26 +410,26 @@ proc `!>=`[T](a, b: T): auto {.inline, used.} =
# that go through the (get|set|peek)c wrappers are frame-relative,
# meaning that the given index is added to the current stack frame's
# bottom to obtain an absolute stack index

func push(self: var PeonVM, obj: uint64) =
    ## Pushes a value object onto the
    ## operand stack
    self.operands.add(obj)


func pop(self: var PeonVM): uint64 =
    ## Pops a value off the operand
    ## stack and returns it
    return self.operands.pop()


func peekb(self: PeonVM, distance: BackwardsIndex = ^1): uint64 =
    ## Returns the value at the given (backwards)
    ## distance from the top of the operand stack
    ## without consuming it
    return self.operands[distance]


func peek(self: PeonVM, distance: int = 0): uint64 =
    ## Returns the value at the given
    ## distance from the top of the
    ## operand stack without consuming it
@ -442,33 +438,33 @@ proc peek(self: PeonVM, distance: int = 0): uint64 =
    return self.operands[self.operands.high() + distance]


func pushc(self: var PeonVM, val: uint64) =
    ## Pushes a value onto the
    ## call stack
    self.calls.add(val)


func popc(self: var PeonVM): uint64 =
    ## Pops a value off the call
    ## stack and returns it
    return self.calls.pop()


func peekc(self: PeonVM, distance: int = 0): uint64 {.used.} =
    ## Returns the value at the given
    ## distance from the top of the
    ## call stack without consuming it
    return self.calls[self.calls.high() + distance]


func getc(self: PeonVM, idx: int): uint64 =
    ## Getter method that abstracts
    ## indexing our call stack through
    ## stack frames
    return self.calls[idx.uint64 + self.frames[^1]]


func setc(self: var PeonVM, idx: int, val: uint64) =
    ## Setter method that abstracts
    ## indexing our call stack through
    ## stack frames
@ -700,7 +696,7 @@ proc dispatch*(self: var PeonVM) =
    while true:
        {.computedgoto.}   # https://nim-lang.org/docs/manual.html#pragmas-computedgoto-pragma
        when debugVM:
            if self.ip in self.breakpoints or self.debugNext:
                self.debug()
        instruction = OpCode(self.readByte())
        case instruction:
@ -768,6 +764,10 @@ proc dispatch*(self: var PeonVM) =
                # not needed there anymore
                discard self.pop()
                discard self.pop()
            of ReplExit:
                # Preserves the VM's state for the next
                # execution. Used in the REPL
                return
            of Return:
                # Returns from a function.
                # Every peon program is wrapped
@ -829,9 +829,13 @@ proc dispatch*(self: var PeonVM) =
                # not a great idea)
                self.pushc(self.pop())
            of LoadVar:
                # Pushes a local variable from the call stack
                # onto the operand stack
                self.push(self.getc(self.readLong().int))
            of LoadGlobal:
                # Pushes a global variable from the call stack
                # onto the operand stack
                self.push(self.calls[self.readLong().int])
            of NoOp:
                # Does nothing
                continue
@ -1002,6 +1006,8 @@ proc dispatch*(self: var PeonVM) =
                self.push(self.getBool(cast[float32](self.pop()) !>= cast[float32](self.pop())))
            of Float32LessOrEqual:
                self.push(self.getBool(cast[float32](self.pop()) <= cast[float32](self.pop())))
            of Identity:
                self.push(cast[uint64](self.pop() == self.pop()))
            # Print opcodes
            of PrintInt64:
                echo cast[int64](self.pop())
@ -1050,23 +1056,41 @@ proc dispatch*(self: var PeonVM) =
            discard


proc run*(self: var PeonVM, chunk: Chunk, breakpoints: seq[uint64] = @[], repl: bool = false) =
    ## Executes a piece of Peon bytecode
    self.chunk = chunk
    self.frames = @[]
    self.calls = @[]
    self.operands = @[]
    self.results = @[]
    self.ip = 0
    when debugVM:
        self.breakpoints = breakpoints
        self.lastDebugCommand = ""
    try:
        self.dispatch()
    except NilAccessDefect:
        stderr.writeLine("Memory Access Violation: SIGSEGV")
        quit(1)
    if not repl:
        # We clean up after ourselves!
        self.collect()


proc resume*(self: var PeonVM, chunk: Chunk) =
    ## Resumes execution of the given chunk (which
    ## may have changed since the last call to run()).
    ## No other state mutation occurs and all stacks as
    ## well as other metadata are left intact. This should
    ## not be used directly unless you know what you're
    ## doing, as incremental compilation support is very
    ## experimental and highly unstable
    self.chunk = chunk
    try:
        self.dispatch()
    except NilAccessDefect:
        stderr.writeLine("Memory Access Violation: SIGSEGV")
        quit(1)

{.pop.}
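
As a usage note, here is a rough sketch of the kind of driver loop that the new `repl` flag in `run()` and the `resume()` proc enable: the VM is started once with `repl = true` (so the final `ReplExit` leaves the stacks, frames and instruction pointer intact and no collection is forced), and every subsequently extended chunk is fed back through `resume()`. Only `run`, `resume` and `newPeonVM` come from the code above; the compilation step is hypothetical and stands in for whatever incremental pipeline the REPL actually uses.

```
# Hypothetical REPL driver loop (compileIncrementally is a stand-in,
# not a real peon API)
var vm = newPeonVM()
var firstRun = true
while true:
    let line = readLine(stdin)
    let chunk = compileIncrementally(line)  # grows the same Chunk every time
    if firstRun:
        # Sets up the stacks and runs until ReplExit,
        # which preserves the VM's state for later
        vm.run(chunk, repl = true)
        firstRun = false
    else:
        # Re-enters the dispatch loop on the extended chunk
        # without resetting any state
        vm.resume(chunk)
```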


@ -15,14 +15,14 @@
import strformat


# These variables can be tweaked to debug and test various components of the toolchain
var debugLexer* = false      # Print the tokenizer's output
var debugParser* = false     # Print the AST generated by the parser
var debugCompiler* = false   # Disassemble and/or print the code generated by the compiler
const debugVM* {.booldefine.} = false          # Enable the runtime debugger in the bytecode VM
const debugGC* {.booldefine.} = false          # Debug the Garbage Collector (extremely verbose)
const debugAlloc* {.booldefine.} = false       # Trace object allocation (extremely verbose)
const debugMem* {.booldefine.} = false         # Debug the memory allocator (extremely verbose)
var debugSerializer* = false                   # Validate the bytecode serializer's output
const debugStressGC* {.booldefine.} = false    # Make the GC run a collection at every allocation (VERY SLOW!)
const debugMarkGC* {.booldefine.} = false      # Trace the marking phase object by object (extremely verbose)
const PeonBytecodeMarker* = "PEON_BYTECODE"    # Magic value at the beginning of bytecode files
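
Note that the switches still declared with `{.booldefine.}` above are baked in at build time rather than toggled at runtime, so enabling them means recompiling peon with the corresponding defines; something along these lines should work (the path to the entry point is a guess on my part):

```
# Rebuild peon with the VM debugger and GC tracing compiled in
nim c -d:debugVM=true -d:debugGC=true -o:peon src/main.nim
```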
@ -70,8 +70,11 @@ Options
                        yes/on and no/off
    --noWarn            Disable a specific warning (for example, --noWarn:unusedVariable)
    --showMismatches    Show all mismatches when function dispatching fails (output is really verbose)
    --backend           Select the compilation backend (valid values are: 'c' and 'bytecode'). Note
                        that the REPL always uses the bytecode target. Defaults to 'bytecode'
    -o, --output        Rename the output file with this value (with --backend:bytecode, a '.pbc' extension
                        is added if not already present)
    --debug-dump        Debug the bytecode serializer. Only makes sense with --backend:bytecode
    --debug-lexer       Debug the peon lexer
    --debug-parser      Debug the peon parser
"""


@ -12,19 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright 2022 Mattia Giambirtone & All Contributors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import std/tables
import std/strformat
import std/algorithm
@ -52,7 +40,7 @@ export ast, token, symbols, config, errors
type
    PeonBackend* = enum
        ## An enumeration of the peon backends
        Bytecode, NativeC

    PragmaKind* = enum
        ## An enumeration of pragma types
@ -146,7 +134,7 @@ type
        node*: Declaration
        # Who is this name exported to? (Only makes sense if isPrivate
        # equals false)
        exportedTo*: HashSet[string]
        # Has the compiler generated this name internally or
        # does it come from user code?
        isReal*: bool
@ -224,7 +212,7 @@ type
        # The module importing us, if any
        parentModule*: Name
        # Currently imported modules
        modules*: HashSet[string]

    TypedNode* = ref object
        ## A wrapper for AST nodes
@ -353,11 +341,9 @@ proc step*(self: Compiler): ASTNode {.inline.} =
# and can be reused across multiple compilation backends

proc resolve*(self: Compiler, name: string): Name =
    ## Traverses all existing namespaces in reverse order
    ## and returns the first object with the given name.
    ## Returns nil when the name can't be found
    for obj in reversed(self.names):
        if obj.ident.token.lexeme == name:
            if obj.owner.path != self.currentModule.path:
@ -368,11 +354,12 @@ proc resolve*(self: Compiler, name: string): Name =
                    # module, so we definitely can't
                    # use it
                    continue
                elif self.currentModule.path in obj.exportedTo:
                    # The name is public in its owner
                    # module and said module has explicitly
                    # exported it to us: we can use it
                    result = obj
                    result.resolved = true
                    break
                # If the name is public but not exported in
                # its owner module, then we act as if it's
@ -382,6 +369,7 @@ proc resolve*(self: Compiler, name: string): Name =
                # might not want to also have access to C's and D's
                # names as they might clash with its own stuff)
                continue
            # We own this name, so we can definitely access it
            result = obj
            result.resolved = true
            break
@ -725,7 +713,7 @@ method findByName*(self: Compiler, name: string): seq[Name] =
    for obj in reversed(self.names):
        if obj.ident.token.lexeme == name:
            if obj.owner.path != self.currentModule.path:
                if obj.isPrivate or self.currentModule.path notin obj.exportedTo:
                    continue
            result.add(obj)
@ -739,11 +727,13 @@ method findInModule*(self: Compiler, name: string, module: Name): seq[Name] =
    ## the current one or not
    if name == "":
        for obj in reversed(self.names):
            if obj.owner.isNil():
                continue
            if not obj.isPrivate and obj.owner.path == module.path:
                result.add(obj)
    else:
        for obj in self.findInModule("", module):
            if obj.ident.token.lexeme == name and self.currentModule.path in obj.exportedTo:
                result.add(obj)
@ -1046,7 +1036,7 @@ proc declare*(self: Compiler, node: ASTNode): Name {.discardable.} =
                break
            if name.ident.token.lexeme != declaredName:
                continue
            if name.owner != n.owner and (name.isPrivate or n.owner.path notin name.exportedTo):
                continue
            if name.kind in [NameKind.Var, NameKind.Module, NameKind.CustomType, NameKind.Enum]:
                if name.depth < n.depth:

File diff suppressed because it is too large


@ -46,10 +46,21 @@ type
        ## - After that follows the argument count as a 1 byte integer
        ## - Lastly, the function's name (optional) is encoded in ASCII, prepended with
        ##   its size as a 2-byte integer
## modules contains information about all the peon modules that the compiler has encountered,
## along with their start/end offset in the code. Unlike other bytecode-compiled languages like
## Python, peon does not produce a bytecode file for each separate module it compiles: everything
## is contained within a single binary blob. While this simplifies the implementation and makes
## bytecode files entirely "self-hosted", it also means that the original module information is
## lost: this segment serves to fix that. The segment's size is encoded at the beginning as a 4-byte
## sequence (i.e. a single 32-bit integer) and its encoding is similar to that of the functions segment:
## - First, the position into the bytecode where the module begins is encoded (as a 3 byte integer)
## - Second, the position into the bytecode where the module ends is encoded (as a 3 byte integer)
## - Lastly, the module's name is encoded in ASCII, prepended with its size as a 2-byte integer
        consts*: seq[uint8]
        code*: seq[uint8]
        lines*: seq[int]
        functions*: seq[uint8]
        modules*: seq[uint8]

    OpCode* {.pure.} = enum
        ## Enum of Peon's bytecode opcodes
@ -136,6 +147,7 @@ type
        Float32GreaterOrEqual,
        Float32LessOrEqual,
        LogicalNot,
        Identity,   # Pointer equality
        ## Print opcodes
        PrintInt64,
        PrintUInt64,
@ -188,7 +200,9 @@ type
        PushC,        # Pop off the operand stack onto the call stack
        SysClock64,   # Pushes the output of a monotonic clock on the stack
        LoadTOS,      # Pushes the top of the call stack onto the operand stack
        DupTop,       # Duplicates the top of the operand stack onto the operand stack
        ReplExit,     # Exits the VM immediately, leaving its state intact. Used in the REPL
        LoadGlobal    # Loads a global variable


# We group instructions by their operation/operand types for easier handling when debugging
@ -267,7 +281,9 @@ const simpleInstructions* = {Return, LoadNil,
                                    Float32LessThan,
                                    Float32GreaterOrEqual,
                                    Float32LessOrEqual,
                                    DupTop,
                                    ReplExit,
                                    Identity
                                    }

# Constant instructions are instructions that operate on the bytecode constant table
@ -280,7 +296,7 @@ const constantInstructions* = {LoadInt64, LoadUInt64,
# Stack triple instructions operate on the stack at arbitrary offsets and pop arguments off of it in the form
# of 24 bit integers
const stackTripleInstructions* = {StoreVar, LoadVar, LoadGlobal}

# Stack double instructions operate on the stack at arbitrary offsets and pop arguments off of it in the form
# of 16 bit integers


@ -461,7 +461,8 @@ proc handleBuiltinFunction(self: BytecodeCompiler, fn: Type, args: seq[Expressio
"PrintString": PrintString, "PrintString": PrintString,
"SysClock64": SysClock64, "SysClock64": SysClock64,
"LogicalNot": LogicalNot, "LogicalNot": LogicalNot,
"NegInf": LoadNInf "NegInf": LoadNInf,
"Identity": Identity
}.to_table() }.to_table()
if fn.builtinOp == "print": if fn.builtinOp == "print":
let typ = self.inferOrError(args[0]) let typ = self.inferOrError(args[0])
@ -565,6 +566,8 @@ proc endScope(self: BytecodeCompiler) =
    var names: seq[Name] = @[]
    var popCount = 0
    for name in self.names:
        if self.replMode and name.depth == 0:
            continue
        # We only pop names in scopes deeper than ours
        if name.depth > self.depth:
            if name.depth == 0 and not self.isMainModule:
@ -999,9 +1002,12 @@ proc terminateProgram(self: BytecodeCompiler, pos: int) =
    ## Utility to terminate a peon program
    self.patchForwardDeclarations()
    self.endScope()
    if self.replMode:
        self.emitByte(ReplExit, self.peek().token.line)
    else:
        self.emitByte(OpCode.Return, self.peek().token.line)
        self.emitByte(0, self.peek().token.line)   # Entry point has no return value
        self.patchReturnAddress(pos)


proc beginProgram(self: BytecodeCompiler): int =
@ -1228,10 +1234,14 @@ method identifier(self: BytecodeCompiler, node: IdentExpr, name: Name = nil, com
            if not s.belongsTo.isNil() and s.belongsTo.valueType.fun.kind == funDecl and FunDecl(s.belongsTo.valueType.fun).isTemplate:
                discard
            else:
                if s.depth > 0:
                    # Loads a regular variable from the current frame
                    self.emitByte(LoadVar, s.ident.token.line)
                    # No need to check for -1 here: we already did a nil check above!
                    self.emitBytes(s.position.toTriple(), s.ident.token.line)
                else:
                    self.emitByte(LoadGlobal, s.ident.token.line)
                    self.emitBytes(s.position.toTriple(), s.ident.token.line)


method assignment(self: BytecodeCompiler, node: ASTNode, compile: bool = true): Type {.discardable.} =
@ -1468,8 +1478,9 @@ method lambdaExpr(self: BytecodeCompiler, node: LambdaExpr, compile: bool = true
                        line: node.token.line,
                        kind: NameKind.Function,
                        belongsTo: function,
                        isReal: true,
                        )
    if compile and node notin self.lambdas and not node.body.isNil():
        self.lambdas.add(node)
        let jmp = self.emitJump(JumpForwards, node.token.line)
        if BlockStmt(node.body).code.len() == 0:
@ -1677,7 +1688,7 @@ proc importStmt(self: BytecodeCompiler, node: ImportStmt, compile: bool = true)
        # Importing a module automatically exports
        # its public names to us
        for name in self.findInModule("", module):
            name.exportedTo.incl(self.currentModule.path)
    except IOError:
        self.error(&"could not import '{module.ident.token.lexeme}': {getCurrentExceptionMsg()}")
    except OSError:
@ -1695,22 +1706,22 @@ proc exportStmt(self: BytecodeCompiler, node: ExportStmt, compile: bool = true)
    var name = self.resolveOrError(node.name)
    if name.isPrivate:
        self.error("cannot export private names")
    name.exportedTo.incl(self.parentModule.path)
    case name.kind:
        of NameKind.Module:
            # We need to export everything
            # this module defines!
            for name in self.findInModule("", name):
                name.exportedTo.incl(self.parentModule.path)
        of NameKind.Function:
            # Only exporting a single function (or, well
            # all of its implementations)
            for name in self.findByName(name.ident.token.lexeme):
                if name.kind != NameKind.Function:
                    continue
                name.exportedTo.incl(self.parentModule.path)
        else:
            self.error("unsupported export type")


proc breakStmt(self: BytecodeCompiler, node: BreakStmt) =
@ -1972,12 +1983,12 @@ proc funDecl(self: BytecodeCompiler, node: FunDecl, name: Name) =
    self.patchJump(jump)
    self.endScope()
    # Terminates the function's context
    let stop = self.chunk.code.len().toTriple()
    self.emitByte(OpCode.Return, self.peek().token.line)
    if hasVal:
        self.emitByte(1, self.peek().token.line)
    else:
        self.emitByte(0, self.peek().token.line)
    self.chunk.functions[idx] = stop[0]
    self.chunk.functions[idx + 1] = stop[1]
    self.chunk.functions[idx + 2] = stop[2]
@ -2046,26 +2057,32 @@ proc compile*(self: BytecodeCompiler, ast: seq[Declaration], file: string, lines
        self.chunk = newChunk()
    else:
        self.chunk = chunk
    self.file = file
    self.depth = 0
    self.currentFunction = nil
    if self.replMode:
        self.ast &= ast
        self.source &= "\n" & source
        self.lines &= lines
    else:
        self.ast = ast
        self.current = 0
        self.stackIndex = 1
        self.lines = lines
        self.source = source
    self.isMainModule = isMainModule
    self.disabledWarnings = disabledWarnings
    self.showMismatches = showMismatches
    self.mode = mode
    let start = self.chunk.code.len()
    if not incremental:
        self.jumps = @[]
    let pos = self.beginProgram()
    while not self.done():
        self.declaration(Declaration(self.step()))
    self.terminateProgram(pos)
    # TODO: REPL is broken, we need a new way to make
    # incremental compilation resume from where it stopped!
    result = self.chunk
@ -2083,7 +2100,7 @@ proc compileModule(self: BytecodeCompiler, module: Name) =
            break
        elif i == searchPath.high():
            self.error(&"""could not import '{path}': module not found""")
    if self.modules.contains(module.path):
        return
    let source = readFile(path)
    let current = self.current
@ -2094,13 +2111,23 @@ proc compileModule(self: BytecodeCompiler, module: Name) =
    let currentModule = self.currentModule
    let mainModule = self.isMainModule
    let parentModule = self.parentModule
    let replMode = self.replMode
    self.replMode = false
    self.parentModule = currentModule
    self.currentModule = module
    let start = self.chunk.code.len()
    discard self.compile(self.parser.parse(self.lexer.lex(source, path),
                                           path, self.lexer.getLines(),
                                           self.lexer.getSource(), persist=true),
                         path, self.lexer.getLines(), self.lexer.getSource(), chunk=self.chunk, incremental=true,
                         isMainModule=false, self.disabledWarnings, self.showMismatches, self.mode)
    # Mark the end of a new module
    self.chunk.modules.extend(start.toTriple())
    self.chunk.modules.extend(self.chunk.code.high().toTriple())
    # I swear to god if someone ever creates a peon module with a name that's
    # longer than 2^16 bytes I will hit them with a metal pipe. Mark my words
    self.chunk.modules.extend(self.currentModule.ident.token.lexeme.len().toDouble())
    self.chunk.modules.extend(self.currentModule.ident.token.lexeme.toBytes())
    module.file = path
    # No need to save the old scope depth: import statements are
    # only allowed at the top level!
@ -2111,6 +2138,7 @@ proc compileModule(self: BytecodeCompiler, module: Name) =
    self.currentModule = currentModule
    self.isMainModule = mainModule
    self.parentModule = parentModule
    self.replMode = replMode
    self.lines = lines
    self.source = src
    self.modules.incl(module.path)


@ -22,12 +22,15 @@ import std/terminal
type
    Function = object
        start, stop, argc: int
        name: string

    Module = object
        start, stop: int
        name: string

    Debugger* = ref object
        chunk: Chunk
        modules: seq[Module]
        functions: seq[Function]
        current: int
@ -66,21 +69,38 @@ proc checkFunctionStart(self: Debugger, n: int) =
    ## Checks if a function begins at the given
    ## bytecode offset
    for i, e in self.functions:
        if n == e.start:
            styledEcho fgBlue, "\n==== Peon Bytecode Disassembler - Function Start ", fgYellow, &"'{e.name}' ", fgBlue, "(", fgYellow, $i, fgBlue, ") ===="
            styledEcho fgGreen, "\t- Start offset: ", fgYellow, $e.start
            styledEcho fgGreen, "\t- End offset: ", fgYellow, $e.stop
            styledEcho fgGreen, "\t- Argument count: ", fgYellow, $e.argc, "\n"


proc checkFunctionEnd(self: Debugger, n: int) =
    ## Checks if a function ends at the given
    ## bytecode offset
    for i, e in self.functions:
        if n == e.stop:
            styledEcho fgBlue, "\n==== Peon Bytecode Disassembler - Function End ", fgYellow, &"'{e.name}' ", fgBlue, "(", fgYellow, $i, fgBlue, ") ===="
proc checkModuleStart(self: Debugger, n: int) =
## Checks if a module begins at the given
## bytecode offset
for i, m in self.modules:
if m.start == n:
styledEcho fgBlue, "\n==== Peon Bytecode Disassembler - Module Start ", fgYellow, &"'{m.name}' ", fgBlue, "(", fgYellow, $i, fgBlue, ") ===="
styledEcho fgGreen, "\t- Start offset: ", fgYellow, $m.start
styledEcho fgGreen, "\t- End offset: ", fgYellow, $m.stop, "\n"
proc checkModuleEnd(self: Debugger, n: int) =
## Checks if a module ends at the given
## bytecode offset
for i, m in self.modules:
if m.stop == n:
styledEcho fgBlue, "\n==== Peon Bytecode Disassembler - Module End ", fgYellow, &"'{m.name}' ", fgBlue, "(", fgYellow, $i, fgBlue, ") ===="
proc simpleInstruction(self: Debugger, instruction: OpCode) =
@ -94,9 +114,6 @@ proc simpleInstruction(self: Debugger, instruction: OpCode) =
    else:
        stdout.styledWriteLine(fgYellow, "No")
    self.current += 1
self.checkFunctionEnd(self.current - 2)
self.checkFunctionEnd(self.current - 1)
self.checkFunctionEnd(self.current)
proc stackTripleInstruction(self: Debugger, instruction: OpCode) =
@ -168,20 +185,27 @@ proc jumpInstruction(self: Debugger, instruction: OpCode) =
    self.current += 4
    while self.chunk.code[self.current] == NoOp.uint8:
        inc(self.current)
for i in countup(orig, self.current + 1):
self.checkFunctionStart(i)
proc disassembleInstruction*(self: Debugger) =
    ## Takes one bytecode instruction and prints it
let opcode = OpCode(self.chunk.code[self.current])
self.checkModuleStart(self.current)
self.checkFunctionStart(self.current)
printDebug("Offset: ") printDebug("Offset: ")
stdout.styledWriteLine(fgYellow, $(self.current)) stdout.styledWriteLine(fgYellow, $(self.current))
printDebug("Line: ") printDebug("Line: ")
stdout.styledWriteLine(fgYellow, &"{self.chunk.getLine(self.current)}") stdout.styledWriteLine(fgYellow, &"{self.chunk.getLine(self.current)}")
var opcode = OpCode(self.chunk.code[self.current])
case opcode: case opcode:
of simpleInstructions: of simpleInstructions:
self.simpleInstruction(opcode) self.simpleInstruction(opcode)
# Functions (and modules) only have a single return statement at the
# end of their body, so we never execute this more than once per module/function
if opcode == Return:
# -2 to skip the hardcoded argument to return
# and the increment by simpleInstruction()
self.checkFunctionEnd(self.current - 2)
self.checkModuleEnd(self.current - 1)
of constantInstructions: of constantInstructions:
self.constantInstruction(opcode) self.constantInstruction(opcode)
of stackDoubleInstructions: of stackDoubleInstructions:
@ -197,7 +221,9 @@ proc disassembleInstruction*(self: Debugger) =
else: else:
echo &"DEBUG - Unknown opcode {opcode} at index {self.current}" echo &"DEBUG - Unknown opcode {opcode} at index {self.current}"
self.current += 1 self.current += 1
proc parseFunctions(self: Debugger) = proc parseFunctions(self: Debugger) =
## Parses function information in the chunk ## Parses function information in the chunk
@ -206,7 +232,7 @@ proc parseFunctions(self: Debugger) =
name: string name: string
idx = 0 idx = 0
size = 0 size = 0
while idx < len(self.chunk.functions) - 1: while idx < self.chunk.functions.high():
start = int([self.chunk.functions[idx], self.chunk.functions[idx + 1], self.chunk.functions[idx + 2]].fromTriple()) start = int([self.chunk.functions[idx], self.chunk.functions[idx + 1], self.chunk.functions[idx + 2]].fromTriple())
idx += 3 idx += 3
stop = int([self.chunk.functions[idx], self.chunk.functions[idx + 1], self.chunk.functions[idx + 2]].fromTriple()) stop = int([self.chunk.functions[idx], self.chunk.functions[idx + 1], self.chunk.functions[idx + 2]].fromTriple())
@ -220,15 +246,36 @@ proc parseFunctions(self: Debugger) =
self.functions.add(Function(start: start, stop: stop, argc: argc, name: name)) self.functions.add(Function(start: start, stop: stop, argc: argc, name: name))
proc parseModules(self: Debugger) =
## Parses module information in the chunk
var
start, stop: int
name: string
idx = 0
size = 0
while idx < self.chunk.modules.high():
start = int([self.chunk.modules[idx], self.chunk.modules[idx + 1], self.chunk.modules[idx + 2]].fromTriple())
idx += 3
stop = int([self.chunk.modules[idx], self.chunk.modules[idx + 1], self.chunk.modules[idx + 2]].fromTriple())
idx += 3
size = int([self.chunk.modules[idx], self.chunk.modules[idx + 1]].fromDouble())
idx += 2
name = self.chunk.modules[idx..<idx + size].fromBytes()
inc(idx, size)
self.modules.add(Module(start: start, stop: stop, name: name))
proc disassembleChunk*(self: Debugger, chunk: Chunk, name: string) = proc disassembleChunk*(self: Debugger, chunk: Chunk, name: string) =
## Takes a chunk of bytecode and prints it ## Takes a chunk of bytecode and prints it
self.chunk = chunk self.chunk = chunk
styledEcho fgBlue, &"==== Peon Bytecode Disassembler - Chunk '{name}' ====\n" styledEcho fgBlue, &"==== Peon Bytecode Disassembler - Chunk '{name}' ====\n"
self.current = 0 self.current = 0
self.parseFunctions() self.parseFunctions()
self.parseModules()
while self.current < self.chunk.code.len: while self.current < self.chunk.code.len:
self.disassembleInstruction() self.disassembleInstruction()
echo "" echo ""
styledEcho fgBlue, &"==== Peon Bytecode Disassembler - Chunk '{name}' ====" styledEcho fgBlue, &"==== Peon Bytecode Disassembler - Chunk '{name}' ===="
View File
@@ -64,7 +64,8 @@ proc newSerializer*(self: Serializer = nil): Serializer =

 proc writeHeaders(self: Serializer, stream: var seq[byte]) =
-  ## Writes the Peon bytecode headers in-place into a byte stream
+  ## Writes the Peon bytecode headers in-place into the
+  ## given byte sequence
   stream.extend(PeonBytecodeMarker.toBytes())
   stream.add(byte(PEON_VERSION.major))
   stream.add(byte(PEON_VERSION.minor))
@@ -77,25 +78,31 @@ proc writeHeaders(self: Serializer, stream: var seq[byte]) =

 proc writeLineData(self: Serializer, stream: var seq[byte]) =
   ## Writes line information for debugging
-  ## bytecode instructions
+  ## bytecode instructions to the given byte
+  ## sequence
   stream.extend(len(self.chunk.lines).toQuad())
   for b in self.chunk.lines:
     stream.extend(b.toTriple())


-proc writeCFIData(self: Serializer, stream: var seq[byte]) =
-  ## Writes Call Frame Information for debugging
-  ## functions
+proc writeFunctions(self: Serializer, stream: var seq[byte]) =
+  ## Writes debug info about functions to the
+  ## given byte sequence
   stream.extend(len(self.chunk.functions).toQuad())
   stream.extend(self.chunk.functions)


 proc writeConstants(self: Serializer, stream: var seq[byte]) =
   ## Writes the constants table in-place into the
-  ## given stream
+  ## byte sequence
   stream.extend(self.chunk.consts.len().toQuad())
-  for constant in self.chunk.consts:
-    stream.add(constant)
+  stream.extend(self.chunk.consts)
+
+
+proc writeModules(self: Serializer, stream: var seq[byte]) =
+  ## Writes module information to the given stream
+  stream.extend(self.chunk.modules.len().toQuad())
+  stream.extend(self.chunk.modules)


 proc writeCode(self: Serializer, stream: var seq[byte]) =
@@ -106,7 +113,7 @@ proc writeCode(self: Serializer, stream: var seq[byte]) =

 proc readHeaders(self: Serializer, stream: seq[byte], serialized: Serialized): int =
-  ## Reads the bytecode headers from a given stream
+  ## Reads the bytecode headers from a given sequence
   ## of bytes
   var stream = stream
   if stream[0..<len(PeonBytecodeMarker)] != PeonBytecodeMarker.toBytes():
@@ -131,7 +138,6 @@ proc readHeaders(self: Serializer, stream: seq[byte], serialized: Serialized): i
   result += 8

 proc readLineData(self: Serializer, stream: seq[byte]): int =
   ## Reads line information from a stream
   ## of bytes
@@ -142,10 +148,11 @@ proc readLineData(self: Serializer, stream: seq[byte]): int =
     self.chunk.lines.add(int([stream[0], stream[1], stream[2]].fromTriple()))
     result += 3
     stream = stream[3..^1]
+  doAssert len(self.chunk.lines) == int(size)


-proc readCFIData(self: Serializer, stream: seq[byte]): int =
-  ## Reads Call Frame Information from a stream
+proc readFunctions(self: Serializer, stream: seq[byte]): int =
+  ## Reads the function segment from a stream
   ## of bytes
   let size = [stream[0], stream[1], stream[2], stream[3]].fromQuad()
   result += 4
@@ -153,22 +160,34 @@ proc readCFIData(self: Serializer, stream: seq[byte]): int =
   for i in countup(0, int(size) - 1):
     self.chunk.functions.add(stream[i])
     inc(result)
+  doAssert len(self.chunk.functions) == int(size)


 proc readConstants(self: Serializer, stream: seq[byte]): int =
-  ## Reads the constant table from the given stream
-  ## of bytes
+  ## Reads the constant table from the given
+  ## byte sequence
   let size = [stream[0], stream[1], stream[2], stream[3]].fromQuad()
   result += 4
   var stream = stream[4..^1]
   for i in countup(0, int(size) - 1):
     self.chunk.consts.add(stream[i])
     inc(result)
+  doAssert len(self.chunk.consts) == int(size)
+
+
+proc readModules(self: Serializer, stream: seq[byte]): int =
+  ## Reads module information
+  let size = [stream[0], stream[1], stream[2], stream[3]].fromQuad()
+  result += 4
+  var stream = stream[4..^1]
+  for i in countup(0, int(size) - 1):
+    self.chunk.modules.add(stream[i])
+    inc(result)
+  doAssert len(self.chunk.modules) == int(size)


 proc readCode(self: Serializer, stream: seq[byte]): int =
-  ## Reads the bytecode from a given stream and writes
-  ## it into the given chunk
+  ## Reads the bytecode from a given byte sequence
   let size = [stream[0], stream[1], stream[2]].fromTriple()
   var stream = stream[3..^1]
   for i in countup(0, int(size) - 1):
@@ -178,13 +197,16 @@ proc readCode(self: Serializer, stream: seq[byte]): int =

 proc dumpBytes*(self: Serializer, chunk: Chunk, filename: string): seq[byte] =
-  ## Dumps the given bytecode and file to a sequence of bytes and returns it.
+  ## Dumps the given chunk to a sequence of bytes and returns it.
+  ## The filename argument is for error reporting only, use dumpFile
+  ## to dump bytecode to a file
   self.filename = filename
   self.chunk = chunk
   self.writeHeaders(result)
   self.writeLineData(result)
-  self.writeCFIData(result)
+  self.writeFunctions(result)
   self.writeConstants(result)
+  self.writeModules(result)
   self.writeCode(result)
@@ -207,8 +229,9 @@ proc loadBytes*(self: Serializer, stream: seq[byte]): Serialized =
   try:
     stream = stream[self.readHeaders(stream, result)..^1]
     stream = stream[self.readLineData(stream)..^1]
-    stream = stream[self.readCFIData(stream)..^1]
+    stream = stream[self.readFunctions(stream)..^1]
     stream = stream[self.readConstants(stream)..^1]
+    stream = stream[self.readModules(stream)..^1]
     stream = stream[self.readCode(stream)..^1]
   except IndexDefect:
     self.error("truncated bytecode stream")
View File
@@ -16,6 +16,7 @@

 import std/strformat
 import std/strutils
+import std/tables
 import std/os
@@ -31,9 +32,6 @@ export token, ast, errors

 type
-  LoopContext {.pure.} = enum
-    Loop, None
-
   Precedence {.pure.} = enum
     ## Operator precedence
     ## clearly stolen from
@@ -66,18 +64,16 @@ type
     # Only meaningful for parse errors
     file: string
     # The list of tokens representing
-    # the source code to be parsed.
-    # In most cases, those will come
-    # from the builtin lexer, but this
-    # behavior is not enforced and the
-    # tokenizer is entirely separate from
-    # the parser
+    # the source code to be parsed
     tokens: seq[Token]
-    # Little internal attribute that tells
-    # us if we're inside a loop or not. This
-    # allows us to detect errors like break
-    # being used outside loops
-    currentLoop: LoopContext
+    # Just like scope depth tells us how
+    # many nested scopes are above us, the
+    # loop depth tells us how many nested
+    # loops are above us. It's just a simple
+    # way of statically detecting stuff like
+    # the break statement being used outside
+    # loops. Maybe a bit overkill for a parser?
+    loopDepth: int
     # Stores the current function
     # being parsed. This is a reference
     # to either a FunDecl or LambdaExpr
@@ -96,8 +92,13 @@ type
     lines: seq[tuple[start, stop: int]]
     # The source of the current module
     source: string
-    # Keeps track of imported modules
-    modules: seq[tuple[name: string, loaded: bool]]
+    # Keeps track of imported modules.
+    # The key is the module's fully qualified
+    # path, while the boolean indicates whether
+    # it has been fully loaded. This is useful
+    # to avoid importing a module twice and to
+    # detect recursive dependency cycles
+    modules: TableRef[string, bool]
   ParseError* = ref object of PeonException
     ## A parsing exception
     parser*: Parser
@@ -140,7 +141,7 @@ proc newOperatorTable: OperatorTable =
   result.tokens = @[]
   for prec in Precedence:
     result.precedence[prec] = @[]
-  # These operators are currently not built-in
+  # These operators are currently hardcoded
   # due to compiler limitations
   result.addOperator("=")
   result.addOperator(".")
@@ -161,11 +162,12 @@ proc newParser*: Parser =
   result.file = ""
   result.tokens = @[]
   result.currentFunction = nil
-  result.currentLoop = LoopContext.None
+  result.loopDepth = 0
   result.scopeDepth = 0
   result.operators = newOperatorTable()
   result.tree = @[]
   result.source = ""
+  result.modules = newTable[string, bool]()

 # Public getters for improved error formatting
@@ -180,7 +182,7 @@ template endOfLine(msg: string, tok: Token = nil) = self.expect(Semicolon, msg,

-proc peek(self: Parser, distance: int = 0): Token =
+proc peek(self: Parser, distance: int = 0): Token {.inline.} =
   ## Peeks at the token at the given distance.
   ## If the distance is out of bounds, an EOF
   ## token is returned. A negative distance may
@@ -201,7 +203,7 @@ proc done(self: Parser): bool {.inline.} =
   result = self.peek().kind == EndOfFile

-proc step(self: Parser, n: int = 1): Token =
+proc step(self: Parser, n: int = 1): Token {.inline.} =
   ## Steps n tokens into the input,
   ## returning the last consumed one
   if self.done():
@@ -227,7 +229,7 @@ proc error(self: Parser, message: string, token: Token = nil) {.raises: [ParseEr
 # as a symbol and in the cases where we need a specific token we just match the string
 # directly
 proc check[T: TokenType or string](self: Parser, kind: T,
-                                   distance: int = 0): bool =
+                                   distance: int = 0): bool {.inline.} =
   ## Checks if the given token at the given distance
   ## matches the expected kind and returns a boolean.
   ## The distance parameter is passed directly to
@@ -239,7 +241,7 @@ proc check[T: TokenType or string](self: Parser, kind: T,
     self.peek(distance).lexeme == kind

-proc check[T: TokenType or string](self: Parser, kind: openarray[T]): bool =
+proc check[T: TokenType or string](self: Parser, kind: openarray[T]): bool {.inline.} =
   ## Calls self.check() in a loop with each entry of
   ## the given openarray of token kinds and returns
   ## at the first match. Note that this assumes
@@ -251,7 +253,7 @@ proc check[T: TokenType or string](self: Parser, kind: openarray[T]): bool =
   return false

-proc match[T: TokenType or string](self: Parser, kind: T): bool =
+proc match[T: TokenType or string](self: Parser, kind: T): bool {.inline.} =
   ## Behaves like self.check(), except that when a token
   ## matches it is also consumed
   if self.check(kind):
@@ -261,7 +263,7 @@ proc match[T: TokenType or string](self: Parser, kind: T): bool =
   result = false

-proc match[T: TokenType or string](self: Parser, kind: openarray[T]): bool =
+proc match[T: TokenType or string](self: Parser, kind: openarray[T]): bool {.inline.} =
   ## Calls self.match() in a loop with each entry of
   ## the given openarray of token kinds and returns
   ## at the first match. Note that this assumes
@@ -273,7 +275,7 @@ proc match[T: TokenType or string](self: Parser, kind: openarray[T]): bool =
   result = false

-proc expect[T: TokenType or string](self: Parser, kind: T, message: string = "", token: Token = nil) =
+proc expect[T: TokenType or string](self: Parser, kind: T, message: string = "", token: Token = nil) {.inline.} =
   ## Behaves like self.match(), except that
   ## when a token doesn't match, an error
   ## is raised. If no error message is
@@ -285,7 +287,7 @@ proc expect[T: TokenType or string](self: Parser, kind: T, message: string = "",
   self.error(message)

-proc expect[T: TokenType or string](self: Parser, kind: openarray[T], message: string = "", token: Token = nil) {.used.} =
+proc expect[T: TokenType or string](self: Parser, kind: openarray[T], message: string = "", token: Token = nil) {.inline, used.} =
   ## Behaves like self.expect(), except that
   ## an error is raised only if none of the
   ## given token kinds matches
@@ -307,6 +309,7 @@ proc funDecl(self: Parser, isAsync: bool = false, isGenerator: bool = false,
              isLambda: bool = false, isOperator: bool = false, isTemplate: bool = false): Declaration
 proc declaration(self: Parser): Declaration
 proc parse*(self: Parser, tokens: seq[Token], file: string, lines: seq[tuple[start, stop: int]], source: string, persist: bool = false): seq[Declaration]
+proc findOperators(self: Parser, tokens: seq[Token])
 # End of forward declarations
@@ -436,7 +439,7 @@ proc makeCall(self: Parser, callee: Expression): CallExpr =

 proc parseGenericArgs(self: Parser) =
   ## Parses function generic arguments
   ## like function[type](arg)
-  discard
+  discard # TODO

 proc call(self: Parser): Expression =
@@ -596,12 +599,12 @@ proc assertStmt(self: Parser): Statement =
   result.file = self.file

-proc beginScope(self: Parser) =
+proc beginScope(self: Parser) {.inline.} =
   ## Begins a new lexical scope
   inc(self.scopeDepth)

-proc endScope(self: Parser) =
+proc endScope(self: Parser) {.inline.} =
   ## Ends a new lexical scope
   dec(self.scopeDepth)
@@ -631,8 +634,7 @@ proc namedBlockStmt(self: Parser): Statement =
   self.expect(Identifier, "expecting block name after 'block'")
   var name = newIdentExpr(self.peek(-1), self.scopeDepth)
   name.file = self.file
-  let enclosingLoop = self.currentLoop
-  self.currentLoop = Loop
+  inc(self.loopDepth)
   self.expect(LeftBrace, "expecting '{' after 'block'")
   while not self.check(RightBrace) and not self.done():
     code.add(self.declaration())
@@ -642,14 +644,14 @@ proc namedBlockStmt(self: Parser): Statement =
   result = newNamedBlockStmt(code, name, tok)
   result.file = self.file
   self.endScope()
-  self.currentLoop = enclosingLoop
+  dec(self.loopDepth)


 proc breakStmt(self: Parser): Statement =
   ## Parses break statements
   let tok = self.peek(-1)
   var label: IdentExpr
-  if self.currentLoop != Loop:
+  if self.loopDepth == 0:
     self.error("'break' cannot be used outside loops")
   if self.match(Identifier):
     label = newIdentExpr(self.peek(-1), self.scopeDepth)
@@ -673,7 +675,7 @@ proc continueStmt(self: Parser): Statement =
   ## Parses continue statements
   let tok = self.peek(-1)
   var label: IdentExpr
-  if self.currentLoop != Loop:
+  if self.loopDepth == 0:
     self.error("'continue' cannot be used outside loops")
   if self.match(Identifier):
     label = newIdentExpr(self.peek(-1), self.scopeDepth)
@@ -747,8 +749,7 @@ proc raiseStmt(self: Parser): Statement =

 proc forEachStmt(self: Parser): Statement =
   ## Parses C#-like foreach loops
   let tok = self.peek(-1)
-  let enclosingLoop = self.currentLoop
-  self.currentLoop = Loop
+  inc(self.loopDepth)
   self.expect(Identifier)
   let identifier = newIdentExpr(self.peek(-1), self.scopeDepth)
   self.expect("in")
@@ -756,10 +757,7 @@ proc forEachStmt(self: Parser): Statement =
   self.expect(LeftBrace)
   result = newForEachStmt(identifier, expression, self.blockStmt(), tok)
   result.file = self.file
-  self.currentLoop = enclosingLoop
-
-
-proc findOperators(self: Parser, tokens: seq[Token])
+  dec(self.loopDepth)
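Replacing the old `currentLoop` enum with a `loopDepth` counter removes the save-and-restore dance around every loop construct: a counter nests naturally, and using `break` or `continue` outside a loop is simply `loopDepth == 0`. A minimal, self-contained sketch of the idea; the `MiniParser` type below is illustrative and not peon's actual `Parser`.

```nim
# Minimal sketch of loop-depth tracking, independent of peon's Parser type.
# enterLoop/leaveLoop stand in for the inc/dec calls sprinkled through
# whileStmt/forEachStmt/namedBlockStmt above.

type MiniParser = object
  loopDepth: int

proc enterLoop(p: var MiniParser) = inc(p.loopDepth)
proc leaveLoop(p: var MiniParser) = dec(p.loopDepth)

proc checkBreak(p: MiniParser) =
  ## What breakStmt/continueStmt now boil down to
  if p.loopDepth == 0:
    raise newException(ValueError, "'break' cannot be used outside loops")

when isMainModule:
  var p = MiniParser()
  p.enterLoop()      # outer while
  p.enterLoop()      # nested foreach
  p.leaveLoop()      # inner loop ends...
  p.checkBreak()     # ...but 'break' is still legal in the outer one
  p.leaveLoop()
  doAssert p.loopDepth == 0
```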

 proc importStmt(self: Parser, fromStmt: bool = false): Statement =
@@ -806,6 +804,10 @@ proc importStmt(self: Parser, fromStmt: bool = false): Statement =
       break
     elif i == searchPath.high():
       self.error(&"""could not import '{path}': module not found""")
+  if not self.modules.getOrDefault(path, true):
+    self.error(&"could not import '{path}' (recursive dependency detected)")
+  else:
+    self.modules[path] = false
   try:
     var source = readFile(path)
     var tree = self.tree
@@ -819,6 +821,8 @@ proc importStmt(self: Parser, fromStmt: bool = false): Statement =
     self.tree = tree
     self.current = current
     self.tokens = tokens
+    # Module has been fully loaded and can now be used
+    self.modules[path] = true
   except IOError:
     self.error(&"could not import '{path}': {getCurrentExceptionMsg()}")
   except OSError:
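The new `modules` table gives each import path three states: absent (never imported), `false` (import currently in progress) and `true` (fully loaded). Hitting a path that is present but still `false` means the module is, directly or indirectly, importing itself. A standalone sketch of that check, with fake in-memory dependencies instead of real files; the names here are illustrative, not peon's parser API.

```nim
# Standalone sketch of the cycle check added to importStmt(), using the same
# path -> "fully loaded?" table idea.
import std/tables

let deps = {"a": @["b"], "b": @["a"], "c": newSeq[string]()}.toTable()  # a <-> b is a cycle

var modules = newTable[string, bool]()

proc importModule(path: string) =
  if not modules.getOrDefault(path, true):
    # Seen before but not fully loaded yet: we are inside its own import
    raise newException(ValueError, "recursive dependency detected for '" & path & "'")
  if modules.getOrDefault(path, false):
    return                        # already fully loaded, nothing to do
  modules[path] = false           # import in progress
  for dep in deps[path]:
    importModule(dep)
  modules[path] = true            # fully loaded, safe to import again

when isMainModule:
  importModule("c")               # fine
  doAssert modules["c"]
  try:
    importModule("a")             # a imports b imports a -> cycle
  except ValueError as e:
    echo e.msg
```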
@@ -859,14 +863,13 @@ proc whileStmt(self: Parser): Statement =
   ## Parses a C-style while loop statement
   let tok = self.peek(-1)
   self.beginScope()
-  let enclosingLoop = self.currentLoop
+  inc(self.loopDepth)
   let condition = self.expression()
   self.expect(LeftBrace)
-  self.currentLoop = Loop
   result = newWhileStmt(condition, self.blockStmt(), tok)
   result.file = self.file
-  self.currentLoop = enclosingLoop
   self.endScope()
+  dec(self.loopDepth)


 proc ifStmt(self: Parser): Statement =
@@ -1049,7 +1052,7 @@ proc parseFunExpr(self: Parser): LambdaExpr =

 proc parseGenericConstraint(self: Parser): Expression =
-  ## Recursivelt parses a generic constraint
+  ## Recursively parses a generic constraint
   ## and returns it as an expression
   result = self.expression() # First value is always an identifier of some sort
   if not self.check(RightBracket):
@@ -1301,6 +1304,7 @@ proc typeDecl(self: Parser): TypeDecl =
   var generics: seq[tuple[name: IdentExpr, cond: Expression]] = @[]
   var pragmas: seq[Pragma] = @[]
   result = newTypeDecl(name, fields, defaults, isPrivate, token, pragmas, generics, nil, false, false)
+  result.file = self.file
   if self.match(LeftBracket):
     self.parseGenerics(result)
   self.expect("=", "expecting '=' after type name")
@@ -1315,7 +1319,6 @@ proc typeDecl(self: Parser): TypeDecl =
       result.isEnum = true
     of "object":
       discard self.step()
-      discard # Default case
     else:
       hasNone = true
   if hasNone:
@@ -1334,7 +1337,7 @@ proc typeDecl(self: Parser): TypeDecl =
   self.expect(LeftBrace, "expecting '{' after type declaration")
   if self.match(TokenType.Pragma):
     for pragma in self.parsePragmas():
-      pragmas.add(pragma)
+      result.pragmas.add(pragma)
   var
     argName: IdentExpr
     argPrivate: bool
@@ -1356,8 +1359,6 @@ proc typeDecl(self: Parser): TypeDecl =
     else:
       if not self.check(RightBrace):
         self.expect(",", "expecting comma after enum field declaration")
-  result.pragmas = pragmas
-  result.file = self.file


 proc declaration(self: Parser): Declaration =
@@ -1420,11 +1421,12 @@ proc parse*(self: Parser, tokens: seq[Token], file: string, lines: seq[tuple[sta
   self.lines = lines
   self.current = 0
   self.scopeDepth = 0
-  self.currentLoop = LoopContext.None
+  self.loopDepth = 0
   self.currentFunction = nil
   self.tree = @[]
   if not persist:
     self.operators = newOperatorTable()
+    self.modules = newTable[string, bool]()
   self.findOperators(tokens)
   while not self.done():
     self.tree.add(self.declaration())
View File
@@ -51,28 +51,28 @@ proc getLineEditor: LineEditor =
   result.bindHistory(history)

-proc repl(warnings: seq[WarningKind] = @[], mismatches: bool = false, mode: CompileMode = Debug) =
+proc repl(warnings: seq[WarningKind] = @[], mismatches: bool = false, mode: CompileMode = Debug, breakpoints: seq[uint64] = @[]) =
   styledEcho fgMagenta, "Welcome into the peon REPL!"
   var
     keep = true
     tokens: seq[Token] = @[]
     tree: seq[Declaration] = @[]
     compiler = newBytecodeCompiler(replMode=true)
-    compiled: Chunk
+    compiled: Chunk = newChunk()
     serialized: Serialized
     tokenizer = newLexer()
     vm = newPeonVM()
+    parser = newParser()
     debugger = newDebugger()
     serializer = newSerializer()
     editor = getLineEditor()
     input: string
-    current: string
+    first: bool = false
   tokenizer.fillSymbolTable()
   editor.bindEvent(jeQuit):
     stdout.styledWriteLine(fgGreen, "Goodbye!")
     keep = false
     input = ""
-    current = ""
   editor.bindKey("ctrl+a"):
     editor.content.home()
   editor.bindKey("ctrl+e"):
@@ -80,21 +80,15 @@ proc repl(warnings: seq[WarningKind] = @[], mismatches: bool = false, mode: Comp
   while keep:
     try:
       input = editor.read()
-      if input == "#reset":
-        compiled = newChunk()
-        current = ""
-        continue
-      elif input == "#show":
-        echo current
-      elif input == "#clear":
+      if input == "#clear":
         stdout.write("\x1Bc")
         continue
       elif input == "":
         continue
-      tokens = tokenizer.lex(current & input & "\n", "stdin")
+      tokens = tokenizer.lex(input, "stdin")
       if tokens.len() == 0:
         continue
-      when debugLexer:
+      if debugLexer:
         styledEcho fgCyan, "Tokenization step:"
         for i, token in tokens:
           if i == tokens.high():
@@ -102,22 +96,22 @@ proc repl(warnings: seq[WarningKind] = @[], mismatches: bool = false, mode: Comp
             break
           styledEcho fgGreen, "\t", $token
         echo ""
-      tree = newParser().parse(tokens, "stdin", tokenizer.getLines(), current & input & "\n")
+      tree = parser.parse(tokens, "stdin", tokenizer.getLines(), input, persist=true)
       if tree.len() == 0:
         continue
-      when debugParser:
+      if debugParser:
         styledEcho fgCyan, "Parsing step:"
         for node in tree:
           styledEcho fgGreen, "\t", $node
         echo ""
-      compiled = newBytecodeCompiler(replMode=true).compile(tree, "stdin", tokenizer.getLines(), current & input & "\n", showMismatches=mismatches, disabledWarnings=warnings, mode=mode)
+      compiled = compiler.compile(tree, "stdin", tokenizer.getLines(), input, chunk=compiled, showMismatches=mismatches, disabledWarnings=warnings, mode=mode, incremental=true)
-      when debugCompiler:
+      if debugCompiler:
         styledEcho fgCyan, "Compilation step:\n"
         debugger.disassembleChunk(compiled, "stdin")
         echo ""
       serialized = serializer.loadBytes(serializer.dumpBytes(compiled, "stdin"))
-      when debugSerializer:
+      if debugSerializer:
         styledEcho fgCyan, "Serialization step: "
         styledEcho fgBlue, "\t- Peon version: ", fgYellow, &"{serialized.version.major}.{serialized.version.minor}.{serialized.version.patch}", fgBlue, " (commit ", fgYellow, serialized.commit[0..8], fgBlue, ") on branch ", fgYellow, serialized.branch
         stdout.styledWriteLine(fgBlue, "\t- Compilation date & time: ", fgYellow, fromUnix(serialized.compileDate).format("d/M/yyyy HH:mm:ss"))
@@ -141,8 +135,11 @@ proc repl(warnings: seq[WarningKind] = @[], mismatches: bool = false, mode: Comp
         styledEcho fgGreen, "OK"
       else:
         styledEcho fgRed, "Corrupted"
-      vm.run(serialized.chunk)
-      current &= input & "\n"
+      if not first:
+        vm.run(serialized.chunk, repl=true, breakpoints=breakpoints)
+        first = true
+      else:
+        vm.resume(serialized.chunk)
     except LexingError:
       print(LexingError(getCurrentException()))
     except ParseError:
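The REPL now reuses one parser, compiler and VM across inputs instead of re-lexing and recompiling everything typed so far: the parser is called with `persist=true`, the compiler keeps appending to the same chunk via `incremental=true`, and the VM is booted once with `run()` and then fed each new piece of the chunk with `resume()`. A stripped-down sketch of that control flow, with stand-in types rather than peon's real VM and compiler API:

```nim
# Shape of the incremental REPL loop; Chunk/Vm and their procs are stand-ins,
# only the run-once-then-resume pattern matters here.

type
  Chunk = ref object
    code: seq[byte]
  Vm = object
    started: bool

proc compileInto(chunk: Chunk, snippet: string) =
  ## Pretend compiler: appends "bytecode" for the snippet to the same chunk
  for c in snippet:
    chunk.code.add(byte(c))

proc run(vm: var Vm, chunk: Chunk) =
  vm.started = true
  echo "run: executing ", chunk.code.len, " bytes from scratch"

proc resume(vm: var Vm, chunk: Chunk) =
  doAssert vm.started
  echo "resume: continuing with ", chunk.code.len, " bytes total"

when isMainModule:
  var
    vm = Vm()
    chunk = Chunk()          # one chunk reused across REPL inputs
    first = false
  for snippet in ["var x = 1;", "print(x);"]:
    chunk.compileInto(snippet)
    if not first:
      vm.run(chunk)          # first input boots the VM
      first = true
    else:
      vm.resume(chunk)       # later inputs resume where it left off
```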
@@ -157,7 +154,7 @@ proc repl(warnings: seq[WarningKind] = @[], mismatches: bool = false, mode: Comp
   quit(0)

-proc runFile(f: string, fromString: bool = false, dump: bool = true, breakpoints: seq[uint64] = @[], dis: bool = false,
+proc runFile(f: string, fromString: bool = false, dump: bool = true, breakpoints: seq[uint64] = @[],
              warnings: seq[WarningKind] = @[], mismatches: bool = false, mode: CompileMode = Debug, run: bool = true,
              backend: PeonBackend = PeonBackend.Bytecode, output: string) =
   var
@@ -186,7 +183,7 @@ proc runFile(f: string, fromString: bool = false, dump: bool = true, breakpoints
   tokens = tokenizer.lex(input, f)
   if tokens.len() == 0:
     return
-  when debugLexer:
+  if debugLexer:
     styledEcho fgCyan, "Tokenization step:"
     for i, token in tokens:
       if i == tokens.high():
@@ -197,7 +194,7 @@ proc runFile(f: string, fromString: bool = false, dump: bool = true, breakpoints
   tree = parser.parse(tokens, f, tokenizer.getLines(), input)
   if tree.len() == 0:
     return
-  when debugParser:
+  if debugParser:
     styledEcho fgCyan, "Parsing step:"
     for node in tree:
       styledEcho fgGreen, "\t", $node
@@ -205,11 +202,9 @@ proc runFile(f: string, fromString: bool = false, dump: bool = true, breakpoints
   case backend:
     of PeonBackend.Bytecode:
       compiled = compiler.compile(tree, f, tokenizer.getLines(), input, disabledWarnings=warnings, showMismatches=mismatches, mode=mode)
-      when debugCompiler:
+      if debugCompiler:
         styledEcho fgCyan, "Compilation step:\n"
         debugger.disassembleChunk(compiled, f)
-      if dis:
-        debugger.disassembleChunk(compiled, f)
       var path = splitFile(if output.len() > 0: output else: f).dir
       if path.len() > 0:
         path &= "/"
@@ -224,31 +219,35 @@ proc runFile(f: string, fromString: bool = false, dump: bool = true, breakpoints
     stderr.styledWriteLine(fgRed, styleBright, "Error: ", fgDefault, "the selected backend is not implemented yet")
   elif backend == PeonBackend.Bytecode:
     serialized = serializer.loadFile(f)
-  if backend == PeonBackend.Bytecode:
-    when debugSerializer:
+  if backend == PeonBackend.Bytecode and debugSerializer:
     styledEcho fgCyan, "Serialization step: "
     styledEcho fgBlue, "\t- Peon version: ", fgYellow, &"{serialized.version.major}.{serialized.version.minor}.{serialized.version.patch}", fgBlue, " (commit ", fgYellow, serialized.commit[0..8], fgBlue, ") on branch ", fgYellow, serialized.branch
     stdout.styledWriteLine(fgBlue, "\t- Compilation date & time: ", fgYellow, fromUnix(serialized.compileDate).format("d/M/yyyy HH:mm:ss"))
     stdout.styledWrite(fgBlue, &"\t- Constants segment: ")
     if serialized.chunk.consts == compiled.consts:
       styledEcho fgGreen, "OK"
     else:
       styledEcho fgRed, "Corrupted"
     stdout.styledWrite(fgBlue, &"\t- Code segment: ")
     if serialized.chunk.code == compiled.code:
       styledEcho fgGreen, "OK"
     else:
       styledEcho fgRed, "Corrupted"
     stdout.styledWrite(fgBlue, "\t- Line info segment: ")
     if serialized.chunk.lines == compiled.lines:
       styledEcho fgGreen, "OK"
     else:
       styledEcho fgRed, "Corrupted"
     stdout.styledWrite(fgBlue, "\t- Functions segment: ")
     if serialized.chunk.functions == compiled.functions:
       styledEcho fgGreen, "OK"
     else:
       styledEcho fgRed, "Corrupted"
+    stdout.styledWrite(fgBlue, "\t- Modules segment: ")
+    if serialized.chunk.modules == compiled.modules:
+      styledEcho fgGreen, "OK"
+    else:
+      styledEcho fgRed, "Corrupted"
   if run:
     case backend:
       of PeonBackend.Bytecode:
@@ -284,7 +283,6 @@ when isMainModule:
   var dump: bool = true
   var warnings: seq[WarningKind] = @[]
   var breaks: seq[uint64] = @[]
-  var dis: bool = false
   var mismatches: bool = false
   var mode: CompileMode = CompileMode.Debug
   var run: bool = true
@@ -350,7 +348,7 @@ when isMainModule:
             stderr.styledWriteLine(fgRed, styleBright, "Error: ", fgDefault, &"error: invalid breakpoint value '{point}'")
             quit()
         of "disassemble":
-          dis = true
+          debugCompiler = true
         of "compile":
           run = false
         of "output":
@@ -361,8 +359,12 @@ when isMainModule:
            backend = PeonBackend.Bytecode
           of "c":
            backend = PeonBackend.NativeC
-          of "cpp":
-            backend = PeonBackend.NativeCpp
+          of "debug-dump":
+            debugSerializer = true
+          of "debug-lexer":
+            debugLexer = true
+          of "debug-parser":
+            debugParser = true
         else:
           stderr.styledWriteLine(fgRed, styleBright, "Error: ", fgDefault, &"error: unkown option '{key}'")
           quit()
@@ -403,14 +405,16 @@ when isMainModule:
         of "c":
           run = false
         of "d":
-          dis = true
+          debugCompiler = true
         else:
           stderr.styledWriteLine(fgRed, styleBright, "Error: ", fgDefault, &"unkown option '{key}'")
           quit()
   else:
     echo "usage: peon [options] [filename.pn]"
     quit()
+  if breaks.len() == 0 and debugVM:
+    breaks.add(0)
   if file == "":
-    repl(warnings, mismatches, mode)
+    repl(warnings, mismatches, mode, breaks)
   else:
-    runFile(file, fromString, dump, breaks, dis, warnings, mismatches, mode, run, backend, output)
+    runFile(file, fromString, dump, breaks, warnings, mismatches, mode, run, backend, output)
View File
@@ -2,6 +2,11 @@

 import values;

+operator `is`*[T: any](a, b: T): bool {
+    #pragma[magic: "Identity", pure]
+}
+
 operator `>`*[T: UnsignedInteger](a, b: T): bool {
     #pragma[magic: "GreaterThan", pure]
 }
@@ -12,7 +17,7 @@ operator `<`*[T: UnsignedInteger](a, b: T): bool {
 }

-operator `==`*[T: Number | inf](a, b: T): bool {
+operator `==`*[T: Number | inf | bool](a, b: T): bool {
     #pragma[magic: "Equal", pure]
 }
View File
@@ -16,4 +16,9 @@ export comparisons;

 var version* = 1;
 var _private = 5; # Invisible outside the module (underscore is to silence warning)
 var test* = 0x60;
+
+
+fn testGlobals*: bool {
+    return version == 1 and _private == 5 and test == 0x60;
+}
View File
@@ -1,4 +1,5 @@

 import std;
+import time;

 fn fib(n: int): int {
@@ -10,7 +11,7 @@ fn fib(n: int): int {

 print("Computing the value of fib(37)");
-var x = clock();
+var x = time.clock();
 print(fib(37));
-print(clock() - x);
+print(time.clock() - x);
 print("Done!");
View File
@@ -1,7 +1,7 @@

 import std;

-const max = 50000;
+const max = 500000;
 var x = max;
 var s = "just a test";