Forum Discussion
Altera_Forum
Honored Contributor
11 years ago

To both you guys (and anyone else reading):
Okay, I'll give Verilog a try. But seriously, don't you guys have one big "block diagram" for your FPGA designs? Sure, I write huge software applications that don't have a single visual "block diagram" (though often I do draw one). But at least I have many clearly distinguished "subsystems" that each serve a single purpose.

I think the main problem (for me) is the fundamental difference between hardware and software. In hardware, everything in the circuit is happening all the time; the entire schematic is "executing" constantly (or at least on every clock edge). In software, one statement in one place is executing at any given moment. Very different! That is why I find schematics a more natural fit for hardware: you can "see the whole", which matters because everything in the design is always executing. In contrast, text code is a reasonable match for software, because execution is a serial process of "one operation at a time"... just like reading lines of code.

Nonetheless, it appears I must accept being forced into "unnatural acts" because the industry "says so". I've got to assume that EVERY wire in HDL, no matter how trivial, must have a name; otherwise it couldn't get from one place to another. On a schematic, everything nearby or related is simply connected by a line (a wire) or a bus (a bundle of 8, 16, 32, or 64 ordered wires). This must create a massive overabundance of names! And where you have, say, 38 registers or 17 multiplexers in a design, the internal names in every register or multiplexer are the same --- unless you actually have to create 38 separate HDL files for the 38 separate registers (just like I draw 38 rectangles on a schematic). I guess I'll find that out once I get started.

I gather you're saying the Quartus tools (the RTL viewer?) can create a block diagram of sorts from a directory full of HDL files. That must be... eh... ugly (because the locations on the diagram are not laid out sensibly, as in a human-drawn schematic). Frankly, I don't believe all the propaganda about schematics being inherently overwhelming in size. For example, on a schematic, I can see a whole CPU as one rectangle. If I designed that CPU, I would also have a diagram showing the internal subsystems of the CPU in case I wanted to examine them (which would be rare, if ever, once I made the CPU work --- which is why software folks never need to see the block diagram of a CPU at all). I could, for example, see the ALU (arithmetic logic unit) portion as a block. If I wanted to drill down into the ALU, I could do that and see the add/subtract/compare section, the OR section, the AND section, the XOR section, the NOT section and so forth, plus the multiplexers that choose which operation is routed to the output. And if I wanted to see how a multiplexer was built with NAND gates and inverters, well, I could dive into that too.

The bottom line of the above: as long as every component/subsystem in a schematic is just a collection of simpler components/subsystems inside, one can always look at exactly the level of complexity and functionality one needs at any given instant. Try that with hundreds of pages of text. Doesn't work and can't work (unless I'm missing something very important).
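Actually, from the little reading I've done so far (so take this with a grain of salt --- I haven't written a line of Verilog yet), it looks like a bus gets one name for many wires, and a single module file can be instantiated as many times as you like. All the module and signal names below are my own invention; it's just a sketch:

    // One file, one module --- instantiated as many times as needed.
    module byte_reg (
        input            clk,
        input      [7:0] d,    // one name covering eight wires (a bus)
        output reg [7:0] q
    );
        always @(posedge clk)
            q <= d;            // capture d on every rising clock edge
    endmodule

    // Elsewhere, three of the "38 registers" --- same module, three instances:
    module top (
        input        clk,
        input  [7:0] a, b, c,
        output [7:0] qa, qb, qc
    );
        byte_reg r0 (.clk(clk), .d(a), .q(qa));
        byte_reg r1 (.clk(clk), .d(b), .q(qb));
        byte_reg r2 (.clk(clk), .d(c), .q(qc));
    endmodule

If that's right, then 38 registers means one module file and 38 instance names --- not so different from drawing 38 rectangles that all reference one sheet.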
Well, I'm sure the RTL viewer tries to MAKE something like this work as best it can without knowing how to organize each level in the most sensible way (because the RTL viewer doesn't possess the consciousness of a human hardware engineer), but ultimately these software-generated views can never be as cleanly organized as the schematics a hardware engineer would create. Of course, I'm saying this without having seen what RTL viewers create, but my decades of experience with both hardware and software lead me to believe it. Maybe I'll find these tools are infinitely more brilliant than I can imagine. I hope so, and we shall see.

BTW, how many gates do you imagine were in the 32-bit and 64-bit CPUs I designed decades ago? And that was drawing on paper, when no schematic-entry software existed. I didn't have any problem of "mess" at all. Why? Because I didn't repeat the pattern of gates for an 8-input multiplexer dozens or hundreds of times in my schematic. I just drew a rectangle, labeled the inputs and outputs, and named it "8_input_mux" or equivalent. If someone wanted to find out how that 8-input multiplexer was implemented with gates, they just had to look at the sheet of paper that implemented that component.

On the flip side, I do recognize the very real and very significant advantages of text code. Namely, nothing proprietary. And you are certainly correct that, in principle at least, version control is a lot easier to automate with nothing but text files. That wasn't a problem for me, but in principle it may be for some people. Anyway, I'm not trying to sell you on schematics; I just doubt some of the claims I see over and over again about HDL versus schematics. I know better, because I've been there and done that... even with ancient technology (paper, pencil, and no software aids).

-----

Okay, now to ask my next question (and expose one of my personal weaknesses). My brain has been trained over decades (by me) to learn better in a "bottom-up" fashion, not "top-down". I've seen other smart people who are exactly the opposite, and sometimes I feel a bit jealous. Because of this, I would much prefer to learn Verilog by viewing pages and pages of simple examples of hardware components... then let my brain form the appropriate abstractions (rather than the opposite direction, which is much more common).

So, for example, I'd like to find a book or course or tutorial that shows me the Verilog for a 4-bit register and an 8-bit register; from those two examples I would instantly understand how to write the Verilog for an n-bit register. Then show me a 4-input multiplexer and an 8-input multiplexer. Then show me how to connect the 1-bit outputs from 4 multiplexers into a 4-bit register; from that, I'll understand how to connect any n output bits from anything anywhere to any n-bit register (or n-bit anything else). And so forth.

The infinitely abstract approach always drives me nuts. I know, that's just me. My brain wants to form abstractions from multiple concrete instances, not have abstractions thrown at it in the hope that someday I can figure out how those abstractions apply to concretes I might encounter (or want to invent). And so my question is this: do you know of any books, tutorials, or other information sources that approach Verilog in this bottom-up manner? By analogy, I want to understand the existence and nature of OR-gates, AND-gates, XOR-gates, and NOT-gates... before I draw some rectangle and label it "64-bit CPU".
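To make the request concrete, here is my own sketch of the kind of side-by-side examples I mean (pieced together from fragments I've skimmed, so treat the syntax as a guess):

    // A 4-bit register...
    module reg4 (input clk, input [3:0] d, output reg [3:0] q);
        always @(posedge clk) q <= d;    // capture d on each rising clock edge
    endmodule

    // ...an 8-bit register, identical except for the widths...
    module reg8 (input clk, input [7:0] d, output reg [7:0] q);
        always @(posedge clk) q <= d;
    endmodule

    // ...from which the n-bit generalization is obvious:
    module regn #(parameter N = 16)
                 (input clk, input [N-1:0] d, output reg [N-1:0] q);
        always @(posedge clk) q <= d;
    endmodule

    // A 4-input (4-to-1), 1-bit multiplexer:
    module mux4 (input [3:0] in, input [1:0] sel, output out);
        assign out = in[sel];            // sel picks one of the four input bits
    endmodule

    // And the 1-bit outputs of four multiplexers feeding a 4-bit register:
    module mux4_to_reg4 (
        input        clk,
        input  [3:0] in0, in1, in2, in3,
        input  [1:0] sel,
        output [3:0] q
    );
        wire [3:0] picked;               // the four mux outputs, bundled as a bus
        mux4 m0 (.in(in0), .sel(sel), .out(picked[0]));
        mux4 m1 (.in(in1), .sel(sel), .out(picked[1]));
        mux4 m2 (.in(in2), .sel(sel), .out(picked[2]));
        mux4 m3 (.in(in3), .sel(sel), .out(picked[3]));
        reg4 r0 (.clk(clk), .d(picked), .q(q));
    endmodule

Show me a handful of pairs like that and the abstractions form themselves.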
Frankly, if my purpose is to DESIGN a freaking 64-bit CPU, a rectangle called "64-bit CPU" tells me NOTHING. I might not even know what a CPU is when I look at such a rectangle. And even if I want to design a CPU, or an interplanetary spacecraft, or anything else, I need to know what components I have available to build it with, and how those components work. Otherwise, I'd be no more able to build a 64-bit CPU or an interplanetary spacecraft than some random doofus on the street. Incidentally, I accept that some people work in the opposite manner. Sometimes I'm jealous. But... I'm also aware that my way has some advantages too. So... do you know of any books or other materials organized "upside down"?
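And for the very bottom rung of the ladder --- seeing how a multiplexer is built from NAND gates and inverters, as I said above --- I gather Verilog even has built-in gate primitives, so it would look something like this (again, my untested sketch):

    // A 2-to-1 multiplexer from NAND gates and an inverter,
    // using Verilog's built-in gate primitives:
    module mux2_gates (input a, input b, input sel, output y);
        wire nsel, n1, n2;
        not  g0 (nsel, sel);     // inverter
        nand g1 (n1, a, nsel);   // ~(a & ~sel)
        nand g2 (n2, b, sel);    // ~(b & sel)
        nand g3 (y, n1, n2);     // y = (a & ~sel) | (b & sel)
    endmodule

Gates first, rectangles later --- that's the order my brain wants.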