Forum Discussion

Altera_Forum
Honored Contributor
11 years ago

are "symbols" or "block symbols" sorta like "personal IP" ???

I'm trying to figure out how to create an FPGA design in a plausible manner. From centuries of hardware design with schematics, I naturally prefer to create an FPGA design in schematic mode. However, I can see that a large design could easily grow so huge that I'd need a house-sized display to see the whole schematic at once. And scrolling a little subwindow around the entire schematic would be a terrible experience (and easy to get lost in).

So the obvious idea that occurs to me is to "encapsulate" and "create my own components". For example, once I create a CRC32 subsystem in schematic mode (or get my assistant to create that subsystem in verilog), I'd want to be able to convert that CRC32 subsystem into a single "component"... which for practical purposes is a "block diagram"... which for practical purposes sorta seems similar to a "megafunction" or perhaps call it "personal IP".

The next question is, when I create them, how do I specify which signals are inputs and outputs (or bidirectional, or busses), so when I later insert them as a component/subsystem into a larger "schematic" (or as a sub-component in a larger component), each appears as a rectangle with some inputs [on the left] and outputs [on the right] and control signals [on top and bottom]?

Now, I don't quite understand how this works (or if this fully works), but how is this done, and what is each component created this way called in Altera/Quartus terminology? Would this be the same as a "symbol" or a "block symbol"?

And then, how would I insert a few of these CRC32 symbols/block-symbols/components into a design? Where would I find them and how would I insert them? Would they all appear in some "personal IP" menu somewhere once I save them? Or what?

One of the really nice things about this technique... assuming it works like I envision here... is that I could create some of these components in schematic mode, and my assistant could create some with verilog, but they'd both become components that could be dropped into a larger "schematic" or "block" AKA "block diagram".

Is a "schematic" considered the same thing as a "block diagram" in altera/quartus terminology? Is there a difference?

The next question is, can I dig down deeper into a component by double-clicking that component in the schematic or block-diagram I'm currently working in? Would it create a new window somewhere that contains the innards of that component? If so, could I edit that window to change the component? And, what if I double-click a component inside another component? Will that create yet another window, and so forth, hierarchically? That would be great, though I doubt that would be necessary very often.

Lots to understand in quartus-land. For a "schematic guy" who is not comfortable with HDL, these features would help a lot. Actually, now that I think about it, how could a pure HDL guy even deal with these [component hierarchy] issues? No, don't answer that. I don't want to know. Yet.

5 Replies

  • Altera_Forum
    Honored Contributor

    When you create a schematic, you can create a symbol file from it to import into other schematics - File -> Create/Update -> Create Symbol Files for Current File (or something like that). It will use the inputs and outputs you have specified as the block IOs (and in the same order). Then you can import your block into a new schematic.
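    This works for HDL sources too - the ports declared in the file become the pins on the generated symbol. As an illustration (module and port names invented, not verified in Quartus), a Verilog file roughly like this would give a symbol with the inputs on the left and the output on the right:

    // hypothetical example: these port declarations become the symbol's pins
    module crc32_unit (
      input         clk,
      input  [7:0]  data_in,
      output [31:0] crc_out
    );
      // placeholder body just so the example is complete
      assign crc_out = {24'd0, data_in};
    endmodule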

    But I very strongly recommend you do not go down this route. FPGA design using schematics has all sorts of problems:

    1. You cannot simulate the schematic directly.

    2. It's very hard to see changes in version control.

    3. It can easily become a mess.

    4. It's very hard for others to follow.

    5. It's very difficult to add comments to the design.

    I have had to make modifications to someone else's design/schematics in the past - and I have to say it was one of the worst jobs I ever had to do. It was very hard to work out what was going on. The design really didn't work as it was meant to, so I had to create bolt-ons to fix it, as re-writing was not a possibility.

    Schematics are a holdover from the start of FPGAs, when they were designed by hardware designers. At the time FPGAs were very simple, so schematics were a viable option, but now, with hundreds of thousands of LUTs in modern FPGAs, schematic entry is really not a good idea.
  • Altera_Forum
    Honored Contributor

    Tricky is right, please look into Verilog or VHDL. (I learned schematic first, then VHDL, but now prefer Verilog. Either will work fine for you, but I find Verilog easier, especially at first.) Your design will have hierarchy, which is done through module instantiation. Google should show examples, but almost any HDL design has hierarchy. The hierarchy browser in Quartus can be used to open upper/lower files quickly. You'll also want to use the RTL viewer, especially at first, to "see" what the code you write looks like as a schematic.
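    As a quick sketch (untested, with made-up names), hierarchy in Verilog is just one module instantiating another - much like dropping two copies of a symbol onto a sheet:

    // a leaf module: an 8-bit register
    module reg8 (input clk, input [7:0] d, output reg [7:0] q);
      always @(posedge clk) q <= d;
    endmodule

    // the level above: instantiate the leaf twice, each instance named
    module top (input clk, input [7:0] din, output [7:0] dout);
      wire [7:0] stage1;
      reg8 u_stage1 (.clk(clk), .d(din),    .q(stage1));
      reg8 u_stage2 (.clk(clk), .d(stage1), .q(dout));
    endmodule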

    FYI, I find having a schematic background to be very useful, as I still write code that I think of in schematic terms first, for the most part. I find designers often get in trouble early on when they have a software background and try to write HDL like software. You'll have a bit of a learning curve but it will be worth it in the end.
  • Altera_Forum
    Honored Contributor

    To both you guys (and anyone else reading):

    Okay, I'll give verilog a try. But seriously, don't you guys have one big "block diagram" for your FPGA designs? Sure, I write huge software applications that don't have a single visual "block diagram" (though often I do draw one). But at least I have many clearly distinguished "subsystems" that perform the same purpose.

    But I think the main problem (for me) is the fundamental difference between hardware and software. In hardware everything in the circuit is happening all the time. The entire schematic is "executing" constantly (or at least on every clock edge). While in software, one statement at one place is executing at any given moment. Very different!

    Which is why I find schematics more naturally fit hardware, because you can "see the total" which is very relevant because everything in the design is always executing, all the time. In contrast, text code is a reasonable match to software, because execution is a serial process of "one operation at a time"... just like reading lines of code.

    Nonetheless, it appears I must accept being forced into "unnatural acts" because the industry "says so".

    I've got to assume that EVERY wire in HDL, no matter how trivial, must have a name. Otherwise it couldn't get from one place to another. While on a schematic, everything close-by or related is simply connected by a line (a wire) or a bus (a bundle of 8, 16, 32, 64 ordered wires).

    This must create a massive overabundance of names!

    And where you have, say, 38 registers or 17 multiplexers in a design, now all the names on every register or multiplexer are the same --- unless you actually have to create 38 separate HDL files for the 38 separate registers (just like I draw 38 rectangles on a schematic). I guess I'll find that out once I get started.

    I guess you're saying the Quartus tools (the RTL viewer?) can create a block diagram of sorts from a directory full of HDL files. That must be... eh... ugly (because the locations on the diagram aren't arranged sensibly as in a human-drawn schematic).

    Frankly, I don't believe all the propaganda about schematics being inherently overwhelming in size. For example, on a schematic, I can see a whole CPU as one rectangle. If I designed that CPU, I would also have a diagram that showed the internal subsystems in the CPU in case I wanted to examine them (which would be rare if ever once I made the CPU work, which is why software folks don't need to ever even see the block-diagram of the CPU). I could, for example, see the ALU (arithmetic logic unit) portion as a block. If I wanted to drill down into the ALU, I could do that and see the add/subtract/compare section, the OR section, the AND section, the XOR section, the NOT section and so forth, plus the multiplexers that choose which operation is to be routed to the output. And if I wanted to see how the multiplexer was created with NAND gates and inverters, well, I could dive into that too.

    The bottom line of the above is... as long as every component/subsystem in a schematic is just a collection of simpler components/subsystems inside, one can always look at the exact level of complexity and functionality they need at any given instant.

    Try that with hundreds of pages of text. Doesn't work and can't work (unless I'm missing something very important). Well, I'm sure the "RTL viewer" tries to MAKE something like this work as best it can without knowing how to organize each level in the most sensible way (because the RTL viewer doesn't possess human-level hardware engineer consciousness), but ultimately these software generated views can never be as cleanly organized as the schematics a hardware engineer would create.

    Of course, I'm saying this without having seen what "RTL viewers" create, but my decades of experience with both hardware and software lead me to believe what I said. Maybe I'll find these tools are infinitely more brilliant than I can imagine. I hope so, and we shall see.

    BTW, how many gates do you imagine were in the 32-bit and 64-bit CPUs I designed decades ago? And that was drawing on paper, when no schematic entry software existed. I didn't have any problem of "mess" at all. Why? Because I didn't repeat the pattern of gates for an 8-input multiplexer dozens or hundreds of times in my schematic. I just drew a rectangle, labeled the inputs and outputs, and named it "8_input_mux" or equivalent. If someone wanted to find out how that 8-input multiplexer was implemented with gates, they just had to look at the sheet of paper that implemented that component.

    On the flip side, I do recognize the very real and very significant advantages of text code. Namely, no proprietary anything. And you are certainly correct to claim that in principle at least, version control is a lot easier to automate with nothing but text files. That wasn't a problem for me, but in principle it may be for some people.

    Anyway, I'm not trying to sell you on schematics, I just doubt some of the claims I see over and over and over again about HDL versus schematics. I know better, because I've been there, done that... even with ancient technology (paper, pencil, and no software aids).

    -----

    Okay, now to ask my next question (and expose one of my personal weaknesses). My brain has been trained over decades (by me) to learn better in the "bottom-up" fashion, not "top-down" fashion. I've seen other smart people who are exactly the opposite, and sometimes I feel a bit jealous. Because of this, I would much prefer to learn verilog by viewing pages and pages of simple examples of hardware components... then let my brain form the appropriate abstractions (rather than the opposite direction, which is much more common).

    So, for example, I'd like to find a book or course or tutorial that lets me see the verilog for a 4-bit register and an 8-bit register. From those two examples I would instantly understand how to write verilog for an n-bit register. Then show me a 4-input multiplexer and an 8-input multiplexer. Then show me how to connect the 1-bit outputs from 4 multiplexers into a 4-bit register. From that, I'll understand how to connect any n output bits from anything anywhere to any n-bit input register (or n-bit anything else).
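    (To make the request concrete, I mean something like this progression - which I've cobbled together from web examples and have NOT verified myself:)

    // a 4-bit register
    module reg4 (input clk, input [3:0] d, output reg [3:0] q);
      always @(posedge clk) q <= d;
    endmodule

    // an 8-bit register: identical except the width...
    module reg8 (input clk, input [7:0] d, output reg [7:0] q);
      always @(posedge clk) q <= d;
    endmodule

    // ...so the n-bit abstraction follows: the width becomes a parameter
    module regn #(parameter N = 8) (input clk, input [N-1:0] d, output reg [N-1:0] q);
      always @(posedge clk) q <= d;
    endmodule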

    And so forth.

    The infinitely abstract approach always drives me nuts. I know, that's just me. My brain wants to form abstractions from multiple instances of concretes, not have abstractions thrown at me and hope that someday I can figure out how those abstractions apply to concretes I might encounter (or want to invent).

    And so my question is this. Do you know any book or tutorials or information sources that approach verilog in this bottom-up manner?

    By analogy, I want to understand the existence and nature of OR-gates, AND-gates, XOR-gates and NOT-gates... before I draw some rectangle and label it "64-bit CPU". Frankly, if my purpose is to DESIGN a freaking 64-bit CPU, a rectangle called "64-bit CPU" tells me NOTHING. I might not even know what a CPU is when I look at such a rectangle. And even if I want to design a CPU, or an interplanetary spacecraft, or anything else, I need to know what are the components that I have available to build it with, and how those components work. Otherwise, I'd be no more able to build a 64-bit CPU or interplanetary spacecraft than some random doofus on the street.

    Incidentally, I accept that some people work in the opposite manner. Sometimes I'm jealous. But... I'm also aware that my way has some advantages too. So... do you know of any books or other materials organized "upside down"?
  • Altera_Forum
    Honored Contributor

    --- Quote Start ---

    To both you guys (and anyone else reading):

    Okay, I'll give verilog a try. But seriously, don't you guys have one big "block diagram" for your FPGA designs? Sure, I write huge software applications that don't have a single visual "block diagram" (though often I do draw one). But at least I have many clearly distinguished "subsystems" that perform the same purpose.

    --- Quote End ---

    Yes we do - but it's often documented in MS Visio and Word. Where I work we need logic that compiles for both Xilinx AND Altera devices, so Altera or Xilinx schematics are out the window (even if we wanted to use them) as they are proprietary. There is Mentor's HDL Designer, which you may really like - it works from a graphical POV but underneath works with HDL (VHDL or Verilog). But there are issues:

    1. It imposes its own directory structures.

    2. You have to keep all the graphical files along with the HDL in version control.

    3. It's not cheap, and it's only a code visualisation tool.

    There are some people out there who absolutely love it, but afaik none of the members of our 30-strong firmware team use it any more, mostly for the above reasons.

    --- Quote Start ---

    But I think the main problem (for me) is the fundamental difference between hardware and software. In hardware everything in the circuit is happening all the time. The entire schematic is "executing" constantly (or at least on every clock edge). While in software, one statement at one place is executing at any given moment. Very different!

    --- Quote End ---

    Yes. Which is why HDL works like a schematic. As long as you follow the basic templates you can code everything from a single AND gate all the way up to a massive 64-bit CPU (how do you think CPUs are designed now?). Synthesis tools will even try to convert behavioural code, and do quite a good job.
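    For example (a rough, untested sketch), nearly everything reduces to two templates - a continuous assignment for combinational logic, and a clocked always block for registers:

    // combinational: an AND gate is a continuous assignment
    module and2 (input a, input b, output y);
      assign y = a & b;
    endmodule

    // sequential: a register is a clocked always block
    module dff (input clk, input d, output reg q);
      always @(posedge clk) q <= d;
    endmodule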

    --- Quote Start ---

    I've got to assume that EVERY wire in HDL, no matter how trivial, must have a name. Otherwise it couldn't get from one place to another. While on a schematic, everything close-by or related is simply connected by a line (a wire) or a bus (a bundle of 8, 16, 32, 64 ordered wires).

    This must create a massive overabundance of names!

    --- Quote End ---

    You can create busses in HDL. You just create vectors and arrays in Verilog. In VHDL you can create record types for more complex busses:

    
    type my_bus_t is record
      addr   : std_logic_vector(13 downto 0);
      data   : std_logic_vector(31 downto 0);
      enable : std_logic;
    end record my_bus_t;

    and then at the port level:

    port (
      my_input_bus : in my_bus_t;
      ...
    );
    

    And that encapsulates multiple busses within a single name (and you can still access individual elements).

    And I'm sure Verilog has something similar (it has vectors and arrays at least).
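    (I gather SystemVerilog's equivalent is the packed struct - I don't use it myself, so treat this as an untested sketch:)

    typedef struct packed {
      logic [13:0] addr;
      logic [31:0] data;
      logic        enable;
    } my_bus_t;

    module my_block (
      input my_bus_t my_input_bus
    );
      // individual elements are still accessible, e.g. my_input_bus.addr
    endmodule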


    --- Quote Start ---

    And so my question is this. Do you know any book or tutorials or information sources that approach verilog in this bottom-up manner?

    --- Quote End ---

    There are many tutorials online, and they should all teach you the language through the basic elements.

    As a VHDL person I don't know any good Verilog books, but I'm sure there are people here who can recommend some.
  • Altera_Forum
    Honored Contributor

    There's no question that schematics have some advantages, and easy visualization and understanding is top of the list for me. I'm always looking at other people's code, and find it painful to get up to speed on HDL. But as a designer working with your own code (and maybe instantiating other people's), you usually have a good idea what all the names are, and I don't think it's as big a deal. I find schematics nicely show flow and how things connect, but are often lacking in:

    - Complex synthesis. Something as simple as a state machine can be a pain in schematic, but HDL handles it nicely. There are lots of structures that take some mental debugging of the logic to figure out what they do, where HDL is much easier to read and understand. It won't seem that way at first, but you'll get there.

    - Parameterization. This isn't important in all projects, but being able to parameterize easily is a huge benefit. This is not only making busses wider/smaller, but adding/modifying functionality, etc.
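    To illustrate both points with a rough (untested, hypothetical) sketch: a small state machine with a parameterized delay is a few lines of HDL, where the schematic version would need redrawing every time the count width changed:

    // wait a parameterized number of cycles after 'start'
    module delay_fsm #(parameter CYCLES = 16) (
      input      clk, rst, start,
      output reg busy
    );
      reg [$clog2(CYCLES+1)-1:0] count;
      always @(posedge clk) begin
        if (rst) begin
          busy  <= 1'b0;
          count <= 0;
        end else if (!busy && start) begin
          busy  <= 1'b1;
          count <= CYCLES;
        end else if (busy) begin
          count <= count - 1'b1;
          if (count == 1) busy <= 1'b0;
        end
      end
    endmodule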

    I'm sure I'm just scratching the surface. You're going to lose some advantages of schematics but hopefully gain other advantages with HDL.