Generic adder "inference architecture": simulation error

Posted: 2015-05-03 23:08:16

Tags: vhdl xilinx modelsim inference

So, I have to create a generic N-bit adder with carry in and carry out. So far I have written two fully working architectures: one using a generate statement, and one using the RTL description shown below.

The entity:

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity adder_n is
generic (N: integer:=8);
port (
    a,b: in std_logic_vector(0 to N-1);
    cin: in std_logic;
    s: out std_logic_vector(0 to N-1);
    cout: out std_logic);
end adder_n;

Architectures 1 and 2:

    --STRUCT
architecture struct of adder_n is
    component f_adder
        port (
            a,b,cin: in std_logic;
            s,cout: out std_logic);
    end component;
signal c: std_logic_vector(0 to N);
begin
    c(0)<=cin;
    cout<=c(N);
    adders: for k in 0 to N-1 generate
        A1: f_adder port map(a(k),b(k),c(k),s(k),c(k+1));
    end generate adders;
end struct;
--END STRUCT
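
The `f_adder` component instantiated by the struct architecture is not shown in the post. A minimal full adder matching the component declaration might look like this (an assumed implementation, not part of the original question):

```vhdl
library ieee;
use ieee.std_logic_1164.all;

-- Assumed one-bit full adder matching the f_adder component declaration.
entity f_adder is
    port (
        a,b,cin: in std_logic;
        s,cout: out std_logic);
end f_adder;

architecture rtl of f_adder is
begin
    s    <= a xor b xor cin;                  -- sum bit
    cout <= (a and b) or (cin and (a xor b)); -- carry out
end rtl;
```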

architecture rtl of adder_n is
    signal c: std_logic_vector(1 to N);
begin
    s<=(a xor b) xor (cin&c(1 to N-1));
    c<=((a or b) and (cin&c(1 to N-1))) or (a and b);
    cout<=c(N);
end rtl;

Now, my problem shows up in the third architecture, where I try to infer the adder. Even though the architecture below compiles fine, I get a simulation error in ModelSim when I try to simulate it (attached at the end of this post). My guess is that something is wrong with my use of the numeric_std definitions. I am trying to avoid the arith libraries, and I am still getting used to the IEEE standard. Any ideas are welcome! Thanks!

The inference architecture:

--INFERENCE

architecture inference of adder_n is
    signal tmp: std_logic_vector(0 to N);
    signal atmp, btmp, ctmp, add_all : integer :=0;
    signal cin_usgn: std_logic_vector(0 downto 0);
    signal U: unsigned(0 to N);
begin

    atmp <= to_integer(unsigned(a));
    btmp <= to_integer(unsigned(b));
    cin_usgn(0) <= cin;
    ctmp <= to_integer(unsigned(cin_usgn));


    add_all <= (atmp + btmp + ctmp);
    U <= to_unsigned(add_all,N);

    tmp <= std_logic_vector(U);
    s <= tmp(0 to N-1);
    cout <= tmp(N); 
end inference;

-- END

Simulation error:

    # Cannot continue because of fatal error.
    # HDL call sequence:
    # Stopped at C:/altera/14.1/modelsim_ase/test1_simon/adder_inference.vhd 58 Architecture inference

1 Answer:

Answer 0 (score: 1):

The length of U is N + 1 (0 to N).

Changing

    U <= to_unsigned(add_all,N);

to

    U <= to_unsigned(add_all,N+1);

will prevent the length mismatch between the left-hand and right-hand sides of the signal assignment in architecture inference of adder_n.

The second argument passed to to_unsigned specifies the length of the result, so it must match the length of the assignment target.
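
As an aside, the same adder can be inferred without the round trip through integers by doing the addition directly on unsigned values. The sketch below is one possible alternative, not part of the original answer; `resize` widens one operand to N+1 bits so the carry out is kept, and `unsigned'("" & cin)` is a common idiom for turning a single std_logic into a one-bit unsigned. Note that, as in the question's code, the ascending port range means a(0) is treated as the most significant bit by `unsigned(a)`.

```vhdl
architecture inference2 of adder_n is
    -- N+1 bits: top bit holds the carry out.
    signal sum: unsigned(N downto 0);
begin
    -- numeric_std's "+" extends the shorter operand to the longer one's
    -- length, so resizing a single operand to N+1 bits is enough.
    sum  <= resize(unsigned(a), N+1) + unsigned(b) + unsigned'("" & cin);
    s    <= std_logic_vector(sum(N-1 downto 0));
    cout <= sum(N);
end inference2;
```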