Re: Email from Intel Identity Consolidation - real or PHISHING ?

The issue of the "experience" of logging into the forum is secondary. The MAIN issue, which you have not addressed, is the validity of the email. I still have NO CONFIRMATION from Intel that this is a genuine Intel email. I will also point out, with some annoyance, that Intel Twitter support GHOSTED me when I reported this issue.

If the email is a PHISHING email, then there has been a leak of customer information from Intel. If the email is a genuine email from Intel (which I now lean towards), then using URLs which have every appearance of PHISHING-type URLs is an act of gross stupidity, and the person responsible needs to get the sack. Given that this email purports to be about SECURITY, it is IRONIC that there is no way a receiver of this email can verify that it really was sent by Intel.

I hope that anyone else receiving this email realises that it is VERY DANGEROUS to click on any links/URLs found in suspicious emails. There is no reason the links in this email should not use the "intel.com" domain, which would give confidence that they are genuine. It is well known that PHISHING-style emails contain one or two DANGEROUS links, while other links, such as "Contact Us" or "Privacy", carry genuine URLs pointing back to the company being imitated.

Due to stupidity by IT departments, it is getting difficult to identify genuine emails/messages. I recently received a text to my phone, purporting to be from my medical practice, which when clicked on asked for my date of birth "to validate that it is me". Again, it was likely a genuine text, but there is no way I am entering those kinds of details without 100% certainty of the origin of the message.

Re: Email from Intel Identity Consolidation - real or PHISHING ?

Hi AK6DN. I recognise your "call sign" even though I am an infrequent visitor to this forum. I know that security is important, but as you say, it's only a user forum. I also lost my handle. When I logged in a few minutes ago: 1) I was sent an email to my registered email address with a PIN; 2) I was then sent another PIN to my mobile phone. At least I didn't have to remember my password! Then the website appeared to hang, and it was a minute or so before I could do anything. Not an ideal user experience, hi-hi.

Re: Email from Intel Identity Consolidation - real or PHISHING ?

Thank you Mark! I'm so relieved to hear that someone else thinks the same way as me: that the "Intel" email (yet to be confirmed, of course) fails all basic security rules. Sort of ironic, given the subject matter 🙂

Email from Intel Identity Consolidation - real or PHISHING ?

Sorry, I don't really know which forum area to post this in, so mods please move as required. I recently received an email sent to the (unique) email address I registered with Altera (now acquired by Intel, of course). Because the address is unique, I could confirm that it was the one I registered with Altera and NO-ONE else. The title in the graphic says "Security Matters". There are lots of URLs. The problem is that when I hover over them, not a single one includes "intel.com"; instead they are long, weird and wacky URLs, and to me it looks like a well-designed PHISHING email. There is no way whatsoever I am going to click on ANY of these links. One URL can be read and starts https://login.microsoftoneline.com/etc.... but when hovered over shows a different link.
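To illustrate the trick with made-up names (this is NOT the actual markup from the email, just the general shape of an HTML link):

```html
<!-- The text a reader sees is only the link's label; the href is what
     actually opens. Hovering reveals the href, hence the mismatch. -->
<a href="https://u12345.ct.tracker.example/ls/click?upn=XXXX">
  https://login.somecompany.com/secure/verify
</a>
```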
This is the CLASSIC phishing way of tricking the receiver into clicking an innocuous-looking link which hides the dangerous one. Why on earth would Intel show Microsoft in the FAKE URL??? I have made multiple Twitter postings, but the Intel "twits", having initially shown interest and asked for a screenshot, have now GHOSTED me.

For example, the "Intel Customer Support" link is https://u11087391.ct.sendgrid.net/ls/click?upn=wOR6BGsLK9U8rmZUpk1tg41q9hDJPfkor3lPBXBevchvfDNXv-2FCPqKgKvFyubldg3Jq3srXq3hnDxUeqhXNImv1OC7y-2Fk0jILoqELSLgSqKUusqHz08AiEC-2BfdmkzDpKVaaeNn_HoZvLqGpk4JfKQ7nktNDEd26lxI22jN3K1iHFzPKKM4OX2v6tqAuQUcfhgBtfixFKBFhP2NcTp7nlvc4ioII46nfS9TdW3kKCrLW7d-2Bh2r-2BX8ACc8OEP0fK5Ofva5ps2noTiwjKJalLECW0koG1LX6VqsGDHLZ63Pwyjev35utKVAvVNKzLXJltuH6jdeG-2FPHtIHfHGGKkj53h3kbZ-2BFeaqg-3D-3D Would you click on that link? NOTE: it has been mangled/changed for YOUR safety.

Now here's the kicker... In order to make this posting I had to log in. I was forced to go through some weird procedure to enhance security, which appears to tie in with the PHISHING email. So perhaps it is a genuine email? Having logged into the forum, it does appear that Intel have moved over to some unfriendly Microsoft-based system, which might well explain why "Microsoft" was seen in the fake URL (I say fake because, when hovered over, it reveals the true URL, which does not point to Microsoft).

If this IS a genuine email from Intel, then IMO the chief IT officer needs to get the sack for sending out an email full of super-dodgy links which are, to all intents and purposes, exactly the same format as you see in phishing emails. It's totally bonkers to send out an email about "SECURITY MATTERS" and then fill it with phishing-like URLs. We train ordinary (non-computer-literate) people to be VERY suspicious of URLs embedded in emails and, unless 100% confident, to NEVER click on them. We make a point of saying that if the URL revealed on hover does not match the one printed, this is a MASSIVE red flag: DO NOT CLICK ON THIS LINK.

If this is a phishing email, then Intel must have leaked my email address. If this is a genuine email from Intel, which I now lean towards having logged into this forum(!!!), then Intel should be ASHAMED of creating an email which has all the hallmarks of a PHISHING email.

Re: Any way to get around "Transmitter ... exceeds the maximum allowed data rate ..."?

Just as a quick test I edited the PLL to generate 50 and 250 MHz (for the DDIO IP) instead of 75 and 375 MHz. This drops the data rate to 500 Mbps, so the ERROR message is not triggered. My Dell 2408WFP displays a picture and informs me that it is 1280x720 @ 34Hz. So the flexible Dell can handle this very non-standard rate, but I am not expecting it to work on my UK 50Hz HD TV! It does appear that the timing tool is clever enough to figure out the PLL outputs and that they drive the DDIO clocks. Is there any way to define something in the SDC to fool the timing tool into thinking that the DDIO clocks are lower frequencies? BTW my SDC knowledge is pretty basic, and any time I try to define something on an internal signal I get an error :-(.
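The sort of thing I have been trying to write is roughly this (the hierarchical pin names are guesses based on my own project, and I don't know whether the data-rate check even reads the SDC rather than the PLL megafunction settings):

```tcl
# Base clock: the 50 MHz crystal driving the PLL input pin
create_clock -name xtal_50m -period 20.000 [get_ports clk50]

# Instead of calling derive_pll_clocks, describe the PLL outputs by hand
# and "lie" that the fast DDIO clock is 250 MHz (x5) rather than 375 MHz
create_generated_clock -name ddio_slow_clk -source [get_ports clk50] \
    -divide_by 2 [get_pins {pll_inst|altpll_component|pll|clk[1]}]
create_generated_clock -name ddio_fast_clk -source [get_ports clk50] \
    -multiply_by 5 [get_pins {pll_inst|altpll_component|pll|clk[2]}]
```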
Re: Any way to get around "Transmitter ... exceeds the maximum allowed data rate ..."?

Thank you for taking the time to reply, but I have no idea what you are suggesting. I just re-visited this project today and tried compiling the code for 720p @ 50Hz over HDMI, and I get the same ERROR message. Is there a way to downgrade an error to a warning?

Re: Connection Setup for JTAG - simplification

This really got me thinking. FPGA pins tend to be limited, so it's not great if you have to dedicate pins to, for example, JTAG, as that's four fewer I/Os available. So there is the JTAGEN pin, which when high forces four I/O pins to be JTAG (TDI, TDO, TMS, TCK). But the JTAGEN pin can also be used as user I/O. Or, if JTAGEN=0, the four "JTAG" I/Os can be used as normal I/O. There is a config bit to "Enable JTAG pin sharing". I wonder what this does internally?

If, as in your case, you are using JTAGEN as user I/O, do you need to temporarily force JTAGEN high to use the JTAG pins, or is this unnecessary? Possibly NO, but only if JTAG pin sharing is disabled??? If JTAG pin sharing is enabled, then for sure you need to force JTAGEN high during programming in order for the JTAG pins to work.

What I was working towards is that during "mission mode" operation, if the JTAG pins were to toggle, then the TAP (the internal block controlled by the JTAG pins) would operate and could possibly corrupt the programmed state of the device. I once used a chip where it was recommended (i.e. you must do it) to pull the JTAG reset pin (TRST, which many chips have) low to hold the TAP in reset. The concern was that unintentional toggling of TCK and the other JTAG pins could cause the device to go into test mode! Floating pins can in theory see false data, depending on whether there are wires close to other toggling wires (cross-coupling).

So I am now thinking that if the "JTAG pin sharing" config bit is set to disabled, the JTAGEN pin is totally disconnected from the internal circuitry. Also, if the "JTAG pin sharing" config bit is enabled, it likely drives the selects of four muxes, where the "A" inputs connect the pads to internal user I/O and the "B" inputs connect the pads to the internal JTAG circuitry. More assumptions: if "JTAG pin sharing" is enabled and JTAGEN is low (for user I/O pins), then the internal connections to the TAP (from the "JTAG" pads) will be forced to a known safe value (probably '0'). => So IF you have the above scenario, I suspect that you can get away with no external pull-ups/downs on the JTAG pins, since it will be managed internally (see the sketch at the end of this post).

"The JTAG pin sharing is disabled, so i will use JTAGEN pin as usual USER I/O pin. Can i in this case leave all JTAG pins floating, so no pull up/down resistor on JTAG lines are required?"

So in this case I agree that you need to use pull-ups/downs on the dedicated JTAG pins, pulling them to a known inactive state, to avoid any operation of the TAP during "mission mode".
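To illustrate the structure I am imagining, here is a toy Verilog model. This is PURE SPECULATION on my part, not taken from any Intel documentation, and all the names are made up:

```verilog
// Toy model of how "JTAG pin sharing" might work internally.
// Pure speculation -- not from any Intel documentation.
module jtag_share_model (
    input  wire jtagen_pad,                // JTAGEN pin from the board
    input  wire share_en_cfg,              // "Enable JTAG pin sharing" config bit
    input  wire tdi_pad, tms_pad, tck_pad, // the shared pads
    output wire tap_tdi, tap_tms, tap_tck, // to the internal TAP
    output wire usr_tdi, usr_tms, usr_tck  // to the user fabric
);
    // JTAGEN only reaches the mux selects when sharing is enabled
    wire jtag_sel = share_en_cfg & jtagen_pad;

    // When JTAG is not selected, park the TAP inputs at safe levels:
    // a static TCK means the TAP never clocks, and TMS held high would
    // walk it into Test-Logic-Reset even if stray clock edges got through.
    assign tap_tck = jtag_sel ? tck_pad : 1'b0;
    assign tap_tms = jtag_sel ? tms_pad : 1'b1;
    assign tap_tdi = jtag_sel ? tdi_pad : 1'b1;

    // The user fabric sees the pads only when JTAG is not selected
    assign usr_tck = jtag_sel ? 1'b0 : tck_pad;
    assign usr_tms = jtag_sel ? 1'b0 : tms_pad;
    assign usr_tdi = jtag_sel ? 1'b0 : tdi_pad;
endmodule
```

If the real silicon does anything like this, then with sharing enabled and JTAGEN low the TAP can never see a clock edge, which is why I suspect the external pull resistors become unnecessary in that one configuration.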
Any way to get around "Transmitter ... exceeds the maximum allowed data rate ..."?

I have a board with a 208-pin QFP Cyclone II (EP2C5). Four differential pairs are connected to an HDMI connector and four more pairs to a DVI-D connector. This is for experimentation with video generation. I have ported some code which works on a competitor's FPGA and adjusted it to work with the Megafunctions available from Altera/Intel. Specifically, the competitor's FPGA has a built-in SERDES block, which I have re-implemented using the DDIO_OUT Megafunction (sketched at the end of this post). Now, I realise that the megafunction is implemented in regular fabric rather than a dedicated hardware block, so it is unlikely to be as fast as dedicated hardware, and its timing is compile/layout critical. After figuring out that the data into the DDIO_OUT needed to be reversed, I finally got video on my Dell monitor.

This is running at 480p60, and both the HDMI and DVI-D connectors are operational (good to know that I wired them up correctly!). I use a PLL to generate 25 MHz and 125 MHz for the DDIO_OUT, which gives a data rate of 250 Mbps.

For fun and experimentation I would like to try other video resolutions, particularly 720p60. I re-generated the PLL to produce 75 MHz and 375 MHz, for a data rate of 750 Mbps. During compilation I get the following message:

"Error (176058): The Transmitter driving I/O pin DVID_CH0P at data rate 750 Mbps exceeds the maximum allowed data rate of 550 Mbps"

It's an ERROR, not a warning, so I can't even try it. Yes, I understand it is there for good reason, as the circuit may fail to operate at the higher speed, but I want to try it anyway. BTW, the competitor's FPGA can actually be overclocked (above its Mbps rating) to work at 1080p60. So yes, it's a risk, but it shouldn't blow the chip up! Is there a workaround?

I have a 50MHz XTAL which goes to the PLL, whose two outputs (c1 and c2) connect directly to the DDIO_OUT block. I seem to remember that when I generated the DDIO_OUT, I told it about the expected data rate. So I tried to fool the compiler: in the SDC file I told it that the input clock (XTAL) was only 25MHz. However, I was forced to re-implement the PLL (the original had been generated having been told it was getting a 50MHz input). When I downloaded and ran the "code", I got a picture which the Dell informed me was 720p40(!). Checking the slower PLL output (on a test pin), it was only 66MHz and not 75MHz. So a PLL designed for a 25MHz clock did not work correctly with a 50MHz clock. You might say that this is no surprise, but a PLL is really just an input clock "multiplier", and this trick can work with some more flexible PLLs.

I don't know how the compiler figures out the data rate. I really want to lie about the clock frequencies going into the DDIO_OUT. I tried to add some internal clock definitions to the SDC, but failed miserably; I just can't get my head around the syntax. Clearly the compiler is very clever(!): I tell it about the PLL input clock (from the SDC), it figures out the PLL output frequencies, it knows that the DDIO_OUT is driven directly from the PLL, and from that it calculates the data rate. I've thought about having two PLLs with a mux on the outputs, controlled by an external pin, so the compiler can't deduce the clock frequencies reaching the DDIO_OUT, but it seems horribly contrived. There must be a better way.

NOTE: I accept all responsibility for anything bad that happens when I overclock the DDIO_OUT by 20%! It's my own hardware design.
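As promised above, here is roughly how my DDIO_OUT-based serializer hangs together. This is a from-memory sketch, not a copy-paste of my working code: the wrapper name and port list are whatever the MegaWizard generated for me, and the load/shift phasing assumes the PLL keeps the 1x and 5x clocks phase-aligned.

```verilog
// 10:1 TMDS serializer built around a 1-bit DDIO_OUT (from-memory sketch).
// Two bits leave per fast-clock cycle, so five fast cycles per 10-bit
// word: 75 MHz pixel clock, 375 MHz fast clock -> 750 Mbps per channel.
module tmds_serializer (
    input  wire       clk_5x,     // 5x pixel clock from the PLL
    input  wire [9:0] tmds_word,  // encoded TMDS word (pixel-clock domain)
    output wire       serial_out  // to the differential output pin
);
    reg [9:0] shift;
    reg [2:0] phase;

    always @(posedge clk_5x) begin
        if (phase == 3'd4) begin
            phase <= 3'd0;
            shift <= tmds_word;           // load the next 10-bit TMDS word
        end else begin
            phase <= phase + 3'd1;
            shift <= {2'b00, shift[9:2]}; // shift two bits towards the LSB
        end
    end

    // TMDS is sent LSB first. Which of datain_h/datain_l hits the wire
    // first is exactly where my "reversed data" problem came from -- check!
    ddio_out u_ddio (                     // MegaWizard-generated wrapper
        .outclock (clk_5x),
        .datain_h (shift[0]),
        .datain_l (shift[1]),
        .dataout  (serial_out)
    );
endmodule
```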
regards... --migry

Re: Arria 10 LVDS transmitter IP Core needs more documentation

As a general comment, I personally find the Altera/Intel documentation hard going. It would really help if there were more block diagrams. I recently used the ALTLVDS on the Cyclone II. The circuit didn't work; I finally got it working by reversing the data inputs. The code was ported from a competitor's FPGA. I read the ALTLVDS documentation several times: there are lots of words but, at least for me, a lack of clarity and a lack of nice block diagrams.

Re: State machine crashes (Cyclone II) - no idea why. How do I debug?

Hello @RSree, thanks for your reply, but the problem has been solved. It was not caused by a bad power supply to the Cyclone II; it was caused by my bad RTL coding. I just could not understand why the state machine was crashing, as I had reviewed *my* code again and again. As a consequence, I started NOT to trust my circuit/PCB hardware implementation, even though I used plenty of decoupling capacitors, and bulk caps too. But when I scoped the 1.2V core supply, it was clean in the area where the crash was happening (I added the 1.2V trace to the conditions pictured above). I think I was "clutching at straws", and bad power was the best idea I could come up with. Thankfully I was wrong!

Once the race condition caused by un-resynchronised inputs from other clock domains was clearly explained, the penny dropped. I changed the RTL to re-synchronise all such inputs using standard "shift register" techniques (see the EDIT below). My (fixed) RTL is now solid and works perfectly. I am very relieved and, very importantly, I am also confident that my hardware design using the Cyclone II can be trusted! We learn from our mistakes, and I am actually pleased to have gained a better understanding of how to code robust state machines. I have gone through all my RTL and made sure that I have fixed the same problem everywhere else in my code.

regards... --migry
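EDIT: for anyone finding this thread later, the "shift register" fix boils down to something like this (a minimal sketch; the real signal names in my code differ):

```verilog
// Standard two-flop synchroniser: one instance per signal crossing into
// the state machine's clock domain.
module sync2 (
    input  wire clk,       // destination (state machine) clock
    input  wire async_in,  // signal arriving from another clock domain
    output reg  sync_out   // safe for the state machine to sample
);
    reg meta;              // first stage, allowed to go metastable

    always @(posedge clk) begin
        meta     <= async_in;
        sync_out <= meta;  // second stage gives it a cycle to settle
    end
endmodule
```

With every input sampled through one of these, all the state machine's inputs change synchronously to its own clock, so the next-state decode can no longer race.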